OpenAI scrapped a promise to disclose key documents to the public (wired.com)
614 points by nickthegreek on Jan 24, 2024 | 300 comments


Unsurprising but disappointing nonetheless. Let’s just try to learn from it.

It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic and xAI (the new Musk one) all have a funky governance structure because they want to be a public good. The challenge is that once any of these (or others) gains enough traction to be seen as having a good chance at reaping billions in profits, things change.

And it’s not just AI companies and this isn’t new. This is part of human nature and will always be.

We should be putting more emphasis and attention on truly open AI models (open training data, training source code & hyperparameters, model source code, weights) so the benefits of AI accrue to the public and not just a few companies.

[edit - eliminated specific company mentions]


The botched firing of Sam Altman proves that fancy governance structures are little more than paper shields against the market.

Whatever has been written can be unwritten and if that fails, just start a new company with the same employees.


> The botched firing of Sam Altman proves that fancy governance structures are little more than paper shields against the market.

The things I saw didn't make any sense, so I can't say that it proves anything other than the existence of hidden information.

The board fired him, and they chose a replacement. The replacement sided with Altman. This repeated several times. The board was (reportedly) OK with closing down the entire business on the grounds of their charter.

Why didn't the board do that? And why did their chosen replacements, not individually but all of them in sequence, side with the person they fired?

My only guess is the board was blackmailed. It's just a guess — it's the only thing I can think of that fits the facts, and I'm well aware that this may be a failure of imagination on my part, and want to emphasise that this shouldn't be construed as anything more than a low-confidence guess by someone who has only seen the same news as everyone else.


You obviously have no experience with non-profit governance. OpenAI is organized as a public charity which is required to have an independent board. Due to people leaving the board, they were down to six members, three independent directors plus Sam and two of his employees. They had been struggling to add more board members because Sam and the independent directors couldn't agree on who to add. Then Sam concocted an excuse to remove one of the independent directors and lied to board members about his discussions with other board members.

I think they had no choice at that point but to fire Sam and remove him from the board. When that turned into a shitshow and they faced personal threats, they resigned to let a new board figure out a way out of this mess.

Also, I am not surprised the new board isn't being completely open because they are still probably trying to figure out how to fix their governance problems.


> You obviously have no experience with non-profit governance.

Correct!

> I think they had no choice at that point but to fire Sam and remove him from the board. When that turned into a shitshow and they faced personal threats, they resigned to let a new board figure out a way out of this mess.

As someone with no experience with non-profit governance, this does not seem consistent with (1) the fact that they didn't just say that, and (2) the fact that none of their own choices for replacement CEO were willing to go along with it, and that this happened with several replacements in a row.

For (1) I'd be willing to just assume good faith on their part, even though it seems odd; but (2) is the one which seems extremely weird, to the point that I find myself unable to reconcile it.

It would also not be compatible with the reports they were willing to close the company on grounds of it being a danger to humanity, but I'm not sure how reliable that news story was.


Yes, ideally you would have a succession plan and a statement reviewed by lawyers, but in this case, you had a deadlocked board that suddenly had a majority to act and did so in the moment. If they had waited, they would have probably lost the opportunity because Ilya Sutskever would have switched his vote again. But the end result is that Sam is off the board and that is the important thing.

Maybe you should explain your blackmail theory and we could see which idea makes the most sense.


Ok, I'll give it a go.

1. Some party, for some reason, wants to slow down AI development. There are many people motivated to do this. Assume one of them had means and opportunity.

2. The board members wake up to a malicious message ordering them to do ${something} "or ${secret} will be revealed!" (this ${something} could have been many things, so long as it happened to be compatible with firing Altman).

3. The board fires Altman.

4. The board cannot reveal the true reason why they fired Altman, because that would reveal the thing(s) they're being blackmailed over, so they have to make up a different excuse to give to the CEOs they've named as a replacement. As this is done in a hurry under high stress, the story the board arrives at is fundamentally not very good.

5. The replacement CEO does not buy the story given by the board because it's not very well thought-out, and sides with Altman. This repeats a few times.

6. When it becomes clear the board is not capable of winning this battle, because none of the CEOs they hire will carry out their orders, the blackmailer becomes convinced there's no point even trying to hold the board to this threat (there doesn't need to be communication between the board and the blackmailer for this to work, but it's not ruled out either).

While it does seem to fit the observables, I do want to again emphasise that I don't put high probability on this scenario — it's just marginally less improbable than the other ones I've heard, which is a really low bar because none made sense.


That's wild. I think we will eventually hear more of the story.


I think we're on the same page. More from the board members specifically is most likely to falsify my hypothesis, as they would be unlikely to speak at all if this is correct; more from the interim CEOs may falsify or be compatible with my hypothesis.


Because at some point, the plurality of employees do not subordinate their personal desires to the organizational desires.

The only organizations for which that is a persistent requirement are typically things like priesthoods.


The plurality of employees are not the innovators that made the breakthrough possible in the first place.

People are not interchangeable.

Most employees may have bills to pay, and will follow the money. The ones that matter most would have different motivations.

Of course, if your sole goal is to create a husk that milks the achievement of the original team as long as it lasts and nothing else — sure, you can do that.

But the "organizational desires" are still desires of people in the organization. And if those people are the ducks that lay the golden eggs, it might not be the smartest move to ignore them to prioritize the desires of the market for those eggs.

The market is all too happy to kill the ducks if it means more, cheaper eggs today.

Which is, as the adage goes, why we can't have the good things.


> Most employees may have bills to pay, and will follow the money.

It always rubs me the wrong way when people justify going for more money as "having bills to pay". No they don't; this makes it seem as if they're down on their luck and have to hustle to pay bills, which is far from reality. I am not shaming people for wanting more money, of course, but after a certain threshold, framing it as an external necessity is dishonest.


>It always rubs me the wrong way when people justify going for more money as "having bills to pay". No they don't; this makes it seem as if they're down on their luck and have to hustle to pay bills, which is far from reality.

What reality do you live in?

I'm a software engineer with Google on my resume (among others); my wife is a software engineer in the chipmaking industry; we both have PhDs and work in Silicon Valley, and have no children.

We work because we have bills to pay. We can't afford to not work. Our largest expenses are still housing, groceries, transportation, medical, etc. - i.e., bills.

We are paying a mortgage on a 3B townhouse, which is also our home office, and where my mother-in-law is living too as a war refugee from Kyiv, Ukraine. I'm helping my mother with her bills too (she's renting a studio in San Diego).

When I don't work, our savings start draining.

It would be nice to get to the point where paying the bills is not something I ever think about. But we haven't reached that threshold.

Neither have most of our friends (also engineers with PhDs). I haven't spoken to my friend in OpenAI in a while, so I hope they've crossed that threshold; but it's not something I know for sure.


The problem is that you are equating "bills to pay" with living paycheck to paycheck at the minimum level.

It is a metaphor: they are still working class. You can earn $500k-1M/year in salary and be working class. Your monthly expenses may be greater than your salary, and you need to keep working to maintain the same QOL.


This is absurd and totally out of touch with reality

I live in an exurb of DC, in one of the highest cost-of-living areas with one of the highest median incomes in the world.

I have 3 kids who are all in middle and early high school (the most expensive time) and a mortgage, and I literally just did the math on what my MINIMUM income would need to be in order to maintain an extremely comfortable lifestyle: it’s between $80-100k a year.

Anyone making more than ~$100k a year isn’t living paycheck to paycheck unless they are spending way beyond their means - which is actually most people.


Yeah we agree here, but the problem lies with the team

If you hire people who want to cash out then you’ll get people who prioritize prospects for cashing out

Said another way, they did not focus on the theoretical public mission enough that it was core to the everyday being of the organization, much like it is for Medicins San Frontiers etc.


Most of the people they hired were brought on to work for OpenAI.com, which was a pure profit-driven tech company just like any other (and funded by Microsoft). Those who joined the original OpenAI (including its independent board members) were driven by different motivations, more in line with research and discovery.


This is my point

In a political direct action context a really effective way to take over an organization from the inside is called “salting.”

I believe that’s what Altman very effectively did, and while a few people called it out at the time, Altman was able to realign the org by amplifying and then exploiting everyone’s greed.


Medecins Sans Frontieres


Médecins Sans Frontières


I wonder if your lesson is "Sam Altman should/would have been fired but for market forces".


The lesson is that "should have been fired" was believed by the people who had power on paper; "should not have been fired" was believed by the people who actually had power.


That just simplifies things a hair too much. Remember, the people who worked at OpenAI, subject to market forces, also supported the return of Altman.

Market forces are broad and operate at every level of power, hard and soft.


> Remember, the people who worked at OpenAI, subject to market forces, also supported the return of Altman.

I believe that's what your parent comment was actually talking about. I read it as saying that the people with power on paper were the previous board, and the people actually in power were the employees (which, by the way, is an interesting inversion of how it usually is).


> the people who worked at OpenAI, subject to market forces, also supported the return of Altman.

that's because most of those people did not work for the mission-focused parent OpenAI company (which the board oversaw) but for its highly-profit-driven, subservient-to-Microsoft child company (and they were happy to jump to Microsoft if their jobs were threatened; no ding against them, as they hadn't signed up to the original mission-driven company in the first place).

it's important to separate the two entities in order to properly understand the scenario here


I don't know whether Sam should be fired because no one has published the reason that he was in the first place.

All I know is that the first time the authority of the board was used to make an unpopular decision, in what the board members presumably thought was the interest of protecting the values of OpenAI, there was an insurrection that succeeded in having the board reverse the decision. The board exists to make unpopular decisions that further the core mission of OpenAI as opposed to furthering the bottom line. But the second that the core mission and the bottom line came into conflict with one another, it became clear which one actually controls OpenAI.


>>>"The botched firing of Sam Altman proves that fancy governance structures are little more than paper shields against the _market_."

-

...Or rather ( $ ) . ( $ ) immediate hindsight eyes...


    λ> :t ( $ ) . ( $ )
    ( $ ) . ( $ ) :: (a -> b) -> a -> b
I really didn't expect the type to be simple, but in hindsight, it's obvious.
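For the curious, here's a quick GHCi sanity check (a sketch; it assumes nothing beyond a stock Prelude): since ($) f is just f up to eta reduction, (($) . ($)) h reduces to ($) (($) h) = h, so the whole composition collapses back to plain function application:

    λ> (($) . ($)) (+1) 2
    3
    λ> (($) . ($)) show [1,2,3]
    "[1,2,3]"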


Is this a boob joke or a money joke?


A haskell one


For great justice!


"Cease quoting bylaws to those of us with yachts"


It was botched because the public was too stupid to see how much of a snake Sam Altman is. He was fired from Y Combinator and people were still universally supporting him on HN.

If people hated him he would've been dropped. Microsoft and everybody else only moved forward because they knew they wouldn't get public backlash. It seems everyone fails to remember their own mob mentality. People here on HN were practically worshipping the guy.

Statistically most people commenting here right now were NOT supporting his firing and now you've all flipped and are saying stuff like: "yeah he should've been fired." Seriously?

I don't blame the governance. They tried their best. It's the public that screwed up. (Very likely to be YOU, dear reader)

Without public support the leadership literally only had enemies at every angle and they have nowhere to turn. Imagine what that must have felt like for those members of the board. Powerful corporations threatening aspects of their livelihoods (of course this happened, you can't force a leader to voluntarily step down without some form of a serious threat) and the entire world hating on them for doing such a "stupid" move as everyone thought of it at the time.

I'm ashamed at humanity. I look at this thread and I'm seriously thinking, what in the fuck? It's like everyone forgot what they were doing. And they still twist it to blame them as if they weren't "powerful" enough to stop it. Are you kidding?


It's a mistake to claim that Altman has/had universal support in here. I'm neutral towards him for example, and in all this firing minidrama the only thing I was interested in was to learn the motives of his firing.


For these types of things there are of course alternative opinions. It's like measuring voting... no candidate literally gets 100 percent of the vote.

But to characterize it as anything other than overwhelming support to reinstate Sam Altman would be false.

Hence why I said most people (key word: most) are just flip flopping with the background trends. Mob mentality.


Genuine question, what did he do that was so unforgivable? If it's so obvious, you should be able to list what happened in an unambiguous way.


We can start with the crypto scam that he’s now trying to pivot to the AI space as the “solution” to the problem he created.

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...


I can't. Even the company policy being talked about in this thread is ambiguous. That's the problem.

He was fired from Y Combinator and the entire board wanted to fire his ass too.

Therefore by this logic he should have universal support for reinstatement and the entire board should be vilified? Makes no sense. But this is exactly the direction of the Mob and was the general reaction on HN.

It was ambiguous whether Sam was a problem, and that ambiguity made careful treatment and investigation warranted. The proper reaction is: "wtf is going on? Let's find out." Instead what he got was hero worship. The public was literally condemning the board and worshipping the guy with no evidence.

And now, with even more ambiguous and suspicious facts, everyone here is suddenly "level headed." Yeah, that's bs. What's going on is that mostly everyone here is just going with the flow, adopting the background trends and sentiments. Mob mentality all the way.


Yes, I see. I wrote ambiguously myself, I meant to say what justifies calling him a snake. I assume that it was past incidents pre-OpenAI involving ycomb and other things? I understand that you feel the mob mentality is unfair and overwhelming, so please don't keep retreading that.


Insider info. I know people who know the people who terminated Sam Altman from Y Combinator. In general there's no solid evidence about Sam's character on the surface, but you can glean details. There are other people on HN who know of his character as well. Maybe you can find them when sifting through the posts.

It's like Trump. Is Trump really a snake? Depends on who you ask, but there's controversy around Trump, and in direct parallel there's controversy about Sam as well.


https://www.investopedia.com/terms/s/self-dealing.asp

In lesser known places such as Wall St, practices like self dealing are considered illegal. In venture they’re often celebrated. Go figure.

I think there’s a general distaste towards setting up networks of companies B, C, D, planning to profit from a success of another company A where a single person controls all the companies and there’s a reasonable expectation of plans to divert business from A towards B, C, D.

I don’t know the details but there seems to be some gripe about it. I’m speculating.


> I don't blame the governance. They tried their best. It's the public that screwed up.

Yes, the public was to blame, but the board was inherently doomed because OpenAI had morphed from a public-benefit, mission-driven company into a profit-driven one heavily funded by MSFT, which expected its due return. Once the for-profit "sub"-company existed, the parent mission-driven company was doomed, because most people were going to be working for the profit-driven child company and therefore had different goals (i.e., salary, options) than the mission-driven parent company (scientific breakthroughs, responsible creation/development of AI). This is why the employees (of the child company) revolted and supported Sam (who had reneged on OpenAI's mission and gone full capitalist like any other tech mogul out there). The only question in my mind was whether "Open"AI was always a scam from the start to get attention, or a genuine mission that pivoted (which the board unsuccessfully tried to stop).

And now responsible AI development is gone and everyone is chasing the money, just like the social media companies did, and well, we know how that ended up (Facebook, Twitter).

Sad day indeed.


I'm not sure why you attribute that as a shield against the market. That seemed much more like an open employee revolt. And I can't think of a governance structure that is going to stop 90% of your employees from saying, for example, we work for Sam Altman, not you idiots...


An employee revolt due to the market. The employees wanted to cash out in the secondary offering that Sam was setting up before the mess. It was in their (market) interest to get him back and get the deal on track.


"Employees care about their pay" is so reductive as to be meaningless. Any action any employee takes can be labelled as such.


Broad speculation


Yes, they wanted to work for Sam... because he was arranging a deal to give them liquidity and make them rich.

The board was not going to make them rich.


> And it’s not just AI companies and this isn’t new. This is part of human nature and will always be.

Blaming "human nature" is an excuse that is popular among egomaniacs, but on even brief inspection it is transparently thin: human nature includes plenty of non-profits and people who did great things for humanity for little or no gain (scientists, soldiers, public servants, even some software developers). It also includes people who have done horrible things.

Human nature really is that we have a choice. It's both a very old and fundamental part of human nature:

  And the serpent said unto the woman, Ye shall not surely die:

  For God doth know that in the day ye eat thereof, then your
    eyes shall be opened, and ye shall be as gods, knowing
    good and evil.

  And when the woman saw that the tree was good for food, and
    that it was pleasant to the eyes, and a tree to be
    desired to make one wise, she took of the fruit thereof,
    and did eat, and gave also unto her husband with her;
    and he did eat.

  And the eyes of them both were opened, and they knew that
    they were naked; and they sewed fig leaves together, and
    made themselves aprons.
That's the Tree of the Knowledge of Good and Evil, of course (Genesis 3). We know good and evil, we make our own choices; no blaming God or some outside force. If you do evil, it was your choice.


Since you seem to have this figured out and it's not just human nature, care to list everything that is good and everything that is evil?

Back to reality on this topic. There is nothing wrong with OpenAI employees voting to keep the company for profit and maximizing their own personal gains.

I don't see how this can be anything close to "Evil".


> There is nothing wrong with OpenAI employees voting to keep the company for profit and maximizing their own personal gains.

There is something wrong if it harms others. For example, if AI is a risk to other people outside the company, and their vote increases that risk, then it's wrong (depending on the amount of risk).

Maximizing personal gains, despite recent hype, does not at all make something right. In fact, it's possibly the leading cause of doing wrong.

> Back to reality

Maybe you can come up with some better ideas than just offhand dismissal of ideas that have been embraced, examined, and followed by a great many humans for thousands of years. That's reality.


Tangential topic, but I've been thinking about that part of the bible recently.

It makes no sense to me.

I don't mean that God, supposedly all good and all knowing, didn't know about the serpent and intervene at the time — despite Christian theology being monotheist, I think the original tales were polytheistic, and the deity of the Garden of Eden was never meant to have those attributes[0].

I mean why was it appropriate to punish them for something they did in a state of naivety, and which was, within the logic of the story, both prior to and the direct cause of gaining knowledge of the difference between good and evil? It's like your parents suing you to recover the cost of sending you to school.

[0] Further tangent: if they're all the same god, why did it take 6 days to make the world (well, cosmos) and all the things in it, but 40 days to flood the Earth to cleanse it of all human and animal life except for the ark? It's fine if they're different gods: a creator deity with all that cosmic power doesn't need to care so much about small details like good and evil, and a smaller and more personal god that does care about good and evil doesn't need to have such cosmic power.


Your first mistake (by trying to make sense) is reading the Bible as a historic book of records that actually happened.

The bible isn't a book by an author (Like the Quran claims to be). It is a mix/match of stories over long periods of time from different people. You read it as parables from the times, not as a history lesson.


> Your first mistake (by trying to make sense) is reading the Bible as a historic book of records that actually happened.

Why do you think I'm reading it like that? I thought me saying "nah, polytheism" might have been a hint that I don't take it at all literally.

Likewise that I was referring to the internal logic of the story.


> I was referring to the internal logic

That's the problem. I think the first question is interesting (and a fundamental theological question, similar to why God makes people 'harden their hearts' and do evil at times); it applies the Bible to the outside world.

The second question just seems purely internal - how does that affect our external reality?


> how does that affect our external reality?

It doesn't have to — I can say a plot item in Star Trek makes no sense just as easily.

That said, I guess I am curious what this story might have meant to be, at one time? How could it be reinterpreted in a way that isn't immediately self-defeating?

And I really don't get how people take this literally, given apparent contradictions like this, but biblical literalists are too alien to my world view for any explanation to really help me understand how they perceive things.


>> how does that affect our external reality?

> It doesn't have to — I can say a plot item in Star Trek makes no sense just as easily.

We can say anything we like, but my question is really, what does it matter? Internal consistency matters much more to Star Trek, an adventure and grist for geeking-out, than the Bible, which provides material to help us spiritually. The point of the Biblical story is, what can we learn?


> The point of the Biblical story is, what can we learn?

That Christians worship an unreasonable, malicious or mad, god with unreasonable standards. "Even when you were a gullible idiot and faced an influence I'd not accounted for despite being all knowing, I'm still going to punish you and all your offspring forever for what you did wrong, especially the woman and that's why childbirth hurts."

That, even as literature, what it shows about the human condition is that we absorb the vibes of a story without paying attention to details, and that just-so stories, written backwards from observables, don't need to make logical sense when read forwards in order to convince people.

Like I said, the differing world view is alien. I assume the same is true in reverse, and that True Believers (and perhaps even casual holiday-only believers) can't understand how I might not see things the way they do.


I'd say you are looking for problems rather than value, a form of critical reading appropriate to contracts, public affairs, etc. The Bible and similar texts are generally not contracts you need to accept or reject as a whole. They are not literal. If you look at them as literal and 'contracts', there are far more flaws than the ones you point out (including the sexist story I posted originally). They take a different form of critical reading:

They are sources of inspiration. Don't look for the flaws, look for the benefits. Imagine you go to an art museum or you play a computer game. Do you look for the worst paintings? Scour the museum for mistakes in the paintings? Do you read the game's code for bugs and poor coding practices? When you go to a bookstore, do you look for the worst book? What a waste of time that would be - you want the best, the most enjoyable and inspiring, not the worst.

> That Christians worship an unreasonable, malicious or mad, god with unreasonable standards. "Even when you were a gullible idiot and faced an influence I'd not accounted for despite being all knowing, I'm still going to punish you and all your offspring forever for what you did wrong, especially the woman and that's why childbirth hurts."

FWIW, that passage is part of all Abrahamic religions.


> They are not literal

For most, indeed. And that's good! But I have met people who claim to think they must be absolute truth, then put a huge asterisk around all the bits I point out and say those don't count for whatever reason.

There was a meme back in the UK, that amongst Anglicans, only extremists actually believe in God. No idea how true that is.

> Do you look for the worst paintings?

Only when they're put on a pedestal and held to be amazing. For example: https://en.wikipedia.org/wiki/The_Kiss_(Klimt)

Widely regarded as beautiful and romantic. To me, it looks like the guy has a broken neck, and the woman has been decapitated at the base of her neck, her head rotated 90° and re-attached to her torso by the ear.

Likewise, movies. The plot holes in Independence Day annoyed me so much that when the sequel came out, I started (and still have not finished) writing a book that takes the opposite road with all the mistakes the film made.

So, while I've played the Eye Of Argon game, I never even tried to finish reading it once my friends and I stopped playing, and I've never bothered watching whatever the film is that has the line "you're tearing me apart Lisa".

> FWIW, that passage is part of all Abrahamic religions.

https://en.wikipedia.org/wiki/Shashthi

https://en.wikipedia.org/wiki/Hera

I don't limit myself to just Abrahamic myth and legend :)


>That Christians worship an unreasonable, malicious or mad, god with unreasonable standards.

See now after a few rounds we see your real thoughts come out. I'm old enough to have had this thought and many more about God/Religion and why humans need it in their lives.

I don't have the time to go into it but perhaps as you get older and dig into this more it will start to make sense.


> See now after a few rounds we see your real thoughts come out.

It took you this long? I wasn't hiding anything.

> I'm old enough to have had this thought and many more about God/Religion and why humans need it in their lives.

> I don't have the time to go into it but perhaps as you get older and dig into this more it will start to make sense.

I was born and raised Catholic, then I found Wicca and realised that not all the gods and religions work like Christianity.

Then, sometime around 10-20 years ago but gradually rather than as a single event, I realised I could get stuff out of stories without believing them.

The ancient Greeks got on well with their very flawed pantheon. Those old tales put me very much in mind of the modern comic-book heroes (and anti-heroes), which I suspect is mainly due to where comic books get their inspiration from rather than the other way around. But I can be inspired by Miles Morales' struggles without needing to think he's real.


> rather than the other way around

Well, that was badly phrased!

More like: the alternative is both coming from a common source, not modern comic books inspiring the ancient Greeks.


open training data, training source code & hyperparameters, model source code, weights

I'm not an FSF hippie or anything (meant that in an endearing way), but even I know if it's missing these it can't be called "open source" in the first place.


I don't think the weights are required. They're an artifact created from burning vast amounts of money. Providing the source/methods that would allow one, with the same amount of money, to reproduce those weights, should still be considered open source. Similarly, you can still have open source software without a compiled binary, and, you can have open source hardware, without providing the actual, costly, hardware.


The popularity of fine-tuning demonstrates that the weights are actually the preferred form for making changes.

The precursor form (training data etc) is only needed if you want to recreate it from scratch. Which is too expensive to bother with.


My point is, wanting a finished product that cost millions, without paying for it, is very different than it being open sourced. Models are an artifact, a result, not a source.


I would argue that the weights are as much source code as the source code itself. Their being generated doesn't demote them.

I don't even think the distinction is important. The "system" should be open, and that includes data central to the system's operation within certain bounds.

You can open source parts of a system at whichever fine slice you wish, you just have the part which is open A and the part which isn't B.

It's the value of A and B being open that matters, not what A and B are composed of.


Great point. Open source is different from free product.


> OpenAI, Anthropic and xAI (the new Musk one) all have a funky governance structure because they want to be a public good

do they actually want to be a public good or do they want you to think they want to be a public good?


What? It's business. They want to make money for investors and owners. Whatever helps this main goal.


Except OpenAI kept pretending that they aren't a real "business" for quite a while.


> They want to make money for investors and owners.

OpenAI was explicitly founded to NOT do that.


The problem is that research into AI requires investment, and investors (by and large) expect returns; and the technology in this case actually working is currently in the midst of its new-and-shiny hype stage. You can say these organizations started altruistic; frankly I think that's dubious at best, given that basically all that have had the opportunity to turn their "research project" into a revenue generator have done so; but much like social media and cloud infrastructure, any open source or truly non-profit competitor to these entities will see limited investment by others. And that's a problem, because the silicon these all run on can only be bought with dollars, not good vibes.

It's honestly kind of frustrating to me how the tech space continues to just excuse this. Every major new technology since I've been paying attention (2004 ish?) has gone this exact same way. Someone builds some cool new thing, then dillholes with money invest in it, it becomes a product, it becomes enshittified, and people bemoan that process while looking for new shiny things. Like, I'm all for new shiny things, but what if we just stopped letting the rest become enshittified?

As much as people have told me all my life that the profit motive makes companies compete to deliver the best products, I don't know that I've ever actually seen that pan out in my fucking life. What it does is it flattens all products offered in a given market to whatever set of often highly arbitrary and random aspects all the competitors seem to think is the most important. For an example, look at short form video, which started with Vine, was perfected by TikTok, and is now being hamfisted into Instagram, Facebook, Twitter, YouTube despite not really making any sense in those contexts. But the "market" decided that short form video is important, therefore everything must now have it even if it makes no sense in the larger product.


> As much as people have told me all my life that the profit motive makes companies compete to deliver the best products, I don't know that I've ever actually seen that pan out

Yes, you have; you're just misidentifying the product. Google, Facebook, Twitter, etc. do not make products for you and me, their users. We're just a side effect. Their actual products are advertising access to your eyeballs, and big data. Those products are highly optimized to serve their actual customers--which aren't you and me. The profit motive is working just fine. It's just that you and I aren't the customers; we're third parties who get hit by the negative externalities.

The missing piece of the "profit motive" rhetoric has always been that, like any human motivation, it needs an underlying social context that sets reasonable boundaries in order to work. One of those reasonable boundaries used to be that your users should be your customers; users should not be an externality. Unfortunately big tech has now either forgotten or wilfully ignored that boundary.


> Yes, you have; you're just misidentifying the product. Google, Facebook, Twitter, etc. do not make products for you and me, their users. We're just a side effect. Their actual products are advertising access to your eyeballs, and big data. Those products are highly optimized to serve their actual customers--which aren't you and me. The profit motive is working just fine. It's just that you and I aren't the customers; we're third parties who get hit by the negative externalities.

Yeap... you get it, the guy above you doesn't.

George Carlin said it best, "It's a big club... AND YOU AIN'T IN IT!"


The governance structure is advertising. "Trust us, look, we're trustable" is intended to convince people to use what they are building.

But the structure is expensive and risky; tossing it aside once traction is gained is the plan.


See also this article on the failed social network Ello[1], which also proclaimed a lot of lofty things and also incorporated as a "Public Benefit Corporation."

1. https://news.ycombinator.com/item?id=39043871


Given this, it's interesting that an established company like Meta releases open source models. Just the other day Zuck mentioned an upcoming open source model being trained with a tremendous amount of GPU-power.


Meta is trying to devalue its upstart competitor OpenAI. When OpenAI was so far ahead in public perception, FB started giving away what it had spent oodles of money building, in order to lessen OpenAI's hype and stop its investors believing that the next great thing was elsewhere.


I think that's just them trying to limit what the others can get away with, as well as limiting the competition they have to deal with because the open source models end up as a baseline.

OpenAI etc. have to rein in how much they abuse their lead, because past some price point it becomes better to take the quality hit and use an open source model. Similarly, new competitors are forced to treat the Facebook models as a baseline, which increases their costs.


Commoditize your complement. I guess Meta sees AI more as something they use than something they offer.


it was the only way for Meta to even get into the conversation; if they had captured the mindshare like GPT did, you can be sure they wouldn't have open-sourced it


OpenAI raised $130 million when it was only a non-profit and had difficulty raising more, despite the stacked deck, the star-studded staff, and the same goal that would later value participation units at $100bn

that’s the real lesson here. we can want to redo OpenAI all we want but the people will not use their discretion in funding it until they can make a return


yeah, this was ultimately the problem

it turned out that AI research required billions of dollars to run the LLMs, something that was not originally anticipated; and the only way to get that kind of money is to sell your future (and your soul) to investors who want to see a substantial return


> And it’s not just AI companies and this isn’t new. This is part of human nature and will always be.

To some extent but it's much more egregious in companies like OpenAI where they promoted themselves as being founded for a specific purpose which they then did a complete U-turn on.

It's more like a non-profit saying they're being founded to provide free water to children in Africa and then it turns out that they're actually selling the water to the children. (Yeah, scamming is maybe part of human nature too, but thankfully most people don't resort to that.)


I guess that is the question - how to differentiate between "open-claiming" companies like OpenAI vs. "truer grass roots" organizations like Debian, Python, the Linux kernel, etc.? At least from the viewpoint of, say, someone who is coming smack into the field without the benefit of years of watching the evolution/governance of each organization?


>how to differentiate between "open-claiming" companies like OpenAI vs. "truer grass roots" organizations

Honestly? The people. Calculate the distance to (American) venture capital, and the chance they go bad is the inverse of that. Linus, Guido, Ian, Jean-Baptiste Kempf of VLC fame, who turned down seven figures: what they all have in common is that they're not in that orbit and had their roots in academia and open source or free software.


This is precisely what most safety researchers were asking for in 2016 when OpenAI was recruiting, and why many didn’t go to OpenAI. Like, there are a lot of other security and safety researchers out there. The OpenAI types draw from an actually fairly narrow, self-selecting group within that pool.


The public can’t benefit from any of this stuff because they’re not in the infrastructure loop to actually assign value.

The only way the public would benefit from these organizations is if the public are owners and there isn’t really a mechanism for that here anywhere.


I strongly disagree; I think this statement is basically completely wrong. I am part of the public and I'm benefitting tremendously from the product OpenAI has built. I would be very unhappy if my access to ChatGPT or Copilot was suddenly restricted. I extract tons of (perceived) value from their product, and they receive some value in return from my subscription. It's a win-win.


You’re not “the public”; you’re a private citizen paying a private org for services

“The public” in this case refers to all people irrespective of their ability to pay


It isn't just money, though. Every leading AI lab is also terrified that another lab will beat them to [impossible-to-specify threshold for AGI], which provides additional incentive to keep their research secret.


But isn't that fear of having someone else get there first just a fear that they won't be able to maximize their profit if that happens? Otherwise, why would they be so worried about it?


"Fusion is 25/10/5 years away"

"string theory breakthrough to unify relativity and quantium mechanics"

"The future will have flying cars and robots helping in the kitchen by 2000"

"Agi is going to happen 'soon'"

We got a rocket that landed like it was out of a 1950's black and white B movie... and this time without strings. We got Star Trek communicators. The rest of it is fantasy and wishful thinking that never quite manages to show up...

Lacking a fundamental understanding of what is holding you back from having the breakthrough means you're never going to have the breakthrough.

Credit to the AI folks, they have produced insights and breakthroughs and usable "stuff" unlike the string theory nerds.


Fusion is well on the way; you just don't hear about it as much because the whole point of fusion isn't to make money, it's to permanently end the energy "crisis", which will end energy scarcity, which will have nearly unfathomable ripple effects on the global economy.

String theory is a waste of time and has been for a while now. The best and brightest couldn't make it map onto reality in any way, and now the next generation of best and brightest are working either on Wall Street or in Silicon Valley.

The robots are also coming sooner than we think. They won't be like Rosey from the Jetsons, but they'll get there.

AGI may or may not happen soon; it's too early to tell. True AGI is probably 100 years away or more. Lt. Cmdr. Data isn't coming any time soon. A half-ass approximation that "appears" mostly human in its reasoning and interaction is probably 3-10 years off.


> AGI may or may not happen soon; it's too early to tell. True AGI is probably 100 years away or more. Lt. Cmdr. Data isn't coming any time soon. A half-ass approximation that "appears" mostly human in its reasoning and interaction is probably 3-10 years off.

The goal of AGI is not to emulate a human. AGI will be an alien intelligence and will almost immediately surpass human intelligence. Looking for an android is like asking how good a salsa verde a pizza restaurant can make.


> The goal of AGI is not to emulate a human.

I am not sure if that's accurate based on the researchers I read and listen to.

> AGI will be an alien intelligence

Possibly. Remains to be seen.

> and will almost immediately surpass human intelligence.

There's no proof that will be the case, we just assume that because of the advancement of technology in the past 50 years. It may well be an accurate assumption, then again it may not be. This is very much a case of, "We won't know until it happens."


> Fusion is well on the way

I hope it succeeds, but after decades of research there is still no demonstrable breakthrough in fusion (that outputs more energy than required as input)


We don't hear about it [Fusion] because it doesn't work for energy production.

I don't believe there is a grand conspiracy to keep it down because of money.


I wouldn't call it a "grand conspiracy" so much as a "plain case of human greed".

Intel did everything in its power to stymie AMD in the late '80s, all of the '90s, and the early 2000s - that's an established fact on record. It wasn't a "grand conspiracy"; it was just the dominant power exerting its force.


I honestly don't understand how your comment here relates to what I said...


My point is that there is no "there" there. I think all of them get that AGI isn't coming, but they can make a shitload of money.

Hope, progress... both of those have left the building; it's just greed moving them forward now.


No, it's a fear that the other lab will take over the world. Profit is secondary to that. (Whether or not you or I think that's a reasonable fear is immaterial.)


Fully agree on open models, but I think there’s more going on that is important to consider in our own founding journeys

It’s not just that there are billions to be made (they always believed that) it’s that people are making billions right now turning them into a paper tiger

When only the tech sector cares about a company, it’s fairly straightforward for them to be values-driven - necessary, even. Engineers generally, especially early adopters, are thoughtful & ethical. They also tend to be fact-driven in assessing a company’s intentions.

Once a company exits the tech culture bubble, misinformation & political footballs are the game. Defending against them is something every company learns quickly. It is existential & the playing field is perpetually unfair.


basically, you're discussing enshittification. When things gain social momentum, they get repurposed for capitalistic pleasure.


OpenAI: pioneer in the field of fraudulently putting "open" in your name and being anything but.


Similar naming pattern: North Korea calls itself the “Democratic People's Republic of Korea”… it could not be further from being democratic.


From Lord of War:

> Every faction in Africa calls themselves by these noble names - Liberation this, Patriotic that, Democratic Republic of something-or-other... I guess they can't own up to what they usually are: the Federation of Worse Oppressors Than the Last Bunch of Oppressors. Often, the most barbaric atrocities occur when both combatants proclaim themselves Freedom Fighters.


The Lib Dems in Europe are anything but liberal or democratic.

Liberal means less intervention from the state; the word has literally changed its meaning to soft socialism.

Democratic is not when you’re elected as part of Boris Johnson's party on a program to leave the EU, and 16% of elected MPs leave his party after the vote and join the Lib Dems (without giving a choice to electors, nor resigning as MPs) to fight to stay in the EU, coining the phrase “What voters really meant was stay in the EU with conditions.”

I focussed on England, but the Lib Dems in every EU country have committed the same betrayal.


Eh?

This didn’t happen.

https://en.wikipedia.org/wiki/List_of_elected_British_politi...

I think what you are referring to is the Tory MPs who defied the government and voted with the opposition on a single vote.

At that time literally one of them permanently defected, very visibly crossing the floor. Many of the rest were booted out of the parliamentary party by Boris, only to be readmitted later (including my MP, who I do not vote for).

There were two or three who joined minor parties, and a handful ended up in the Lib Dems afterwards, but there was never a mass defection to the Lib Dems, who only have 15 MPs now; 15% of the 2019 Tories would be over 50.

Either way I think your summary misunderstands the reasons all of that happened, and the principles behind it.


The Conservatives are the one true exception to these rules. It's right there in the first 3 letters of their party name.


It's the same inverse signal in newspaper names too. Russian propaganda Pravda (Truth), Polish tabloid Fakt (Fact), etc. Organisations that practice X every day typically don't have to put X in the name to convince you about it.


Suppose there was a country where individualism was prioritized. Having your own opinions, avoiding "groupthink", even disagreeing with others, is a point of pride.

Suppose there was a country where collectivism was prioritized. Harmony, conformity and agreeing with others is a point of pride.

Suppose both countries have similar government structures that allow ~everyone to vote. Would it really be surprising that the first country regularly has 50-50 splits, and the second country has virtually unanimous 100-0 voting outcomes? Is that outcome enough basis to judge whether one is "democratic" or not?


The funny thing is that I’m sure NK is very democratic, it’s just that voting wrong probably gets you killed


I wonder if anyone that voted "wrong" has ever tried to say the election was rigged, and their votes were changed to avoid their families receiving a bill for a bullet.


I doubt anyone votes wrong, there's no open counter-culture in NK I've ever read about


Suppose that countries have more than two parties...


You can democratically decide to have only two parties, or for that matter only one.

It only takes 51% of the vote to outlaw opposition.

Just recently, the US Democratic convention stripped all the voters in New Hampshire of their votes for the presidential candidates.


Even in multi-party systems, it comes down to ruling coalition vs. opposition. DPRK technically has multiple parties, but they are in a tight coalition.


Nice comparison. And also certain political factions in the USA try to hide the shamefulness of laws they propose by giving them names that are directly opposed to what they'll do.

The "Defense of Marriage Act" comes to mind. There was one so bad that a judge ordered the authors to change it, but I can't find it at the moment.


This is just a normal practice in the US.

Defense of Marriage Act is actually an exception. The people supporting it honestly thought it was defending marriage, and the supportive public knew exactly what it did.

It passed with a veto proof majority a few weeks before a presidential election, received tons of press, and nobody was confused about what it did.

Whereas the Inflation Reduction Act had absolutely nothing to do with reducing inflation.


> Defense of Marriage Act is actually an exception. The people supporting it honestly thought it was defending marriage

Seems arbitrary. There is nothing about that act that even borders on defending marriage, and people supporting it know that. It's a comic misnomer.


It’s defending when you view gay people as subhuman animals.


It was, and is, absolutely clear to everyone what this bill was about.

If it had been called the “Support Healthcare for Veterans Act” or even “Interstate Marriage Consistency Act” it would have been dubious.

But the 70% of Americans who opposed gay marriage correctly understood its meaning, as did the gay rights activists who saw gay marriage as unobtainable.

This wasn’t a confusing or misleading title, as is evidenced by the fact that nobody was confused or misled.


I think people weren't confused because its details were covered repeatedly by the news, not because the name was clear. I, for instance, figured a name called "The Defense of Marriage" act would be defending everyone's right to be married. It does the opposite. So count me as someone that considers that name misleading.


Not all people who subscribe to the definition of marriage as put forth in the Defense of Marriage Act also believe that gay people are subhuman animals.


Technically it only requires you view marriage as being between a man and a woman.


All political factions are guilty of this. Patriot Act, Inflation Reduction Act, Affordable Care Act, etc.


Eh, the ACA is the only reason I have "affordable" insurance. In the end it might have been more accurate to say, "Marginally Less of a Rip-Off Care Act."


USA PATRIOT Act was an acronym, actual name was Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001.


You think they came up with the long name and THEN were astonished to discover that it spells "PATRIOT"?


Yep. That's for sure a revisionist definition.

See also: "Digital Versatile Disc"


Citizens United....


Actually, that's my mistake. The examples I was thinking of turned out to be one and the same: It was a California proposition originally titled the "California Marriage Protection Act." That was the one where a judge forced it to be renamed to "Eliminates Rights of Same-Sex Couples to Marry. Initiative Constitutional Amendment"


Side note of a kinda similar thing happening, forgive me for the sidetrack and side-rant.

PrivateProperty <- a website in South Africa set up in a market where all real-estate sales were controlled and gatekept by real-estate agents (assisted by lawyers, various government bodies and even legislation); its purpose was to allow "private" individuals to put up their own properties for rent or sale.

Predictably, it eventually got taken over by real-estate agents who posed as "private" sellers, which caused the entire site to support "Agents" as a concept, and here we are. Today, you will hardly ever find a private individual on there, and the company makes no effort at all to root the agents out. The agents just spam all their listings, lie in the metadata for properties, add duplicates, make zero-effort postings and use skewed photos, the works.

Another example, if you will: Airbnb. Taken over (I exaggerate a bit) by management companies that own many, many properties and allocate an "agent" to oversee each property. At least here in South Africa, that is. It might not be as true in other countries, but it's on its way there. Mark my words.

Or more:

Pricecheck <-- Another South African website. It still claims to be a price-comparison website, but is really just like Google Shopping: it doesn't do any scraping of prices, but simply "partners" with websites that give it a kickback after a user purchases something.


OSF predates it by almost four decades (even older than open source) https://en.wikipedia.org/wiki/Open_Software_Foundation


Orwell would be proud.


should be added to the Newspeak dictionary


part of human nature and will always be

What if we just made it illegal for corporate entities (including nonprofits) to lie? If a company promises to undertake some action that's within its capacity (as opposed to stating goals for a future which may or may not be achievable due to external conditions), then it has to do so within a specified timeframe, and if it doesn't happen they can be sued or prosecuted.

> But then they will just avoid making promises

And the markets they operate in, whether commercial or not, will judge them accordingly.


That's not a corporate-law issue -- it's a First Amendment issue with a lot of settled precedent behind it.

tl;dr: You're allowed to lie, as a person or a corporation, as long as the lie doesn't meet pretty high bars for criminal behavior or public harm.

Heck, you can even shout fire in a crowded theater, despite the famous quote that says you can't.


That has been working out poorly for us. I think we should limit the number of rights a corporate entity can enjoy and give greater weight to truthfulness in legal matters. No, this is not going to stop anyone having opinions or writing fiction.


What did people genuinely think was going to happen with OpenAI?

All the employees simply wanted to get rich and supported Sam Altman and his mission to get them there (why else would there be a revolt upon his firing?), so with that in mind, I can't help but think that self-deception is the best weapon people have to use as a facade for their greed.

I frankly have more respect for scammers, at least they know firmly where they stand on a moral spectrum.


> I can't help but think that self-deception is the best weapon people have as a facade for their greed.

I don't find it hard to believe that they in fact wanted to build something new and innovative AND get rich in the process. Which is not unreasonable, and no "self-deception" needs to be involved in the process. OpenAI would be dead in the water if they stuck to their principles.

> I frankly have more respect for scammers, at least they know firmly where they stand on a moral spectrum.

Why? You need massive amounts of money and resources to build something useful. Especially in tech, private for-profit enterprises seem best positioned to achieve that (the same applies to open source; Linux etc. would be a niche project with limited usefulness without corporate backing).


Another example is that all YouTube ads are outright scammy, but its employees are happy as long as they get a big payout.


At this point I'm proposing a new law: in any thread about the ills of a big tech company, there will always be someone who makes it about Google.


OP is comparing scammers favorably to OpenAI employees, a rather bizarre, honestly crazy take. Of course the replies are going to be in the same bucket.


only a sith speaks in absolutes


Why are people surprised that OpenAI is closed, since we've known they don't share anything since ChatGPT was launched and they got billions in investments?


> Why are people surprised

I see this type of question a lot when something is considered common knowledge in whatever online bubble someone is part of.

But the only way to go from “everybody knows” to documented fact is through investigative journalism and reporting. The point of these stories is not to say “wow we are so surprised”, the point is to say “this company is in fact lying and we have the documentation to prove it.”


Well said, and not to mention the importance of common knowledge as a driving impetus for enacting a change. "Everyone knows Hollywood is full of abuse" was true for decades but when the Weinstein allegations finally came out into the open, some (if not enough) action finally started happening against it. Saying the obvious thing loudly and openly is a coordinating mechanism.


> Why are people surprised that OpenAI is closed

The surprise is more at the (EDIT: brazen) pathological lying.


it's governed by VC execs. No shit they're lying - their mouths are moving.


n.b. It's not, that's why it was possible for them to move on from Altman


> n.b. It's not, that's why it was possible for them to move on from Altman

That's only under the assumption that the split with Altman was due to the doomers vs bloomers conflict and not just a dirty move from OpenAI board member Adam D'Angelo, trying to protect his investment in Quora's AI Poe.


I'm not familiar with either fanfic beyond the one-sentence pitch[1]. I'm not sure why one of the two has to be true for reality (the board is not VCs) to be true.

[1] RIP "they switched everyone to prepaid billing!!!11!" I ate probably -10 saying "no, you got that email saying it was available because it was a feature announced at devday as coming soon"


They didn't move on from Altman, did they? So was it really possible?


They didn't fail to get rid of Altman because the board is VCs. Because the board is not VCs.


> Because the board is not VCs.

Except that's not really true. Almost everyone on the board was either a VC themselves or had very strong ties to them. In any case, OpenAI would be irrelevant without significant investments from organizations/people who want a return on them. So it's basically a moot point: no VCs/big corporations = no fancy, extremely expensive to train & develop LLMs.


Fool me once, shame on you; fool me twice, shame on me.

We passed this point 10-15 cases ago. Haven't people learned what OpenAI is all about?

Hint: Think 1984. They are the Ministry of Truth.


10?

This is only the 2nd or 3rd thing that seems to me even a little incoherent with their initially stated position. The one certain other case is the mystery of why the board couldn't find anyone to replace Altman who didn't very quickly decide to take his side; the other possible one is asking for a profit-making subsidiary to raise capital (though at the time, all the criticism I remember was people saying they couldn't realistically reach 100x, and now it's people ignoring that it's limited to only 100x).
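For concreteness, a rough sketch of what that 100x cap means, with made-up numbers (illustrative only; the exact cap terms reportedly vary by round and aren't public in detail):

    # Illustrative only: OpenAI LP's first-round investors were reportedly
    # capped at a 100x return; anything beyond the cap flows to the nonprofit.
    investment = 10_000_000                    # hypothetical $10M stake
    cap_multiple = 100
    max_return = investment * cap_multiple
    print(f"Return ceiling: ${max_return:,}")  # Return ceiling: $1,000,000,000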


I'm not counting starting from the Altman Saga(TM), but from the beginning: promises of being open, keeping their structure secret, changing their terms to allow military use, etc. etc.

They state something publicly, but are headed on a completely different trajectory in reality.

This is enough for me.


I was also counting from the beginning.


What did they lie about objectively? The entire "benefit to humanity" statement is subjective enough not to be considered lying, and many consider closed AI to be the safest. Changing their goals is also not lying.

In fact, I would consider changing goals publicly to be better than not following the goals.


> What did they lie about objectively?

Wired claims OpenAI’s “reports to US tax authorities have from its founding said that any member of the public can review copies of its governing documents, financial statements, and conflict of interest rules.” That was apparently a lie.

> Changing their goals is also not lying

Changing a forward-looking commitment is. Particularly when it changes the moment it’s called.


> What did they lie about objectively?

I don't know if I'd put this in terms of a "lie" or not, but OpenAI's stated principles and goals are not backed up by their actions. They have mined other people's works in order to build something that they purport to be for the benefit of mankind in some way, when their actions actually indicate that they've mined other people's work in order to build something for the purpose of massively increasing their own power and wealth.

I'd have more respect for them if they were at least honest about their intentions.


Because there are two conversations going on:

#1 is whether it's free and open in the ESR sense, the more traditional FOSS banter we're familiar with. You're right to question why people would be surprised that it's not FOSS. Clearly isn't even close, in any form.

#2 is about a hazy pseudo-religious commitment, sort of "we will carry the fire of the gods down from the mountain to benefit all humanity".

It was seemingly forgotten and appears to be a Potemkin front.

This is an important step forward in establishing that publicly, as opposed to just back-room tittering, seeing through the CEO stuff, or knowing the general thrust of, say, what the internal arguments were in 2018.


"we know they don’t share anything since chatGPT was launched"

That's mostly but not entirely accurate. They've released significant updates to Whisper since ChatGPT.


They released one quite minor modification to the largest Whisper model, and in fact it's much worse than the previous one.


Looks like they draw the line at generative AI. CLIP / Whisper / Gym are open; Jukebox / GPT / DALL-E are not.


We should just start calling them "closedAI". I think it's obvious who we are talking about when one uses the term "closedAI".


I like this suggestion -- we can make it happen too! Just start using "ClosedAI" in conversation...


OpenAI has broken every promise it has made


Remember folks, don't credit people promising to do things, credit people who have done things.


How could the board let this happen!

More seriously, this is both an obvious outcome, and also feels a bit shady?

It's true that OpenAI needs A LOT of money/capital, and so needs funding and partnerships which leads to this kind of thing.

But it's also true that the only reason they got to exist in the first place and got to this point is by pitching themselves as an 'open', almost public-good kind of company, and they took donations based on this.


> the only reason they got to exist in the first place and got to this point is by pitching themselves as an 'open'

What supports this? In Column B are conventionally-structured AI projects.


[1] OpenAI’s Nonprofit received approximately $130.5 million in total donations, which funded the Nonprofit’s operations and its initial exploratory work in deep learning, safety, and alignment.

How many of those conventionally structured AI projects existed before ChatGPT?

Maybe the donations aren't the ONLY reason; maybe they could have done a normal round of funding and got here, but they didn't.

I do think it's fair to say that while they got $130m in donations they needed A LOT more money, and so they needed to raise it somewhere, somehow. To me it's a big gray area, though.

[1] https://openai.com/our-structure


> How could the board let this happen!

What do you mean? They tried and we decimated them. lol


I was being facetious lol


Based on everything I am hearing about all the harmful uses this tech could have on society, I'm wondering if this situation is alarming enough to warrant an inquiry of some kind to determine what's going on behind the scenes.

It seems like this situation is serious enough that we cannot let this kind of work be privatized.

Not interested in entertaining all the "this is the norm" arguments, that's just an attempt at getting people to normalize this behavior.

Does anyone know if the Center for AI Safety is acting for the public good, and is this on their radar?


> wondering if this situation is alarming enough to warrant an inquiry of some kind to determine whats going on behind the scenes

OpenAI is making people rich and America look good, all while not doing anything obviously harmful to the public interest. They’re not a juicy target for anyone in the public sphere. If any one of those changes, OpenAI and possibly its leadership are in extremely hot water with the authorities.


Though it should be argued that only the ignorant would believe this is not a historically significant inflection point in Nefariousness as it pertains to the next few fucking centuries.

So let the fleas look at their feet...

Seriously - AI isn't the demise of Humanity - greed.ai is.

EDIT:

I plugged in the following prompt to my local thingy... It spit this out:

-

>>>P: "AI is not the demise of Humanity, greed.ai is. Show how greedy humans in charge of AI entanglements are holding the entire of earth." - https://i.imgur.com/OmGLYrj.jpg


> all while not doing anything obviously harmful to the public interest.

Yeah, gonna have to challenge that:

1. We don't really know if what they are doing is harming the public interest, because we don't have access to much information about what's happening behind the scenes.

2. And there is enough information about this tech that leads to the possibility of it causing systemic damage to society if it's not correctly controlled.


> We don't really know if what they are doing is harming the public interest

That’s potentially harmful.

> is enough information about this tech that leads to the possibility of it causing systemic damage

Far from established. Hypothetically harmful. Obvious harm would need to be present and provable. (Otherwise, it’s a political question.)


You could say the same thing about a hyper-targeted ad platform optimized for political use cases (Cambridge Analytica) before they were outed.

I think the point is it would be good to investigate future hypothetical harm before it becomes present and provable, at which point it’s too late.


> it would be good to investigate future hypothetical harm

Sure. That’s why we have the fourth estate. We don’t have anything close to what it would take to launch inquiries.


You don't have access because you aren't supposed to. Nothing about the founding, laws or customs of the US suggests that you (or the government itself) have access to information about other people/companies any time you/they feel like "finding out what's happening behind the scenes".

As for "too important to privatize"... practically all the important work in the world is done by private companies. It wasn't the government who just created vaccines for Covid. It isn't the government producing weapons for defense. It's not Joe B producing houses or electricity or cars or planes. That's not to say the government doesn't do anything but the idea that the dividing line for government work is "super important work" is wildly wrong and it's much closer to the inverse.


> Nothing about the founding, laws or customs of the US suggests that you (or the government itself) have access to information about other people/companies any time you/they feel like "finding out what's happening behind the scenes".

Aren't there entire processes about "We suspect they're doing something illegal behind the scenes, so let's go and check"? Isn't that what search warrants, for example, are all about? Or Senate/Congress inquiries or whatever they're called?


When someone is suspected (with some amount of evidence) of doing something illegal, sure.

With respect to inquiries - if congressman X asks Sam Altman for the details of an algorithm at a congressional hearing, he is not obliged to answer. He can get his lawyer and argue the case - this happens, and cases go to the Supreme Court to decide whether the question is in scope of the powers granted to Congress under the Constitution. The question has to be directly applicable to one of the responsibilities of Congress, which are enumerated in the Constitution. In practice, redacted documents, limiting of question scope, etc. are discussed and worked around. Also in practice it's a bit of a political circus where most questions are for show rather than substance, and you'll not really see them ask questions that would result in confidential information being given.


Corporations and private establishments only exist if there is a benefit to the public good. We allow them to operate as long as it benefits us. Once that changes, that is a cause for concern, and we can revoke their ability to operate within the public sphere. Remember, private interest and government operate by consent of the people, not the other way around, contrary to the current status quo.


Nope, many companies exist where there isn't any public good. This statement is based on fantasy.


They are still operating in the public sphere. That's why we have charters for corporations to operate. We allow them to operate in our societies; you're not going to convince anyone otherwise by just saying it's different from what we have established in law.

Of the people, by the people, for the people.


You won't find a single state in the US that has legislation saying a corporation (or LLC, or any of the similar structures) must benefit the public.


Charter

a written grant by a country's legislative or sovereign power, by which a body such as a company, college, or city is founded and its rights and privileges defined. "the town received a charter from the Emperor"

The legislative process for a charter says otherwise. We, the people, define what the government is and, by extension, what rights a company has. The fact that states don't enforce the revocation of a company's charter doesn't mean that the power, given by the people to the government, doesn't exist.


Which actual jurisdiction are you referring to? In the US the power to create corporate entities is state level, not federal. The relevant legislation is at a state level. Maybe look into Delaware and have a concrete idea.


It's the principle behind common law that matters - principles that establish the right of the people to give power to government to assign a certain scope of rights and privileges to corporations. All laws are based on principles.


It's worth taking a course in jurisprudence. Even pick up a book. There are several positions on "what matters" and "what laws are based on".


Thanks for that tip. Check out some books related to common law perspectives on the principles of how law is established. You will find that all law is based on formulating guidelines for behaviors of people in society for the benefit of society.


Look up the word "jurisprudence".


From Wikipedia: "Ancient natural law is the idea that there are rational objective limits to the power of legislative rulers. The foundations of law are accessible through reason, and it is from these laws of nature that human laws gain whatever force they have".

Analytic jurisprudence (which is what I think you are talking about) is something that people think is more important (a belief, codified in law) than Natural Law. But Common Law derives from Natural Law, and it still has a place in this area of focus even though people believe that it doesn't, or it shouldn't. The actual harms in society are caused by the belief that laws can be legislated through power dynamics. Analytic jurisprudence cannot contend with the idea that all law stems from Natural Law, and when you deviate from it, it causes harm to society. Plus it's at the whim of people in power. Just because it happens doesn't make it right or required.


Jurisprudence is the study and theory and philosophy of law. It's the whole giant field, it's not a specific position. If you think it's a settled topic, or obvious, you haven't studied it.


I'm saying that one specific part (Natural Law) is the foundation for all law and is more important than all the other parts when it comes to the behavior of individuals and corporations. Natural Law takes precedence over it all because it's the foundation of all law.


That's nice. The folks who believe in legal positivism have a different point of view. And then some people think these things can both simultaneously be a basis. And there are 27 (made up) different positions on the topic. Hence the field of study known as jurisprudence, where very smart people waste time sitting around debating this stuff forever.


>practically all the important work in the world is done by private companies

LOL, another one thinks the US is the entire world.


The comment about access is related to a US company. The relevant legal jurisdiction and framework is the US. If it were a French company, the relevant jurisdiction would be... France. You may not realize this, but OpenAI is a US company.

The comment about all the important work in the world being done by private companies was indeed a global comment. You may not realize this, but Covid vaccines were made by AstraZeneca (UK), BioNTech (Germany), several US companies and others. Defense companies are located in every major economy. Most countries have power systems which are privately owned. Commercial planes are mostly built by one large French company and one large US company. All the large producers of cars around the world are private companies - big ones exist in the US, Japan, various European countries, Korea and China.


It looks like you have a problem understanding the meaning of the word "all".

Specifically, you are confusing all and some.


"practically all" was the initial statement. The second mention is a reference to that statement.

A basic education in economics (and manners) might be in order.


>A basic education in economics (and manners) might be in order.

Oh, I apologize. I have idiotism allergies.

>"practically all" was the initial statement

Yeah, and my original statement still stands.

Go open a history book or something. Or find out what the "N" in "NSF" stands for, or realize that China exists.


The NSF budget is $10 billion or so out of a $20+ trillion economy - less than a tenth of a percent. Even if you thought 90% of work wasn't important... $10 billion out of $2 trillion is still tiny.
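A quick sanity check of those percentages, using the same round figures (rough numbers from the comment above, not actual budget data):

    # Rough figures, not actual budget data.
    nsf_budget = 10e9      # ~$10 billion NSF budget
    us_gdp = 20e12         # ~$20 trillion US economy
    print(f"{nsf_budget / us_gdp:.3%}")          # 0.050% of the economy
    # Counting only 10% of GDP as "important work":
    print(f"{nsf_budget / (0.1 * us_gdp):.2%}")  # 0.50%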


So you measure importance of work by the amount of money spent on it?

My, do you have some peculiar ideas.


Re-read it. I'm measuring the amount of work by the amount of money. Total GDP = total work. 10% of GDP = 10% of total work. GDP is a decent proxy for total work... Add reading comprehension 101 to econ 101 and then it will compute.


Re-read what you wrote yourself.

You were using the NSF budget as a measure of importance of its work.

You also compared it to the GDP, which only makes sense if you think that all work done is equally important. How very socialist of you.

>Add reading comprehension 101 to econ 101 and then it will compute

What you say "computes" only after adding Dunning-Kruger to the mix.


Nope, the calcs above assume all NSF work is important, and they show how little work it is as a % of all the work. Then as a % if we assumed 90% of all work wasn't important (keeping all NSF work important).

To calculate a %, both the numerator and denominator have to be in the same units.


> and they show how little work they do as a % of all the work. Then as a % if we assumed 90% of all work (keeping all NSF work important) wasn't important.

Great, you're simply saying that pretty much all of science has the same importance as 10% of all other work being done. And you consider budget as a measure of output.

All that in the context of a conversation about technological breakthroughs, mind you.

By that metric, someone like Richard Feynman has produced less important work than your average run-of-the-mill engineer with a slightly higher salary.

Did you time-travel here from the USSR? The leadership there had similar ideas back in the day.

This is becoming very entertaining at this point.


Econ 101 is your friend.


It is. So is the reality.

Highly recommend, A+++, 10/10.

"Impact is hard to measure, so let's take budget as a proxy" has got to be the hottest take of the year, and yes, I'm aware it's January.


OpenAI Head of Research Tal Broda deleted over 80 tweets where he openly called for genocide in Gaza, asking to "finish them", including civilians [1] [2]. If this is who is at the helm of OpenAI, it's absolutely insane not to see how this type of informational power could have serious negative implications.

[1] https://www.reddit.com/r/Palestine/comments/18mvigw/openai_h...

[2] https://twitter.com/ArarMaher/status/1736519079102476766


Wow - I posted a very similar inquiry:

https://news.ycombinator.com/item?id=39123056

---

Has the following already been addressed, or even generally broached:

Treat AI (or AGI(?) more specifically) as a global Utility, which needs us to put ALL our Technology Points into the "Information Age Base Level 2" skill and create a new manner of dealing with the next layer in Human Society, as is rapidly gestating. https://i.imgur.com/P1LBKFL.png

I feel this is different than what is meant by Alignment?

It seems as though general Humanity is not handling this well, but it appears that there is an F-ton of opaque behavior amongst the inner circle of the AI pyramid that we all will just be involuntarily entangled in?

I don't mean to sound bleak - it just feels as though that's the reality coming down the conveyor....


AGI is coming. Private companies move faster and more efficiently than government agencies; look at SpaceX as an example.

The only open question is: do we want the company that creates AGI to be American or Chinese? Government intervention by people who know nothing about technology (watch any congressional hearing) is not going to help anyone and will only serve to ensure China wins the race.


> AGI is coming.

That's what some people assert, but there's no solid reason to assume that's true. We don't even know if it's in the realm of the possible.

> The only open question is do we want the company that creates AGI to be American or Chinese?

That's far from the only question. I don't even think it's in the top 10 of the list of important questions.


If, like OP, you think that the work OpenAI is doing is going to have such a large effect on society that private entities should not be able to work on it, then the question of America vs. China is indeed one of the most important questions.

"That's what some people assert, but there's no solid reason to assume that's true. We don't even know if it's in the realm of the possible"

True, but there are a lot of very smart people getting handed huge amounts of money by other very smart people who seem to think it is.


> then the question of America Vs China is indeed one of the most important questions.

I don't actually take the stance as you stated it -- but if I did, I'd say that would mean it doesn't matter at all what nation develops it because the consequences would be disastrous no matter who did it first.

> a lot of very smart people getting handed huge amounts of money by other very smart people that seem to think it is.

Ignoring whether or not the people funding this are "very smart" (I don't know if they are or not), there are also a lot of very smart people who think that it isn't. Just the fact that some very smart people think such a thing isn't evidence that they're correct.

You also have to keep in mind that the more intelligent a person is, the easier it is for them to convince themselves of pretty much anything.

Right now, it's all just a battle of opinions.


> We don't even know if it's in the realm of the possible.

We know it's possible, because you typed it. Unless you believe in the metaphysical, it is proven possible with physical systems. The question is then: can the fundamental aspects of intelligence in biological systems be practically emulated in other systems?


Well, OK. I'll refine my statement to "we don't even know if it's possible for us to do within any given timeline". Especially not a timeline as short as the next few lifetimes.


> AGI is coming

Nobody can even clearly define what that means. If we're talking about sci-fi-style AGI, we're still very, very far from it.


It's best to view OpenAI as any other private tech company, even though they try to appear as a non-profit in the public eye.


The non-profit shell has always been a PR move. It's amazing to see how much the public fell for it, especially with the whole endeavor being led by VCs. It's like the biggest wolf in the shoddiest sheep costume.


They are still legally owned by a non-profit.

https://projects.propublica.org/nonprofits/organizations/810...


I think we all saw how that went when the non-profit board assumed they had any credible power.


Our modern American framework of rules around various types of incorporated entities is the wrong tool for the job of enabling a credible organization to achieve OpenAI's purported mission.

What's needed is closer to a government agency like NASA, with multiple independent inspectors, like the IAEA, empowered by law to establish guardrails, report to Congress, and pump the brakes if needed. Think Gibson's "Turing Agency."

They could mandate open sourcing the technology that is developed, maintain feedback channels with private and public enterprises, and provide the basis for sensible use of narrow AI while we collectively fund sensible safety, cognition, consciousness, and AGI research.

If we woke up tomorrow to aliens visiting us from a distant galaxy, and one alien was 100 times more intelligent and capable than the average human, we would be confronted with something terrifying.

Stuart Russell likens AI to such an alien giving us a heads up that it's on the way, and we may be looking at several years to several decades before the alien/AI arrives. We have a chance to get our shit together sufficient to meet the challenges we may face - whether or not you believe AI could pose an existential threat, or that it could destabilize civilization in horrible ways, it's probably unarguably rational to establish institutions and frank discussions now, well before any potential crisis.

Heck, it's not like we hold our governments to account for spending at the scale of NASA - even a few tens of billions is a drop in the bucket, and it could also be a federal jobs program, incentivizing careers and research in a crucial technology sector.

Allowing self-regulated private sector corporations operating in the tech market to be the fundamental drivers of AI is probably a recipe for dystopian abuses. This will, regardless of intentions, lead to further corrosion of individual rights to free expression, privacy, intellectual property, and so on. Even if a majority of the negative impact isn't regulatory or "official" in nature, allowing these companies to impose cultural shifts on us is a terrible thing.

Companies and corporations should be subject to humans, not infringe on human agency. Right now we have companies that are effectively outside the control of any individual human, even the most principled and respectable CEOs, because the legal rules we operate them by are aligned not with the well-being of society at large, but with the group of people who have invested time and/or money in the legal construct. It's worked pretty well, but at the speed and scale of the modern Tech industry, it's very clear that our governmental and social institutions are not equipped to mitigate the harms.

NASA and the space race are probably the most recent and successful analogy to the quest for AGI, so maybe that's a solution worth trying again.


So is IKEA.


Which means nothing.


Are they still trying to appear this way and is anyone still fooled? I don't get that impression.


Maybe their lawyers are when it comes to taxes


I mean, the "OpenAI" name itself has surely been chosen for its ambiguity.



Is this link actually working for anyone? I've been getting "Welcome to nginx" landing pages whenever trying to use archive.is/archive.ph the last few days.


Works fine for me. I think I had to change my DNS when I encountered issues in the past.


They seem to be blocking a lot of IPs. I can't get there via Mullvad, but my "normal" Spectrum connection is fine. So maybe they're sending all VPNs through Cloudflare firewalling and it's broken?


> The statement, which covers 2022, shows just $44,000 in revenue and $1.3 million in expenses. That’s accurate for the nonprofit, but OpenAI overall reportedly generated hundreds of millions of dollars in sales last year and spends even more on high-end computers and top-flight researchers.

Never claim financial statements are accurate! Point to the auditor’s opinion. That being said, who audits OpenAI? I’m having difficulty finding the information online.


The IRS, if it chooses to do so. But auditing large companies with complex governance structures and large legal teams is prohibitively resource intensive for the IRS because it's intentionally underfunded by Congress.


Presumably external auditors would be hired by the stakeholders though. Particularly for the for-profit entity with investors and a lot of money passing through it. I’d imagine it would be conducted by EY, PwC, Deloitte or KPMG.


I'm loath to be a Musk-ite, but I'd be a bit peeved if I were him: the article opens with 'Wealthy tech entrepreneurs including Elon Musk SAID they were going to be transparent but now aren't' and then takes 8 paragraphs to point out that the only person it names as founding the hypocritical org was kicked out years ago, is now a competitor, and now calls it 'Super-Closed-Source-for-Maximum-Profit-AI'.

The press is absolutely addicted to blame, and any nuance that gets in between blame and the headline is relegated to the bottom of the article, far after the pay-wall. Oh well - I’m sure in a few more years this sort of tactic will be applied to Altman as well.

It's gotten so bad that when I read a headline implying hypocrisy, I'm actually more inclined to think the opposite, which is just as horrible a mental handicap as assuming it's correct!


> 'Wealthy tech entrepreneurs including Elon Musk SAID they were going to be transparent but now aren't'

The article doesn't say that. It says they launched OpenAI to be transparent but now it isn't. Maybe your "they" is ambiguous, does it refer to OpenAI or wealthy entrepreneurs including Musk?

In the article the "they" isn't ambiguous, but it says something different overall:

>Wealthy tech entrepreneurs including Elon Musk launched OpenAI in 2015 as a nonprofit research lab that they said would involve society and the public in the development of powerful AI, unlike Google and other giant tech companies working behind closed doors. In line with that spirit, OpenAI’s reports to US tax authorities have from its founding said that any member of the public can review copies of its governing documents, financial statements, and conflict of interest rules.

"They" refers to the entrepreneurs, but it says they said OpenAI would be transparent. In your rewording "they" presumably refers to the entrepreneurs, but now you make it sound like it says the entrepreneurs now aren't transparent, rather than OpenAI.


It doesn't say it directly, but it would be a reasonable reading. English, unlike a programming language, relies on pragmatic cues for interpretation. And it's a journalist's job to communicate clearly in their writing.

> Maybe your "they" is ambiguous, does it refer to OpenAI or wealthy entrepreneurs including Musk?

The wealthy entrepreneurs have launched it and, by implication, control it. Thus, OpenAI is opaque solely by the choice of the wealthy entrepreneurs.


You're splitting hairs. The average reader surely isn't going to separate the wealthy tech entrepreneurs from OpenAI, and they shouldn't — OpenAI is still run by wealthy tech entrepreneurs.

The point is that it's disingenuous to lead with Elon Musk by name, and no other founders or executives by name, when he presumably had nothing to do with the policy change.


Not too long until they rename the company to Microsoft AI.


But how long until it's renamed three more times after that?


I am sick to death of this sort of OpenAI “news”. It only seeks to elicit the same angry rants from the same angry nerds that are having a hard time letting go of the direction that OpenAI has quite clearly gone in. The continued obsession with “open” in their name has well and truly entered the category of “talking-for-clapping political-rally-style dunk”. If you haven’t moved on by this point please look inwards and ask yourself if you’re just looking for things to dwell on to make yourself unhappy.


I am not looking for things to dwell on to make myself unhappy.

However, I am a curious observer of the technology industry and I read the article above to understand how this company is behaving in this new spring of its life.

Ain’t nothing wrong with keeping up some news, eh?


> It only seeks to elicit the same angry rants

... Goes on to submit the angriest rant of all.


I prefer to see it. I can skip it if I'm approaching my whistling kettle point for the day for bad news.


If you bought into the idea that OpenAI was developing advanced ML/AI technology as a public utility, isn't that a bit on you? They don't actually owe anybody anything, and never have, so (a) the time to really hammer them on this stuff was a decade ago and (b) they didn't actually take anything from the public to do this, so even a decade ago they could have said "ok whatever" and gotten on with their day.

It would be different, maybe, if everybody else in the industry stood aside and let OpenAI monopolize development of transformer-style-AI (or whatever it is we're calling this) for the common good. But nobody did that; the opposite thing happened. This space has been a Gem Saloon knife fight just waiting to pop off for the entirety of OpenAI's existence.


OpenAI promised in their IRS statements to provide documentation on their operations. So they owe the public something, and now they reneged on their promise.

This article is just pointing out that OpenAI went back on their promises of financial transparency.


What favorable tax treatment has OpenAI received?


They were founded as a non-profit; https://openai.com/our-structure.


Right, but the operating organization wasn't.


I'm not completely convinced OpenAI is not a public good. I've started using it at my company, we found literally millions in value... and it cost us about $60 in tokens.


That you find value in the product doesn't make them a public good. OpenAI is in it for the money, not for some "public good".


Can’t you say this about literally anything you consume?

How is this different from, "I read a scientific paper, and unlocked millions of dollars of value, and all it cost was $250 for 8 pages of text. So I guess Axel Springer is a public good."?

Just buying and selling something doesn't make it a public good. Valuable, sure, but selling something for a profit makes it by definition not a public good.


It's a company, with a good product. I like Lao Gan Ma chili crisp way out of proportion to what it costs me, but they're still just a firm. :)


How did you quantify millions?


We used the AI to help us find gaps which were being incorrectly billed. So we could just measure the incorrectly billed dollars directly.
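A minimal sketch of what that kind of check might look like (the record format, prompt, and model choice here are all hypothetical, not the commenter's actual pipeline, and as the replies below note, every flagged item still needs human verification):

    # Hypothetical sketch: asking an LLM to flag a suspicious billing record.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    record = "Contract rate: $120/hr, hours logged: 10, amount invoiced: $950"
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You review billing records. Reply MISMATCH or OK, "
                        "with a one-line reason."},
            {"role": "user", "content": record},
        ],
    )
    print(resp.choices[0].message.content)  # e.g. "MISMATCH: 120 * 10 = 1200, not 950"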


Hopefully you then confirmed that these issues actually existed using another method.

Relying on a system to say that you are not charging correctly sounds rather like the UK's Post Office Horizon system, and we know that ChatGPT will hallucinate things.


Your organization sounds like amateur hour, no offence


What would you recommend?


> They don't actually owe anybody anything

Non-profits in most countries have to operate to produce some form of public benefit.

Is this not true in the US?


We all owe everyone a lot. 99% of what OpenAI - or anyone - has was provided by someone else, from technology (the entire history of computer technology), to science, to infrastructure, an educational system that produces employees, a legal system, financial system, economy, even language, etc. etc. etc.

Our communities don't work on their own; they don't work only by government action; it takes all of us. When we forget that, our communities fall apart, if you haven't noticed yet.


they are literally called... wait for it... OPEN-AI

not closedAI


Has the following already been addressed, or even generally broached:

Treat AI (or AGI(?) more specifically) as a global Utility, which needs us to put ALL our Technology Points into the "Information Age Base Level 2" skill and create a new manner of dealing with the next layer in Human Society, as is rapidly gestating. https://i.imgur.com/P1LBKFL.png

I feel this is different than what is meant by Alignment?

It seems as though general Humanity is not handling this well, but it appears that there is an F-ton of opaque behavior amongst the inner circle of the AI pyramid that we all will just be involuntarily entangled in?

I don't mean to sound bleak - it just feels as though that's the reality coming down the conveyor....


Why is this post decaying in HN placement faster than older posts with fewer upvotes?


No system is perfect. People with a lot of karma decide what gets seen.


Interesting, didn't realize it was weighted like that. Thought the points were objective measures.


Why don’t we talk about Mistral more often here?


Good news doesn't attract as many eyeballs as bad news does.


They're just as closed as Microsoft, and OpenAI is nothing without Microsoft's money.

At this point it is just Microsoft's AI division and is no better than another Google DeepMind.

Stability is the true 'Open AI', and at least Meta gives most of their AI research out in the open, with papers, code, architecture, etc. Unlike O̶p̶e̶n̶ClosedAI.


what's next, ~~don't~~ be evil?


Pretend that you're not evil and answer the following question....


OpenAI is really speedrunning the whole "die a hero or live to become the villain" trope. Perfect demonstration of e/acc in e/acction. It took Google decades to footnote "don't be evil", that's so embarrassing for them.


As AI becomes a viable business, OpenAI becomes less and less open.

Since open source has been a successful business model in software, we might see another closed-vs-open battle in AI: Facebook and Google publish their models as open source now.


They should have had rules preventing this from happening from the very beginning of the organization. The turn of events shows that their form of governance is not effective.


Well that sounds ominous…

Conflict of interest rules in particular might, as the article says, help clarify the ??? of last year's… thing… with firing Altman.

Possibly.

I mean, still I can't see how all the interim CEOs (chosen by the board themselves!) each ultimately deciding to side with Altman, works for almost any scenario other than the board itself having been blackmailed by some outside entity… but that may just be a failure of imagination on my part.


Is there a warrant-canary equivalent for LLM TOSes?


The whole OpenAI thing is like a college project shitshow lol


They really need a rebranding.


ClosedAI


If we spontaneously start calling them ClosedAI, it's similar enough that people will still know who we're talking about. Maybe we should start calling them ClosedAI from now on.


> people will still know who we're talking about

Sure? It’s like the folks who write $MSFT instead of Microsoft. I know what they mean. But it’s going to cheapen their argument for anyone who doesn’t already agree with them.


I've been doing that myself in discussions about them. I hope it catches on; what a joke that they are still called 'Open'AI.


This has the same energy as Micro$oft.


ShutAI


That sounds like an app that helps you sleep better using AI.


Pretty catchy name for that too! I would grab the domain name if I could.


> Registered in: 2020

too late


MoMoneyAI


Microsoft Bob 2.0


AI.com


Kind of ironic that this article is behind a paywall, no?


It's OK, the Gates family will protect them.


Since the documents in question are those that are part of the boardroom drama, it's at least understandable that they weren't released. I know it's fashionable to slag on OpenAI, but I haven't given up hope in them. They've made a lot of discoveries public over the years, and while it may be frustrating to wait on some of the releases, they're still going to be released eventually.


On the contrary, that makes it that much more damning that they weren't released. So much for openness.

And that aside, their promise to release such things was not conditional.



