If OpenAI became a non-profit with this in its charter:
“resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable. The corporation is not organized for the private gain of any person"
I don't think it is going to be hard to show that they are doing something very different than what they said they were going to do.
So much of the discussion here is about being a non-profit, but per your quote I think the key is open source. Here we have people investing in an open source company, and the company never opened their source. Rather than open source technology everyone could profit from, they kept everything closed and sold exclusive access. I think it is going to be hard for OpenAI to defend their behavior, and there is a huge amount of damages to be claimed for all the money investors had to spend catching up.
It says "will seek to open source technology for the public benefit when applicable" they have open sourced a number of things, Whisper most notably. Nothing about that is a promise to open source everything and they just need to say it wasn't applicable for ChatGPT or DallE because of safety.
I think that position would be a lot more defensible if they weren't giving another for-profit company access to it. And there is definitely a conflict of interest when not revealing the source gives them a competitive advantage in selling their product. There's also the question: if the source is too dangerous to make public, how can they be sure the final product is safe? An argument could be made that it isn't safe.
It is safer to operate an AI in a centralized service, because if you discover dangerous capabilities you can turn it off or mitigate them.
If you open-weight the model, if dangerous capabilities are later discovered there is no way to put the genie back in the bottle; the weights are out there, anyone can use them.
This of course applies both to mundane harms (e.g. generating deepfake porn of famous people) and to existential risks (e.g. power-seeking behavior).
I don’t think this belief was widespread at all at that time.
Indeed, it’s not widespread even now, lots of folks round here are still confused by “open weight sounds like open source and we like open source”, and Elon is still charging towards fully open models.
(In general I think if you are more worried about a baby machine god owned and aligned by Meta than complete annihilation from unaligned ASI then you’ll prefer open weights no matter the theoretical risk.)
I doubt the safety argument will hold up in court. Anything safe enough to give Microsoft or others access to would be safe enough to release publicly. Our AI overlords are not going to respect an NDA. And for the public safety/disinformation side of things, I think it is safe to say that cat is out of the bag and chasing the horse that has bolted.
If the above statement is the only “commitment” they’ve made to open-source, then that argument won’t need to be made in court. They just need to reference the vague language that basically leaves the door open to do anything they want.
This seems to make a decent argument that these models are potentially not safe. I'd prefer criminals not have access to a PhD-level bomb-making assistant that can explain the process to them like they are 12. While the cat may be out of the bag, you don't just hand out guns to everyone (for free) because a few people misused them.
I think you make a good point. My argument was that Microsoft's security isn't that great, therefore the risk of the model ending up in the hands of the bad actors you mention isn't sufficiently low.
...What OS do you think many of these places use? Linux is still niche af. In a real, tangible way, it may very well be the case that yes, Microsoft does, in fact, run them.
I am unsure. You can't (for example) fine tune over the API. Is anything safe for Microsoft to fine tune really safe for Russia, the CCP, etc. to fine tune? Open weight (which I think is a more accurate term than open source here) models enable both many more actors and many more actions than the status quo.
You can fine tune over the API. Also, Russia and the CCP likely have the model weights. They probably have spies in OpenAI or Microsoft with access to the weights.
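For what it's worth, hosted fine-tuning over the API looks roughly like the sketch below using the openai Python SDK (v1+). The training file name and base model name are placeholders of mine, and the exact parameters available depend on your account, so treat this as illustrative rather than definitive.

```python
# Rough sketch of fine-tuning via OpenAI's hosted API (openai>=1.0).
# "training.jsonl" and the base model name are placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of chat-formatted training examples.
training_file = client.files.create(
    file=open("training.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against a base model you have access to.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```

The point being: you only ever get a hosted, rate-limited handle on the result, never the weights themselves, which is the asymmetry the parent comment is arguing about.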
Interesting thought experiment! How would they best take advantage of the weights, and what signs/actions could we observe that would signal it is likely they have the weights?
They'll train it on Xi Jinping Thought so that the people of China can move on with their lives and use the Xi bot instead of wasting precious man hours actually studying the texts.
The Russians will obviously use it to spread the Kremlin's narratives on the Internet in all languages, including Klingon and Elvish.
It's very hard to argue that when you give 100,000 people access to materials that are inherently worth billions, none of them are stealing those materials. Google has enough leakers to conservative media of all places that you should suspect that at least one Googler is exfiltrating data to China, Russia, or India.
I might be too generous, but my interpretation is that the ground changed so fast that they needed to shift to continue the mission given the new reality. After ChatGPT, every for-profit and its dog is going hard. Talent can either join the lone Mother Teresa in the middle, stupidly open sourcing everything the second it's discovered, or go compete with it. You can't compete with the biggest labs in the world, who have infinite GPUs, using selfless open sourcers running training on their home PCs. And you need to be in the game to have any influence over the eventual direction. I'd still bet the goal is the same, but how it's done has changed by necessity.
> After ChatGPT, every for-profit and its dog is going hard.
After ChatGPT was not released to the public, every for-profit raced to reproduce and improve on it. The decision not to release early and often with a restrictive license helped create that competition for funds and talent. If the company had been truly open, competitors would have had a choice: move quickly, spend less money, and contribute to the common core, or spend more money and go slower, clean-room implementing the open code they couldn't use and trying to compete alone. This might have been a huge win for the open source model, making contributing to the commons the profitable decision.
No idea, I don't know what they stand for. This is logic: what do you do if you're Sam Altman and ChatGPT has blown up like it has, and demands resources just to run the GPUs? What is his next move? It's not business as usual.
The risk is that he's too confident and screws it up. Or continues on the growth path and becomes the person everyone seems to accuse him of being. But I think he's not interested in petty shit, scratching around for a few bucks. Why, when you can (try to) save the world.
Money for resources to run ChatGPT is the tail wagging the dog, though.
If you need money to run the publicly released thing you underpriced to seize market share...
... you could also just, not?
And stick to research and releasing results.
At what point does it stop being "necessary" for OpenAI to do bad things to stay competitive and start being about them just running the standard VC playbook underneath a non-profit umbrella?
Unless the charter leaves room for such a drastic pivot, I'm not sure how well this would hold up. Whether the original charter is binding is up for lawyers to debate, but as written it seems to spell out the mission clearly and with little wiggle room for interpretation. Maybe they could go after the definition of when open sourcing would benefit the public?
Another possibility is that they claim they spent the non-profit funds prior to going for-profit? It would be dubious to claim damages if the entity was effectively bankrupt prior to the for-profit's creation.
Wouldn't that require notifying all interested parties of the nonprofit, since it's effectively killing off the nonprofit and starting a new entity?
The original charter is nothing more than marketing copy. And companies are legally allowed to change their marketing copy over time and are not bound to stick to it in behavior. The marketing was for the investors, and they should be the first to know that such promises are subject to how reality unfolds. In other words, a team can raise money by promising milestones, but they are allowed to pivot the whole business, not just abandon milestones, if the reality of the business demands it.
> huge amount of damages to be claimed for all the money investors had to spend catching up
Huh? There's no secret to building these LLM-based "AI"s - they all use the same "transformer" architecture that was published by Google. You can find step-by-step YouTube tutorials on how to build one yourself if you want to.
All that OpenAI did was build a series of progressively larger transformers, trained on progressively larger training sets, and document how the capabilities expanded as you scaled them up. Anyone paying attention could have done the same at any stage if they wanted to.
The expense of recreating what OpenAI have built isn't in reverse-engineering some architecture they've kept secret. The expense is in obtaining the training data and training the model.
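To the point that the basic recipe is public: here is a minimal sketch of a single GPT-style transformer block in PyTorch. The class name, dimensions, and hyperparameters are my own toy choices for illustration, not anything taken from OpenAI's code.

```python
import torch
import torch.nn as nn

class TinyTransformerBlock(nn.Module):
    # One pre-norm decoder block: causal self-attention followed by an MLP,
    # each with a residual connection, the same basic shape as in
    # "Attention Is All You Need" and the GPT papers.
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        # Causal mask: each token may only attend to earlier tokens.
        t = x.size(1)
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), diagonal=1)
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        x = x + attn_out
        x = x + self.mlp(self.ln2(x))
        return x

# A GPT-style model is essentially an embedding layer, a stack of these
# blocks, and a projection back to the vocabulary. The hard part is the
# data and the compute, not the architecture.
```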
"The specific purpose of this corporation is to provide funding for research,
development and distribution of technology related to artificial intelligence. The resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable."
Based on this, it would be extremely hard to show that they are doing something very different from what they said they were going to do, namely, fund the research and development of AI technology. They state that the technology developed will benefit the public, not that it will belong to the public, except "when applicable."
It's not illegal for a non-profit to have a for-profit subsidiary earning income; many non-profits earn a substantial portion of their annual revenue from for-profit activities. The for-profit subsidiary/activity is subject to income tax. That income then goes to the non-profit parent and can be used to fund the non-profit mission...which it appears they are doing. It would only be a private benefit issue if the directors or employees of the non-profit were to receive an "excess benefit" from the non-profit (generally meaning salary and benefits or other remuneration in excess of what is appropriate based on the market).
Does it become applicable to open source when "The resulting technology will benefit the public"?
That seems the clearest read.
If so, OpenAI would have to argue that open sourcing their models wouldn't benefit the public, which... seems difficult.
They'd essentially have to argue that the public paying OpenAI to use an OpenAI-controlled model is more beneficial.
Or does "when applicable" mean something else? And if so, what? There aren't any other sentences around that indicate what else. And it's hard to argue that their models technically can't be open sourced, given other open source models.
The "when applicable" is like the preamble to the Constitution. It may be useful for interpreting the rest of the Articles of Incorporation but does not itself have any legal value.
After all, the AOI doesn't specify who determines "when applicable," or how "when applicable" is determined, or even when "when applicable" is determined. Without any of those, "when applicable" is a functionally meaningless phrase, intended to mollify unsavvy investors like Musk without constraining or binding the entity in any way.
> If so, OpenAI would have to argue that open sourcing their models wouldn't benefit the public, which... seems difficult.
No, they don't have to do anything at all, since they get to decide when "when applicable" applies. And how. And to what...
Or does "when applicable" mean something else? And if so, what? There aren't any other sentences around that indicate what else. And it's hard to argue that their models technically can't be open sourced, given other open source models.
Exactly. That's the problem. There needs to be more to make "when applicable" mean something, and the lawyers drafting the agreement deliberately left that out because it's not intended to mean anything.
Eh. Given their recent behaviour, they seem to be indistinguishable from a for-profit company with trade secret technology.
That doesn’t seem aligned with their articles of incorporation at all. If “when applicable” is wide enough to drive a profit-maximising bus through, they’re not a not-for-profit. And in that case, why bother with the AOI?
The articles of incorporation aren’t a contract. I don’t know enough law to be able to guess how it’ll be interpreted in court, but intuitively Elon seems to have a point. If you want to take the AOI seriously, Sam Altman’s OpenAI doesn’t pass the pub test.
The for profit entity is allowed to act in the interest of profits.
What is important is that the non-profit must use the dividends it receives from the for-profit entity in furtherance of its stated non-profit mission.
Elon does not have a point. He's simply proving that he is once again the dumbest guy in the room by failing to do basic due diligence with respect to his multi million dollar donation.
That being said, Altman is also doing sketchy things with OpenAI. But that was part of the reason why they created the for-profit entity: so Altman could do sketchy things that he could not do within the nonprofit entity. Regulators might be able to crack down on some of the sketch, but he's going to be able to get away with a lot of it.
If that's the interpretation, it's completely open-ended and OpenAI has full rights to move the goalposts for as long as they wish by redefining "done".
Technologies are never "done" unless and until they are abandoned. Would it be reasonable for OpenAI to only open source once the product is "done" because it is obsolete or failed to meet performance metrics?
And is that open sourcing of the training algorithm, the interpretation engine, or the produced data model?
In case anyone is confused I am referring to 126, 132 and 135. Not 127.
"126. As a direct and proximate result of Defendants breaches, Plaintiff has suffered damages in an amount that is presently unknown, but that substantially exceeds this Courts jurisdictional minimum of $35,000, and, if necessary, will be proven at trial.
127. Plaintiff also seeks and is entitled to specific performance of Defendants contractual obligations.
132. Injustice can only be avoided through the enforcement of Defendants repeated promises. If specific enforcement is not awarded, then Defendants must at minimum make restitution in an amount equal to Plaintiffs contributions that have been misappropriated and by the amount that the intended third-party beneficiaries of the Founding Agreement have been damaged [how??], which is an amount presently unknown, and if necessary, will be proven at trial, but that substantially exceeds this Courts jurisdictional minimum of $35,000.
135. As a direct and proximate result of Defendants breaches of fiduciary duty, Plaintiff and the express intended third-party beneficiaries of the Founding Agreement have suffered damages in an amount that is presently unknown, but substantially exceeds this Courts jurisdictional minimum of $35,000, and if necessary, will be proven at trial."
The end result of this suit, if it is not dismissed, may be nothing more than OpenAI settling with the plaintiffs for an amount equal to the plaintiffs' investments.
According to this complaint, we are supposed to be third-party beneficiaries to the founding agreement. But who actually believes we would be compensated in any settlement. Based on these claims, the plaintiffs clearly want their money back. Of course they are willing to claim "the public" as TPBs to get their refund. Meanwhile, in real life, their concern for "the public" is dubious.
Perhaps the outcome of the SEC investigation into Altman's misrepresentations to investors, if any, may be helpful to these plaintiffs.
OpenAI, even the name was his suggestion from what I remember reading, wouldn't exist without him. Other investors may not have invested either without his money essentially vouching for the organization, and its primary AI developer likely wouldn't have joined OpenAI either if it wasn't for him; I believe that's the one who recently announced they're leaving OpenAI, and I'd speculate they're joining Elon's new AI effort.
Hard to say. Certainly his name would have been a draw, but you've also got Altman, head of YC as a founder, not to mention his buddy Peter Thiel.
Musk's influence in attracting/retaining talent is rather a mixed bag given that he poached Karpathy for Tesla around the same time he left.
I think the person you're thinking of who Musk helped recruit for OpenAI is Ilya Sutskever. The person who just left, after a second brief stint at OpenAI, is Karpathy, who for the time being seems content going back to his roots as an educator.
> The end result of this suit, if it is not dismissed, may be nothing more than OpenAI settling with the plaintiffs for an amount equal to the plaintiffs' investments.
Musk's money kept the lights on during a time when OpenAI didn't do much more than get a computer to play Dota. If he wants the proceeds of what his money bought, then they should write him a check for $0, or ship him a garbage can full of the taco wrappers eaten by the developers during that time period.
no one said the values would no longer matter - they just wouldn’t be furthered by said assets.
you might think that that also suggests that the values no longer matter, but that would be to say that the only way to prove that something matters is with money or money equivalents. to “put your money where your mouth is,” if you will.
Going to the IRS and saying, "This is how we plan to benefit humanity and because of that, we shouldn't have to pay income tax." and then coming back later and saying, "We decided to do the opposite of what we said." is likely to create some problems.
Right, and when they decide to do the opposite they lose the tax benefit, I'm not really sure there's an argument that says they can't change their designation.
It matters though because they didn't change their designation before acting differently, which would make them liable. Not sure to whom they'd be liable though, other than the IRS.
True. Non profits exist, and they pay their leaders very well, and some that are probably corrupt provide very little benefit "for the greater good" or whatever the requirements are for non profit status.
It goes back to 1886 [1]. Ditching corporate personhood just makes the law convoluted for no gain. (Oh, you forgot to say corporations in your murder or fraud statute? Oh no!)
It gives rights, not obligations, which is what makes Citizens United so abhorrent. It's a dark money vehicle, and worse - foreign dark money. Just on the face of it, it's ridiculous, but alas, laws are made by the ultra rich.
A corporation has the right to "speech" but if crimes are committed, rest assured it will not go to jail, and neither will its executives, protected by layers of legal indirection of this "person corporation".
Musk didn't pay for everything. He took his money and left, upset that OpenAI wouldn't let him run it. It was precisely because Musk stopped funding them that OpenAI were forced to seek outside investors and change their corporate structure to be able to offer them a return on the investment.
Obviously this was originally presented as a non-profit, so not a normal startup by any means, but certainly it is normal for startups to "pivot" direction early on and end up doing something completely different than what they initially said. I'm not sure at what point this might upset investors, but I believe the idea is that they are usually investing in the team as much as the idea.
The idea of corporations as legal persons predates the United States. English law recognised trade guilds and religious orders as legal persons as early as the 14th century. There is nothing specifically American about the idea at all; the US inherited it from English law, as did all other common law countries, and English law didn't invent it either. Similar concepts existed in mediaeval Catholic canon law (religious orders as legal persons) and even in Ancient Roman law (which granted legal personhood to pre-Christian priestly colleges).
Yep - the very existence of a widespread concern that open sourcing would be counter to AI safety, and thus not "for the public benefit," would likely make it very hard to find OpenAI in violation of that commitment. (Not a lawyer, not legal advice.)
IANAL but I don't think a court case hinges whether OpenAI is actually open; neither open-source nor closed-source are directly required to fulfill the charter. I think it would be about the extent to which the for-profit's actions and strategy have contradicted the non-profit's goals.
Yeah, but has that community grown because of OpenAI, or in spite of it?
IMO the only real involvement OpenAI has had in that movement is suddenly getting REAL hand-wringy in front of Congress about how dangerous AI is the moment OpenAI no longer held the only set of keys to the kingdom.
Unfortunately you can also easily show that they ARE doing these things too.
Open source. Check - they have open source software available.
Private gain of any person. Check. (Not hard to see it's a non-profit; people making private money from a non-profit are obviously excluded.) Now to me, personally, I think all non-profits are for-profit enterprises. The "mission" in nearly all cases isn't for the "people it serves". I've seen so many "help the elders" and "help the migrants" organizations, but the reality is, money always flows up, not to the people in need.
I don't expect a case against OpenAI to be given the leeway to bring into question the entire notion of a nonprofit. There are long standing laws (and case law) for nonprofit entities, it won't all get thrown out here.
Not that I'm aware of, though its definitely not my area.
I can't think of another example of a nonprofit that was so financially viable that it converted to for-profit though, usually a nonprofit just closes down.
OpenAI being a nonprofit is like Anthony Levandowski's "Way of the Future" being a 501(c)(3) religious nonprofit. All of which is lifted from Stranger in a Strange Land and L. Ron Hubbard's Scientology.
(It wouldn't be the first time someone made a nerd-cult: Aum Shinrikyo was full of physics grad students and had special mind-reading hats. Though that was unironically a cult. Whereas the others were started explicitly as grifts.)
From a distance that looked ok to me at first. It looked like some bloggers writing essays about what is good, with the attitude of a policy wonk. But clearly it's also provided the moral rhetoric behind such bizarre groups as SBF's Adderall-fueled crypto-grift polycule, so I wouldn't be surprised if there are others I don't know about. Maybe you have more examples.
If Musk's tens of millions in donations were in reliance on the charter and on statements made by sama, Brockman, etc., there's probably a standing argument there. Musk is very different than you or I -- he's a co-founder of the company and was very involved in its early work. I wouldn't guess that standing would be the issue they'd have trouble with (though I haven't read the complaint).
I have no idea if he has any standing or not, but from your reasoning it doesn't follow that he doesn't. If I put a box with a sign on a street "Donate to such and such charity to save dolphins", and you give me money only to later find out that I have nothing to do with that charity and your money will be spent on my new car, I scammed you, plain and simple, and you can sue me. Was this sign a contract with you? No. Do I become a stakeholder when I donate my money to charity? Obviously not. But it's a scam nevertheless. In fact, you don't even have to be a victim to start litigation, but you can claim compensation if you were.
So, once again, I have absolutely zero idea whether OpenAI can be held accountable for not following their charter, but if they can, anyone can raise a complaint, and since Musk did give them money to save dolphins or whatever, he may actually be considered the victim.
Here it's probably closer to you hanging a "give me money to help me find ways to save the dolphins and I promise I'll write a report on it" sign, someone gives you 10k but they're back a month later to sue you because you're eating pizza with the money while watching Free Willy.
There's a moral argument perhaps...but from a layman's perspective it's a really dumb case. Now, dumb cases sometimes win, so who knows.
If you make promises to someone in order to get them to give you money, depending on the circumstances, that can (but does not always) create a contractual relationship, even if the promises themselves or the document they're in don't normally constitute a contract in themselves. Proving the implied terms of the contract can be difficult, but as long as the court believes there may have been such a contract created, we've moved from a question of standing to questions of fact.
I've skimmed the complaint now. There seems to be prima facie evidence of a contract there (though we'll see if the response suggests a lot of context was omitted). I find the Promissory Estoppel COA even more compelling, though. Breach of Fiduciary Duty seems like a stretch using "the public" as a beneficiary class. This isn't really my area, but I'll be mildly surprised if that one doesn't get tossed. Don't know enough about the Unfair Business Practices or CA Accounting requirements to have any opinion whatsoever on those. The Prayer for Relief is wild, but they often are.
Not familiar with the US legal system at all, but in my country (France) a contract doesn't need to be signed or even on paper to be a contract. Saying “in exchange for your donation I'll abide by the charter” in front of witnesses is a contract under certain circumstances, so maybe there's something like this involved.
This ruling is fairly specific to its facts, and is about a particular cause of action (financial mismanagement). While donors don't have standing for that cause of action by statute, it appears they do for breach of fiduciary duty: Cal. Bus. & Prof. Code § 17510.8. And that's the only CoA where they're relying on CA nonprofit law.
But you can still sue them for not doing their legally required duty, the law is still above the board members. A non-profit that doesn't follow its charter can be sued for it.
I don't think there is such a thing. Once you co-found something, you are forever a co-founder of it. (Unless you have a time machine. Lacking such, nobody has ever un-founded anything, have they?)
Companies pivot all the time. You have to do more than show they are doing something different from what they originally said they would do if you want to win an investor lawsuit.
Taking into account that the reported reason Elon Musk departed from the project is that he wanted OpenAI to merge with Tesla, with him taking complete control of the project, this lawsuit smells of hypocrisy.
But that was to be expected from the guy who forced his employees to go work with Covid and then claimed danger of Covid infection to avoid showing up at a Twitter acquisition deposition...
You are correct that I did not address the merits of the lawsuit. The reason is that I don't care about them, and also because a lawsuit means legal discovery, something OpenAI and Microsoft can't afford. The lawsuit can have merit but will be settled out of court... Mark my comment... :-)
Billionaires' motives, their weird obsession with saving the world, and the damaged psyches that drive a never-ending need for absurd accumulation of wealth, have a direct impact on my daily life, and are therefore more interesting.
Can you cite the specific line and source claiming the "reported reason Elon Musk departed from the project"? Feels taken out of context from what I remember reading before.
Not sure I'd trust Washington Post to present a story accurately - whether the termination notices were relevant to the premise presented.
Did he attend the Twitter deposition via video? Seems like a hit piece.
"...And Musk proposed a possible solution: He would take control of OpenAI and run it himself.
Altman and OpenAI’s other founders rejected Musk’s proposal. Musk, in turn, walked away from the company — and reneged on a massive planned donation. The fallout from that conflict, culminating in the announcement of Musk’s departure on Feb 20, 2018..."
So you removed the context that the solution - to them betraying/violating the non-profit agreement - was for him to take it over and realign it back to the principles intended at OpenAI's formation/conception?
No I did not, but in your reply you removed the context that he argued they were lagging behind Google, not that they were not working enough for humanity...
Now why are you being obtuse and ignoring another real reason: Elon Musk was poaching people from OpenAI, and a conflict of interest was argued.
Of all HN's favorite memes, the two strongest that need to evaporate are that Elon Musk wants to save humanity, and that Sam Altman does not care about money...
Ever read The Selfish Gene? I do believe Elon wants to help save humanity, and at the same time his motives can be partly selfish. He quite clearly knows that if freedom of speech is lost then the totalitarians have won, and he'll lose his freedom as well - and he'd then be "forced" to aid the totalitarians in whatever they do, even knowing they're genocidal, etc; I suspect he'd refuse to work in such conditions, but who knows what blackmail - like threats of torturing or killing his children, etc - could be tried against him.
And yeah, Sam cares about money and some other things, it seems.