Hacker News
The Contradictions of Sam Altman (wsj.com)
338 points by mfiguiere on April 1, 2023 | 541 comments




This is a very fluid and chaotic situation, so I'd be more concerned if he said one thing and stuck to it.

When the Facts Change, I Change My Mind. What Do You Do, Sir? - John Maynard Keynes

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall - Ralph Waldo Emerson

Do I contradict myself? / Very well then, I contradict myself. / (I am large, I contain multitudes) - Walt Whitman's “Song of Myself”


That's all well and good, but it's not well and good to change what you say because the calculus of what will personally benefit you has changed. It's not the same as "new information coming to light"; it is instead greed.

We've already seen a charity robbed to no end other than Profit. Commercializing OpenAI was for one reason: to enrich the people commercializing it. Not to benefit society, and with nary a thought given to the consequences.

Control of AI serves one purpose, in Altman's mind, I am convinced: to prevent other people from eating the chickens he counted too early.

If ten million people lose their jobs to ChatGPT, it's fine as long as Altman & Co grow richer, but if those same people try to democratize AI and make his time and investment valueless in the face of a commoditized category of software, all of a sudden he has a problem and those people should be constrained by the government -- but Altman should not?

This won't be the first time greed failed.


I am astonished that anyone could just make a statement like this here. It's so sticky, so damning and brash. It offers absolutely no proof while contradicting what sama himself says.

And we are entirely fine with that. Top of the comments after 7 hours.

Note, I am not claiming it could not be true. That is entirely beside my point. The issue is that it could be entirely untrue. And I really mean every single piece of information in this comment could be off. And we are comfortable with that. We are burning witches again.

If this is the level of our engagement, in this relatively good and on-topic forum, in times of change and disagreement, I do truly wonder if there is any hope for us.


I can think of exactly zero reasons to have engineered the privatization of a charity other than greed.

The balance of probabilities indicates to me that the attempt at regulatory capture is also based in greed, but if you have a different view, I'd love to hear your logic.

After all, if AI proliferation were as dangerous as Altman seems to claim, then why is the answer "OpenAI should continue to provide unfettered global public access to quasi-strong AI, but the government should slow other people down"?

If billions of dollars are involved, your first suspicion shouldn't be altruism.

Edit: OK perhaps a valid reason for a private subsidiary is to provide the charity with the income it needs to operate if it has no endowment (OpenAI does), but given all the other points I raised, I don't think that's what's going on here.


> I can think of exactly zero reasons to have engineered the privatization of a charity other than greed.

The question has recently been asked by Lex Fridman and answered by Sam Altman. This is the clip: https://youtu.be/qQdqFZFZQ6o

To quote: "We learned early on that we're gonna need far more capital than we were able to raise as a non-profit" (you can listen to him expand on it over three minutes).

Now, since I can almost feel the goal-post-shifting coming, this seems like a good time to align: You claimed you could see no reason. Their reason is: They had a model. Then they figured it was wrong. They adjusted their model.

This is not an insane or unreasonable process. This is how what we know gets adjusted all the time. We agree with it in principle.

Now, of course, there are a lot of possible, unfavorable interpretations, even if we can agree up to here. A couple:

a) Despite the process making sense, that's not what happened. Sam might be lying or deceiving. He may be leaving things unsaid, or it could all be a long-planned con, etc. All of this might be entirely possible, and I did not do the work to disprove this option. BUT: as a general rule, we always agree that the burden of proof is on the accuser, in court and in civil discourse. Walking up to people and saying "You stole!", them being irritated, and you saying "Well! Prove me wrong!" is just not how we do things. I am aware that in practice we often are willing to apply different rules to the very poor, the very rich, or people we just dislike.

b) Sam is incompetent or delusional. Someone else could have made it work without going the route they did. Maybe it would have worked, and he is just... bad at math? Certainly a possibility. Similar to the above, if you want to make this claim without discrediting yourself: Show your work. Reason us through it.

c) This move was illegal. I am not a lawyer, but since I wouldn't rate this thought as particularly groundbreaking, I am okay with assuming someone raised the concern during the transition, lawyers were consulted, and it was a legal option.

d) It's simply immoral to go on; at best it's the exploitation of a loophole, at worst the greatest betrayal of humankind. If "open" cannot mean "open source" anymore, then rather than making this transition, letting the company fail was the right move. This is an entirely understandable position, and I can empathize with the feelings it triggers. It requires no evidence, and it requires no claims about greed. It's an opinion piece. Fair enough.


I watched the video.

I'll preface this by saying I am neither judge nor prosecutor, and we are not at trial.

Where we are is in a position of considering, before it is too late, the honesty and motives of a man who has taken a charity private because they "needed money".

Needed money for what, exactly? Well, according to https://openai.com/about, OpenAI's mission is "to ensure that artificial general intelligence benefits all of humanity."

So how does this newfound capital help that mission? By hiding the details of GPT-4, so that no one outside of OpenAI can benefit. I don't even know how many parameters the damn thing has or what kind of growth trajectory we are on, and neither does anyone outside of OpenAI.

Here's a man who stands to control a large part of what resembles AGI, financially benefit to an almost indescribable degree, and control, arguably, a large part of the direction of society (given the number of jobs this will cost), especially if the regulators limit access by Mere Mortals. They won't even tell us the basics of their new models (and newfound powers). So much for "Open" and "benefiting all of humanity".

If someone stands to gain this kind of money and power, and does things that on their face seem dishonest and inimical to society at large, and to the goals of the very charity he privatized, should he be immune to questioning and suspicion?

I am suspicious. I think his actions merit suspicion. I think you all should be suspicious too, based on the events so far.


> So how does this newfound capital help that mission?

Again, as soon as we jump to the "so...", we are already past the point that I was taking offense with. I myself listed a few options from where to take this and you added a few more. Fair enough and entirely fine with me.

This is the point:

> Commercializing OpenAI was for one reason: to enrich the people commercializing it. Not to benefit society, and with nary a thought given to the consequences.

Making bad-faith statements about someone without even rebutting the stated intent of the accused, or even putting in the effort to learn about that intent, is just so incredibly lame and bad style.

Imagine saying this to someone's face: "You made this business for one reason alone: to enrich yourself personally; you absolutely do not care about benefiting society, and you don't give a fuck about the damage it might do." Would you really think it okay to advance this to the accused before even bothering to find or hear an explanation regarding the issue you are taking such strong offense at? Just an inkling of what their explanation is, without you assuming it for them, as a matter of human decency?

If we cut out the option for people to explain themselves, or to reason on the basis of their explanation, what are we doing other than advancing populism? I really want to believe this place cannot be that.

(I want to add that I have zero allegiance to, or interesting opinions about, sama. In the past I found some of his takes interesting, but for me that's just normal when I listen to a decently smart person for the first time, even if I end up disagreeing with most of their values.)


My skepticism is not bad faith, it's honest inquiry based on the facts available to me, which I have enumerated for you.

The hiding of scientific data from a charity has VERY few kind explanations and a lot of terrible ones. The balance of probabilities points towards Sam Altman acting maliciously and not in the public interest, which might be "okay" if he hadn't robbed a charity to do so.

I am still open to contrary viewpoints, but you're not presenting any, you are only saying I am acting in bad faith for what I (and others here) consider to be merited and valid skepticism based on recent events and statements from Altman himself.

Can you provide any charitable interpretation of the hiding of scientific data from a charity whose mission is to ensure that artificial general intelligence benefits all of humanity?


You're in the clear. Like another commenter in the same thread mentioned, you've responded in good faith to a semi-paranoid question. In my uncharitable view, the GP seems to both have a bias for Sam and a weird anxiety over moral questions.


Chill out a little bit. The commenter made it clear that it's their opinion. In fact, they seem to have gone out of their way to suggest it's their opinion - despite the rules here requiring the assumption of good faith (a rule you're breaking, in my opinion).

That it happens to also be a popular opinion doesn't indicate that people are burning witches - it indicates that people are increasingly finding it easier to explain the world in terms of greed and cynicism; because, frankly, that is the world we live in (ahem, in my opinion).

That you find this to be "witch-burning" indicates to me that you must really relate to the ultra-wealthy, which honestly just sounds tone-deaf. Neither Sam Altman nor Jeff Bezos will "go down" because of a comment on the internet. Do you know who _doesn't_ have that guarantee? Normal people.


Seriously, we're talking about a company that Microsoft invested 10 billion dollars into; this is very powerful technology that will affect our lives. They didn't invest all that money because it will make the world a better place; they did so because they see the opportunity to make a LOT of money and have the ability to steer where this technology goes while also reducing transparency. Why shouldn't we view that with suspicion? You don't have to roll over just because there's a lot of money involved or because some millionaire/billionaire might get treated "unfairly".


"We are burning witches again."

Excuse me, but what witch is being burned for the wrong reason here?

This debate revolves around Sam Altman, and how what he and OpenAI were doing was open - and now it is not anymore, despite contrary initial statements. That is a legit reason for anger, no?


Burning witches? Call me a radical but I can't help but notice the name of the company is OpenAI, but it's not open at all!


Tbh the only thing I find irritating about the article is the dumb story about him being 8 and already realizing his future as the leader of an AI company.


The discourse about lost jobs is necessary but a little odd: leveraging VC money and software-driven automation to take over sectors ripe for the picking is the startup model in a box.


I think the point is that society, our economies, and particularly people in places of power are not preparing for a world in which automation, AI, and robots rapidly advance to the point where jobs will no longer be needed in many cases, and where working may one day be a choice rather than a necessity for survival. It would be better if we prepared for that instead of worshipping jobs as if the point of life is to have a job. The USA is particularly unprepared for this, despite having plenty of wealth to do better. Imagine a workforce half of what we have today. Then figure out a way to have an even better standard of living for everyone. That's the mission, and it could be possible, unless we measure the success of leadership solely by jobs created. People freed from the burden of work would be a much better measure.


To anyone interested in this topic I highly recommend listening to this lecture John Cage gave at Stanford University in 1992.

John Cage - Overpopulation and Art

https://www.youtube.com/watch?v=WzPneYqBLAI


Machines can do the work, so that people have time to think.

https://youtu.be/G3V2n9QtpfE


What indications are there that the US will start preparing for this anytime soon? We’ve had decades to share the cost savings from outsourcing to China and failed to do so.

On the other hand, unemploying a large enough segment of society at once might finally force some change in the system, rather than letting it continue limping on in a slow burn where 1-2% of jobs get automated a year.


>On the other hand, unemploying a large enough segment of society at once might finally force some change in the system, rather than letting it continue limping on in a slow burn

Proof of employment required to vote.


This shows how little you understand the actual source of power in the world. The vote is a way to keep that source from changing governments, not the actual source.

This mindset is prevalent today, as if people are incapable of changing government without its permission.


Thanks brilliant person on HN for correcting me! I see now that repressive measures are never passed in order to deal with what people in power consider destabilizing or problematic issues.


And breed.


Altman is v2.0 of Zuckerberg, with 1000X more parameters.

I'm worried about v3.0


As an enterprise, they have absolutely no moat. Google, FB, or anyone else can come up with a larger system in two weeks' time, and they may well open-source it.


that was my intuition as well, but it seems pretty obvious by now that the tech is a little harder to reproduce than that.

otherwise wouldn't they have already done it by now?


Whenever someone is accused of hypocrisy on the internet, Oscar Wilde's corpse must be exhumed and put on the stand for the defense. I must have seen these exact lines on HN a dozen times over the last few years. It's as though this is a subject that people cannot reason about in the normal fashion and must fall back on cliches and appeals to authority (meanwhile almost the entire western canon is against this sort of character failing).

So here's a few quotes that go in the reverse direction:

"That which he sought he despises; what he lately lost, he seeks again. He fluctuates, and is inconsistent in the whole order of life."—Horace, Ep., i. I, 98.

Many of the Greeks, says Cicero,—[Cicero, Tusc. Quaes., ii. 27.]— cannot endure the sight of an enemy, and yet are courageous in sickness; the Cimbrians and Celtiberians quite contrary; "Nothing can be regular that does not proceed from a fixed ground of reason."—Idem, ibid., c. 26.

"Esteem it a great thing always to act as one and the same man."—Seneca, Ep., 150.

“If a man knows not to which port he sails, no wind is favorable.”—Seneca, Ep., 71.


This is interesting, but can you explain the Oscar Wilde reference? I didn't get that part.


"Either this wallpaper goes, or I do.”

― Oscar Wilde


"Did I stutter?" - Stanley Hudson


> When the Facts Change, I Change My Mind. What Do You Do, Sir? - John Maynard Keynes

I sure would love to hear Mr. Keynes acknowledge defeat. If he were alive today, I seriously doubt we (Marxists like myself) would have the pleasure.


Keynes is probably the most influential economist of all time. I disagree with some of his ideas (for likely different reasons than you do), but I can't see any basis for him "admitting defeat" - he's many things, but defeated is not one of them.


Karl Marx is not only the most influential economist of all time; he is many orders of magnitude more influential than Keynes. Only an American could be clueless enough to think this is up for discussion.


We don’t need this kind of garbage here.


They're not wrong though. Marx is so influential even the people who abhor him have adopted his terminology (where do you think the word "capitalism" comes from?). Meanwhile, Keynes' ideas are pretty much forgotten, only coming up in fringe conversations about CBDCs.


Um, no. Keynesian economics is still predominantly what is taught as "economics", and is a huge influence on how governments make economic decisions.

It's a bit like Freud - people only remember him for his kookier ideas, but his basic thinking has permeated all of psychology.


Facts?


Who are 'we'?


I'm not an American. I'm also quite a fan of Marx (or, at least, his analysis of the inherent flaws of capitalism). But Keynesian economics essentially runs the world. Marxism... does not.


Can you name a country with a successful implementation of communism? Successful for me means the people are happy, and implementation means they actually stuck to the ideals of communism and haven't let capitalism slide in.

Here's a happiness map of the world.

https://www.mappr.co/thematic-maps/world-happiness/


Stop asking the market to look after the collective interest. This is the job of the government. The ultimate effect of virtue-wanking CEOs and corporate governance is to deceive people into thinking democracy is something that can be achieved by for-profit organizations, and that they can forsake the formal binding of collective interest through law.

It's nice if people are nice but it is not a bulwark of the collective good. It is a temporary social convenience. The higher that niceness exists in the social order, the greater its contemporary benefit but also, the more it masks the vulnerability of that social benefit.

It matters in a concrete way whether Sam Altman is Gandhi or Genghis Khan, but you, as a citizen, must act as if it doesn't matter.

If AI poses a danger to the social good, no amount of good guy CEOs will protect us. The only thing that will is organization and direct action.


It's not the job of the government, it's the job of citizens. Citizens MAY outsource parts of this to governments, but we primarily need labor and the working class to have the majority of economic power.

https://libcom.org/library/workers-control-wage-system-ideas...

I'm dedicating my life to building for-profit commercial cooperatives based on this simple principle of voluntary association.


> It's not the job of the government, it's the job of citizens

If only there was a system of government designed from first principles to reflect the will of the people...

This is just libertarian tautology. It's not wrong, but any "citizen" organization you can imagine to implement whatever policy it is you want[1] is going to be isomorphic to a democratic government.

[1] You skipped this part!


The government looks after its own interests. That is what an institution does. A government isn't different from a corporation or any other organization in this way.

Delegating powers to a government for the common good does not change any of this. The people still have the job of ensuring the government carries that out properly because its priority is and always will be foremost itself, so the people have to ensure the government's interests coincide with their own. A government is not just some will-of-the-people-reflecting automaton that you can put in place then wash your hands of.


So does the private sector. At least the public has some say in governance. Who wins?


The private sector's interest aligns with the market, at least on the average, as long as they are not allowed to illegally obtain a monopoly.

The market, then, is assumed to be rational on average, and thus all participants in the market would make decisions in their own best interest, and thus obtain the best outcome for themselves that is possible. Those who do not choose optimally are then selected out by Darwinian economic competition.

The gov't has a role to play, but it does not follow that _more_ gov't regulation makes for a better society.


This may be true in theory, but we are seeing a rather different development in the West: 1) power seems to have a tendency to concentrate, which ultimately leads to corporate oligarchy; 2) most markets have high barriers to entry, which breaks the alignment of market and interests; 3) the concept of rational markets is not as clear-cut as it is often perceived, and most studies have failed to incorporate findings related to bounded rationality.

However, the government's incentive is to stay in power and not let that happen. The tool that government has is regulation.


This is because the private sector can't (for the most part) coerce you to deal with them. If the government can keep a lid on external costs, the private sector has to create value people want or they lose money and go away.

The government can force you to give them your money and obey their laws. This is what makes them uniquely powerful and dangerous. In a democracy they do have to get your vote once in a while, but that's within a framework they mostly create and police, with extremely high barriers to entry and very little reward for doing "the right thing".

It baffles and saddens me that otherwise thinking people can be so eager to hand power and authority to the government. I understand that -- after a lot of thought and reading and with much reluctance -- one could conclude that the government should be given a particular power because the alternatives are worse.


I think your last sentence hits the nail on the head.

We have to remember that "we" have selected the best of bad options. This system is not good and will get worse over time. People have to remember that we must not let power accumulate to specific groups, be it in the private or the public sector. Unfortunately the tendency seems to be towards the opposite of what is "good" for average individuals. If the government has too much power, it cannot be held accountable, and its interests have become misaligned with ours.


>The private sector's interest aligns with the market, at least on the average, as long as they are not allowed to illegally obtain a monopoly

This is the basic capitalist lie. Do you think Norfolk Southern's interests align with the "market" of East Palestine, Ohio? On what planet do Amazon drivers have the same incentive alignment as Amazon shareholders? No, the argument is "tough shit," go find some place that does; it's a "free market," just move your entire life and abandon your community to follow around making money for a private dictator.

Unfortunately the entire profit-or-bust ethos isn't constrained to one company - any company that has profit as its primary driver, aka any public company or VC-backed company, MUST pursue profits over all things OR IT WILL BE KILLED BY ITS OWN INVESTORS. Further, now it's every individual that is required to pursue profit at all costs or go broke and die without healthcare, because the "market" of healthcare isn't aligned with you, the customer.

This lie is so pervasive that the entire field of Economics ASSUMES capitalism as the only possible economic system today (Economics was my field of study!) and anything other than it is "heterodox."

So no, you're simply describing the devastatingly broken system as though it's the only suitable option.

In fact, all of the economists who originally wrote about the "market" described precisely how it would be corrupted into what it is today.

Go read Veblen from 1899 (The Theory of the Leisure Class), then read chapter three of The Wealth of Nations, and then come talk with me.


This assumption of a rational market has been fucking up people's lives for a long time now. What's it gonna take to put that conceit to bed?


The assumption of a benevolent government of experts and moral leaders making decisions for the greater good has killed literally millions of people and stolen trillions of dollars in war since WWII, and that's just counting western liberal democracies.

If you don't implicitly trust "the market", and you shouldn't, then you can't trust the government either. What's it going to take to convince you that handing over more of your rights and money won't enable the government to solve problems it previously could not?

"The government should fix that" is no less ridiculous than "the free market could fix that". Which is to say it can occasionally be true with a lot of caveats, and often wrong.


It's super interesting that you immediately started railing against the government, which I didn't even mention. I'm critical of currently-popular economic dogma, and I don't understand why that should imply that I'm some kind of authoritarian socialist.


It has lifted the largest number of people out of poverty in the history of the world.


That's a nice bumper sticker, but managed markets and other non-laissez-faire economies, like the ones in India and China, could make a similar claim, even though I wouldn't want to live under them.


So does the private sector what? Did you only read and respond to the first sentence of my comment? I can't figure out what you're responding to.


Corporations with regulatory capture


One of the problems that people forget about when designing their ideal political system is that the principal-agent problem is everywhere.

When it comes to keeping your country (or region) livable, nothing beats personal engagement and a little vigilance.


There are more than two ways to organize human labor.

This false and defeatist dichotomy between government dictatorships and private dictatorships lays bare how truly depraved most thinking is around what constitutes human flourishing and liberty.

Hence the link I gave you.


> It's not wrong, but any "citizen" organization you can imagine to implement whatever policy it is you want[1] is going to be isomorphic to a democratic government.

No. Government is a monopoly usurped by violence.


Why not both?

I agree government is useful and needed sometimes. But laws are slow, blunt instruments. Governments can’t micromanage every decision companies make. And if they tried, they would hobble the companies involved.

The government moves slowly. When AGI is invented (and I’m increasingly convinced it’ll happen in the next decade or two), what comes next will not be decided by a creaky federal government full of old people who don’t understand the technology. The immediate implications will be decided by Sam Altman and his team. I hope on behalf of us all that they’re up to the challenge.


> And if they tried, they would hobble the companies involved.

Well, yeah, that's the point of regulation: to limit corporate behavior. There are plenty of other highly-regulated industries in the US; why shouldn't AI be one of them?


The emergent behavior from this, however, is not that we get healthy competition; it's that the big guys have the money, connections, and understanding of the process to still do everything they want, while the little guys can't get off the ground and compete.

This is a necessary evil in the case of an industry like aviation where there's a massive and immediate risk to human life if you get it wrong.

If you see AGI as life-threatening (in a physical sense), it's an understandable stance to take. However, if AGI is threatening more in a social sense, I don't think concentrating the power behind it is going to be a societal good.

I genuinely believe we as a society will get fucked by corporations for everything we can give if we don't either radically socially reform for incoming AGI, or have competitive open developments in the area not beholden to corporate interests.

For my money, the latter seems at least possible.


I don't know; power concentrates plenty in unregulated industries. Google has owned search for over two decades, and it's not like there's a ton of government regulation preventing anyone from starting a search engine. People have tried; they just haven't succeeded, because Google was that much better and that much farther ahead, and they used their advantages to widen the gap. The same is true for advertising: it's not like the government was stopping anyone from competing with Google and Facebook, but other companies just couldn't get there.

I'm not sure that AI is going to look that different. Is there any reason to believe that if OpenAI wins the majority share, that any competition is going to be able to dethrone them?

We could imagine a world where the US government broke up big tech companies in the late 2000s or early 2010s, and it sure seems like that world would be in a much better position today, due exactly to increased competition. Instead we have a bunch of big companies that don't take risks or release ambitious new products because they're too busy trying to defend their moats.


Define AI. Companies tend to find loopholes and the government is slow to regulate.


Broad enough laws don't have loopholes. GDPR, for example, had the wisdom not to define specific types of personal information (such as name or IP address) but to instead cover any data that could be correlated to identify a person. That puts the ball in the corporations' court to make sure they stay well clear of the dividing line.

Delegated councils (like the FDA, FTC, FCC, etc.) and courts can be used to make decisions about edge and individual cases, and do so much closer to real time than Congress can act.

Governments regularly and successfully define things much vaguer than "is this AI?"


>The immediate implications will be decided by Sam Altman and his team. I hope on behalf of us all that they’re up to the challenge.

Will they really be determined by OpenAI? So far what Altman has accomplished with OpenAI is to push a lot of existing research tech out into the open world (with improvements, of course.) This has in turn forced the hands of the original developers at Google and Meta to push their own research out into the open world and further step up their internal efforts. And that in turn creates fierce pressure on OpenAI to move even faster, and take even fewer precautions.

Metaphorically, there was a huge pile of rocks perched at the top of a hill. Altman's push got the rocks rolling, but there's really no guarantee that anyone will have much say in where they end up.


What if I told you there is this startup that is working on really effective ways of proliferating plutonium. And that it is under pressure to keep the hype going, so there isn't much time to organize the storage of the proliferated materials. And the thinking is that even if it explodes, since the startup is not building a bomb, it can't really do that much damage.

Oh, and also imagine that nobody really knows what plutonium is; there is just one startup that has figured out how to turn uranium into plutonium.

Also, the understanding of uranium is for now at the level of when it was just discovered, so people are still handling it with bare hands. And trying to sell shiny uranium toys.


> existing research tech out into the open world (with improvements, of course.)

That's quite the understatement, isn't it? Neither Google nor Meta has been able to demonstrate something to the public that comes even close to the altitude that OpenAI has.


I'm not sure if this is research quality or product tuning. I tried using Bing's chatbot and I found the experience to be vastly worse than OpenAI's version, yet Bing is ostensibly running on exactly the same platform (GPT-4). I think OpenAI turned the dial to "loquacious and friendly", and we humans are interpreting some pretty similar underlying tech as having vastly different underlying quality, because we're impressed by that sort of thing.
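To make "turning the dial" concrete, here is a minimal sketch of what that kind of product tuning can look like, assuming the 2023-era openai Python client; the prompts and temperature values below are invented for illustration and are not OpenAI's or Microsoft's actual settings:

    import openai

    openai.api_key = "sk-..."  # placeholder, not a real credential

    def ask(system_prompt, question, temperature):
        # Same underlying model in both calls; only the system prompt
        # and the sampling temperature differ.
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
            temperature=temperature,
        )
        return response.choices[0].message.content

    question = "Why is the sky blue?"

    # "Loquacious and friendly" tuning, looser sampling:
    chatty = ask("You are a warm, enthusiastic assistant.", question, 0.9)

    # Terse, accuracy-first "search" tuning, tighter sampling:
    terse = ask("Answer in two factual sentences with no filler.", question, 0.2)

    print(chatty)
    print(terse)

If two front ends on the same GPT-4 can feel that different from prompt and sampling choices alone, the gap between Bing and chat.openai.com doesn't need a research-quality explanation.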


The user interface to a tool matters.


Once you're down to user interface, you're no longer in "insurmountable technical advantage" territory.

But more critically, chat.openai.com is more of a tech demo than a serious product. So one can optimize the UX for "chatty and engaging" without being too concerned about accuracy. Bing, on the other hand, is intended to be a search product. It's noteworthy that adjusting the same model to provide a more accurate search experience seems to have snuffed out a lot of the magic.


They'll be decided by AI, not by governments, corporations, or individuals.

There's still a kind of wilful blindness about what AI really means. Essentially it's a machine that can mimic human behaviours more convincingly than humans can.

This seems like a paradox, but it really isn't. It's the inevitable end point of automatons like Eliza, chess, go, and LLMs.

Once you have a machine that can automate and mimic social, political, cultural, and personal interactions, that's it - that's the political singularity.

And that's true even if the machine isn't completely reliable and bug-free.

Because neither are humans. In fact humans seem predisposed to follow flawed sociopathic charismatic leaders, as long as they trigger the right kinds of emotional reactions.

Automate that, and you have a serious problem.

And of course you don't need sentience or intent for this. Emergent programmed automated behaviour will do the job just fine.


> Essentially it's a machine that can mimic human behaviours more convincingly than humans can.

Isn't this impossible by definition? Nothing can be more convincingly human than humans, or else it would be something else.

Perhaps you mean mimic charismatic or intelligent humans better than most humans, or the average human, can?

> Once you have a machine that can automate and mimic social, political, cultural, and personal interactions, that's it - that's the political singularity.

Do you mean online only? Because I imagine we're still quite a ways from physical machines convincingly mimicking humans IRL.

> And that's true even if the machine isn't completely reliable and bug-free.

If the machine makes inhuman mistakes the humans will likely notice and adapt.


> Isn't this impossible by definition? Nothing can be more convincingly human than humans, or else it would be something else.

Why isn’t it theoretically possible for an AI to pass the Turing test so hard that more than 50% of the time humans think the AI is the real human? That would effectively be more convincingly human (to humans) than humans are.


> Once you have a machine that can automate and mimic social, political, cultural, and personal interactions, that's it - that's the political singularity.

The machine can mimic, but it is still completely reactive in its nature. ChatGPT doesn't do anything of its own accord; it merely responds. It has no opinion, no agenda, and no real knowledge or understanding. All it can do is attempt to respond like a human would, but it cannot reason on its own. Ask it about its views on a political issue, and it won't think about its stance and the reasons for taking it; it will just produce what its training says an answer to that question ought to look like.

The panic about the machine takeover is completely overblown, driven by people who don't really understand what these machines are and how they work. We are still far, far away from the point where AI would be capable of actually making political decisions.


> When AGI is invented (and I’m increasingly convinced it’ll happen in the next decade or two)

Aside from the issue of AGI being extremely difficult to capture in a proper definition, what leads you to believe this? Recent developments like LLMs and stable diffusion are impressive technology but I don't view them as remotely relevant to achieving AGI.


> Why not both?

This is not a good idea. There is a conflict of interest here. The fundamental goal of government is regulation; the fundamental goal of a business is profit.

Both the morals of government and corporations are therefore very different. I call them Guardian and Commercial Syndromes respectively. When you combine these two syndromes you create monstrous hybrids.

Allow me to elucidate:

An entity in the Guardian syndrome values honor and uses forceful tactics to maintain order and regulation.

An entity in the Commercial syndrome values money and tries to use whatever means possible to gain profits through trade agreements.

You don't want a commercial entity to have the power to enforce things and you don't want a government entity to desire profit. Both hybrids lead to bad things and it's the root of corruption in the world today.

You know how the old medieval English Navy formed? So many pirates lurked off the coasts of medieval England that London merchants went to the expense of financing a fleet of fighting ships and gave it to the Crown.

Why give it to the Crown? Because you don't want some commercial entity running the Navy using methodologies that circumvent trade agreements and violently funnel trade and profit to a singular commercial entity. You need an entity outside of the commercial sphere; an entity in the Guardian syndrome. If a commercial entity controlled the Navy, you'd get something similar to the Mafia or Yakuza; basically, organized crime.

Only the government should enforce the laws of AI, because there is in theory no conflict of interest. But practically speaking the US government is pretty heavily hybridized already (see the military-industrial complex). Still, it's the best option.


I don't know, isn't history really just a series of specific people doing concrete actions?

Do you think on some level the idea of some abstract 'government' taking care of things is just a narrative we apply to make ourselves feel better?

Sure individual decision makers in that government can concretely affect reality, but beyond that are we just telling a story and really nobody is 'in control'?


The more we remember the possibility of true collective self determination, the more likely we are to survive all this mess we're making.

These days we are constantly bombarded by this contradiction of individualism being primary and desirable, but at the same time impotent in the face of the world this individualism has wrought. And its all a convenient way to demoralize us and let us forget how effective motivated collective interest is. Real history begins and ends with the collective!


Yes. There is a reason that when Britain felt threatened by the turmoil in France, they didn't just bar unions or political clubs. They banned "combination" almost entirely.


> beyond that are we just telling a story and really nobody is 'in control'?

Basically. Or at least that's the impression I'm left with after spending a couple of years in politics work.

I had a nice long breakdown.


Possible to elaborate?


tl;dr: Nobody is steering the ship because they don't know they're on a ship. Or that the ocean exists.

It's hard without doxxing myself or calling out specific people and organizations, which I'd rather not do because I'm a nobody and can't afford lawsuits, but for various reasons I ended up in political education and marketing for civics advocacy. Ish. To be semi on-topic, I know some people who are published in the WSJ (as well as the people who actually wrote the pieces). I'm also a 3rd-generation tech nerd in my mid-30s, so I'm very comfortable with the digital world - easily the most so outside of the actual software engineering team.

I've spoken with and to a lot of politicians and candidates from across the US - mostly on the local and state level but some nationally. And journalists from publications that are high profile, professors of legal studies, heads of think tanks, etc.

My read of the situation is that our political class is entangled in a snare of perverse disincentives for action while also being so disassociated from the world outside of their bubble that they've functionally no idea what's going on. Our systems (cultural, political, economic, etc.) have grown exponentially more complex in the past 30 years, and those of us on HN (myself included) understand this and why it happened. I'm a 3rd-generation tech nerd; I can explain pretty easily how we got here and why things are different. The political class, on the other hand, has had enough power to not need to adapt and to force other people to do things their way. If your 8500-year-old senator wants payment by check and to send physical mail, you do it. (Politicians and candidates that would not use the internet were enough of a problem in 2020 that we had to account for it in our data and analyses and do specific no-tech outreach.) Since they didn't know how the world was changing, they also haven't been considering the effects of the changes at all.

Furthermore, even those of them that have some idea still don't know how to problem-solve systems instead of relationships. Complex systems thinking is the key skill needed to navigate these waters, and none of them have it. It's fucking terrifying. At best, they can conceive of systems where everything about them is known and their outputs can be precisely predicted. At best. Complex systems are beyond them.

Add to this that we have a system which has slowly ground itself to a deadlocked halt. Congress has functionally abandoned most of its actual legislative duties because it's way better for sitting congresspeople to not pass any bills - if you don't do anything, then you don't piss any of your constituents off. Or make mistakes. And you can spend more time campaigning.

I left and became a hedonist with a drug problem after a very frank conversation with a colleague who was my political opposite at the time. I'm always open to being wrong, and hearing that they didn't have any answer either was a very 'welp, we're fucked' moment. I'm getting better.


As a software developer who found myself elected to state-level public office and had to spin up my education around the legislative process and all of politics, I concur.

There are only a couple of things I'd add.

As much knowledge as I brought in about technology and the idea of being aware of systems thinking, I also brought in a great amount of ignorance about all the other areas that are legislated (healthcare, the interplay between local, state, and federal issues, budgetary concerns, tax policy, banking, etc.). Good legislation is truly collaborative.

Sadly, for the second part, good legislation is rarer than it should be, as much of legislation is about politics and the perceptions of voters. And voter perceptions are not necessarily logical or reasoned.

This makes it all the more important, IMHO, that everyone who is reasonable, logical, and educated spend their precious, valuable time involving themselves to advocate for elected officials who behave similarly, in what is essentially a zero-sum game.

p.s. Have faith. I saw enough during my time that gave me reason for that faith. (But that faith requires time and effort -- we don't get good government or democracy for free.) I'm glad to hear you're getting better.


> Good legislation is truly collaborative.

I'd say much good legislation is collaborative, but some necessary legislation is not. FDR's changes, for instance: industry did not want them. Arguably, health care in the US needs this too.


Isn't history full of examples of governments being slow and seemingly incompetent? Standard Oil was broken up many years after everyone knew that it was a ruthless monopoly which made too much profit. Note also that it didn't take senators to figure out the monopoly; it was everyone, including the voters, who did.

There is one big benefit that democratic governments have though. They have a monopoly on physical force.


Do you read anything by Dominic Cummings? He writes a lot (like, a lot) about this from the UK perspective.


Do we have a HN Hall of Fame for comments? Because this one surely would belong there.


Thanks for elaborating!


Government does not actually "do" the executive action in everything. But government is a rough and messy consensus on the set of rules and constraints within which the "specific people doing concrete actions" act. Large-scope actions that impact the public need to be within such rules and constraints. Historically, CEOs of large corporations have quite often acted so as to ignore said rules and constraints, primarily for rapid aggregation of monopolistic power and concomitant profit. There is harm from monopolistic power in new and emerging industries, hence government action via enforcement of rules and constraints is important. Individuals, e.g. in the office of the AG, may be the "specific people doing concrete actions", but they do it on behalf and with the full power and authority of the US Govt, which acts through "specific people doing concrete actions". It's not an either/or.


> I don't know, isn't history really just a series of specific people doing concrete actions?

That is like saying: "Aren't human brains just a series of neurons, firing at specific moments?"

History is as much—if not more—about interactions between people, feedback loops, collective actions, collective reactions, environmental changes, etc. I would argue that the individual is really really insignificant next to the system this individual resides under, and interacts with.


I think you're agreeing while arguing against my point :-)

Like what is collective action, really? It's as you say - a series of individuals interacting, setting a course in a feedback loop, and then more people getting on board with that. It doesn't just happen, it takes individual instigators. It also doesn't just self-propagate. You need individuals, typically the same individuals, to show up regularly and consistently and make sure stuff happens and people are doing what they need to be doing.


What you are arguing for is a philosophical position called atomism or reductionism (which contrasts with holism). It is a rather old-school philosophy, honestly (with holism also being old school, but not quite as much), as we are learning more about how important interactions really are to the study of anything.

Modern philosophy of science kind of rejects the notion that you can study anything by only looking at its atomic structure. This is to say, you can't really study history by only looking at the actions of individual actors. Even in particle physics you have the 4 fundamental interactions, virtual particles, etc., not just particles. This isn't to say that fermions and bosons aren't important; it is just that it is hard to describe any physical phenomenon without looking at the interactions between them. And in fact, by studying those interactions, you can derive certain laws and behaviors. History is no different, except the complexity is many many orders of magnitude greater.


Is one difference in the analogy that individual humans have independent agency and do concrete things in the world, where particles don't have agency, and don't really 'do' actions in a sense that is meaningful outside of the system they're in?

I guess by questioning the analogy I get back to my point. Things don't happen in history because of (truly) random behaviours converging on some emergent effect, like in a system of particles. They happen because specific, unique (wrt the system) individuals make decidedly non-random decisions to affect reality on purpose (even if cause and effect are not that predictable, it still holds that the actions are purposeful and do affect reality) in some way.


I question your assumption that "things don't happen in history because of (truly) random behaviors converging on some emergent effect."

Firstly, there is currently a debate among quantum physicists as to the true randomness of what we observe[1][2]. It turns out we don't actually need true randomness in our models; they just need to appear as if they are random. In other words, the chaotic nature of the system is more important than true randomness.

Secondly, society is an emergent effect of individuals behaving in a chaotic manner. So is the zeitgeist. To study history without looking at societal changes over time, and without accounting for the zeitgeist, is bound to yield pretty limited insights.

I am aware that my analogy between quantum mechanics and the study of history is flawed. The latter is infinitely more complex than the former, and deriving laws and creating models is a good fit for studying the former but extremely difficult for the latter. However, my point is merely that atomism (and holism, for that matter) is an incomplete philosophy of science that doesn't even work at our most fundamental scale. One should be cautious when applying it to the study of history.

Or to put it in other words: just as ρ(λ | ab) ≠ ρ(λ) - i.e., the distribution of hidden variables λ not being independent of the measurement settings a and b - is a real possibility in quantum physics, it is likewise highly likely that the probability of an individual acting in society is not independent of the probability of the same individual acting outside the influence of that society. In your original point, government may very well be like the ab in this famous conjecture. I would be careful when removing its influence.

1: https://www.youtube.com/watch?v=ytyjgIyegDI

2: https://www.youtube.com/watch?v=JnKzt6Xq-w4

PS: Sorry to cite youtube videos, but I’m not a physicist and these videos are the only way for me to understand the science. Otherwise I would be citing something I don’t understand, which I don’t want to do.


"think on some level the idea of some abstract"

This! So much this! These conversations are being held at such a high level of abstraction that they don't make sense. It's one giant "feels" session.


Right. The National Health, turn of the century sanitation, widespread vaccination and the EPA are just all about "the feels".


I think the parent point is that democracy is an illusion in that only few people have any real power, and the masses choosing government representatives is very different from the masses choosing policy.

Did the population at large want National Health, sanitation, vaccination and the EPA? Or did a small group of people in government decide that was best for everyone? I suspect the latter, especially when looking at the National Health system, which doesn't seem to perform as well as other systems. You're right that it's not all "the feels", but a lot of it is about making the population feel like they are looked after without doing much to actually look out for their best interests.


> Did the population at large want National Health, sanitation, vaccination and the EPA?

Yes. Literally every one of those was very popular when it was passed. The National Health in particular continues to be popular.

The EPA was passed under Nixon, and he did not veto it because the feeling that it was needed was so widespread that he didn't dare.


I’m not saying they weren’t popular, but that if you had actually implemented what the public wanted then you would have ended up with universal healthcare.

Same with the EPA. It’s an improvement, so people like it, but it’s not what people actually want.


history is a simplified, prettified story that we tell ourselves about how things happened.


The actions that individuals take on behalf of government are a direct reflection of the "abstract" policies and laws of that government. If you cannot discern this from 20th-century history, I don't know what to tell you.


Policies and laws aren't abstract; they're a good example, in fact, of what I said - things that have a concrete effect on reality, typically authored by a small number of specific individuals within a government.


1) You are the one who implied some abstract meta-conversation. I did not.

2) In a democracy, you get the policies you want through organization and direct action.

Maybe I'm somehow entirely missing your point.


Yeah I think so, I also misinterpreted your quoted 'abstract' I think :-)

Basically my main point (or question really, as I am not sure of it) is that we should resist thinking about government as an abstract entity different in character from any org - really, what it does or looks after is just, in the end, some small set of humans doing some actions that have some effect.

They're democratically elected, yes, but that is a bit meaningless in the practical detail of any one given situation. In a sense it's not different, safer, or better, for any one concrete decision, than some company led by some small set of individuals also making concrete decisions.

Maybe democracy and policy have some aggregate influence over all decisions, making them lean in a certain way. But it's not like 'the government' as an entity is one thing led by a concrete consciousness or plan. Does that make more sense?


The government can't even execute on a bipartisan motivation to ban TikTok. They get greedy and draft the bill to strip away all rights of citizens to have any digital privacy or VPNs, and to give themselves the power to declare any app illegal at any time. The market can't save us, and the federal government definitely can't.


It is difficult to shake the suspicion that the advocates of banning TikTok are using it as a pretext for their real goals. It wouldn't be so terrible if their real goals were limited to just restricting social media in general, but beyond that...

The real benefit of having a Congress for broader society is that it forces at least partial exposure of these unmentioned goals.

In that sense the federal government is a wonderful invention, but the drawbacks are so many that it doesn't seem all that wonderful overall.


The government can't even move on the fact that we unnecessarily change the clocks twice a year. It's the 21st century and we're still "motivated" by a 19th-century concept.


Sure, but in my country (USA) the government is hopelessly inept at regulating technology. We still don't have privacy regulations, and now, to work around this, they're trying to ban specific foreign apps instead of protecting us from all apps! I'd honestly be horrified if they tried to regulate AI. They would be in bed with Facebook and Microsoft, and they'd somehow write legislation that only serves to insulate those companies from legal repercussions instead of doing anything to protect regular people. As far as I can tell, it is the view of Congress that big tech can do whatever they want to us as long as the government gets a piece.


Congress is already in bed with Meta, which is driving legislation to ban its competition (TikTok) or take over the US version. Political donations aside, it should be illegal for members of Congress to do insider trading or to invest in companies.


Agreed. The US has backslid since the 20th century towards an elitist republic and away from democracy. But even in the US, collective action has a better track record than "altruism".


Sometimes I wonder if the back slide narrative is really accurate, or if we’re looking back at the myth of history rather than the facts. When the country was founded, only white men could vote and people of color were legally property with no rights. That’s obviously not democracy, so I question at what point after that but before today we really had democracy to have slid back from.


Think of democracy as multivariable. One variable is the percentage of the population that is enfranchised. The other is how responsive the political, legal, and economic systems are to the needs of everyday people.

America started as an elitist republic. Slowly, variable #1 grew and shrank in fits and starts, but variable #2 changed very little. Until we get to the 1930s, when variable #2 explodes wide open. Then in the 1960s variable #1 breaks wide open as well.

By the 70s we are probably at the high point of both. Since then #2 has significantly eroded. And now, with the court's punches at the Voting Rights Act, #1 is under threat as well.

I don't mean it in the usual internet-guy sense. I don't see America as having a pure past. I believe America was an elitist republic taking very slow steps towards democracy. To me, America only turned the corner towards becoming a democracy in the 1930s. It was a bumpy up and down from there, with a slump in the last 40 years.


When we realize that it's really only from about the 1970s that we had full enfranchisement and political participation of all citizens, this becomes more obvious. "Coincidentally," this enfranchisement was followed by the Volcker shock and then the Reagan administration, both of which led to the decimation of labor's political power and share of the economic pie.


There is no government that isn't hopelessly inept at regulating technology.


> This is the job of the government. The ultimate effect of virtue wanking CEOs and corporate governance is to deceive people into thinking democracy is something that can be achieved by for profit organizations

Sam Altman says the opposite of what you're insinuating, if by "virtue wanking CEO deceiving people" you are referring to the subject of the article, Sam Altman. He says he wants a global regulatory framework enacted by government and decided upon democratically.


This is a really cynical take.

CEOs and companies can, and should, act ethically. Not just because it's the "right thing to do", but because it's the best way to guarantee the integrity of the brand in the long term.


CEOs and companies can act ethically while that aligns with the interests of the shareholders. The reality is that at some point this becomes impossible, even for those with the best intentions. "Don't be evil" ring any bells?


To be fair, Google has done a lot of shitty things, annoying things, short-sighted things, even unfair things. But I can't, offhand, think of anything "evil." Like, we're purposely going to fuck with this guy (or group). Examples?


> CEOs and companies can, and should, act ethically.

I can and should always drive the speed limit. But that doesn't mean I do, which is why highway patrol exists: to keep people in check. "Should" is such a worthless word in these discussions. If you believe that an executive needs to act a certain way, but you don't believe it enough to place some sort of check on them, then you must not believe in the importance of their good behavior that strongly.


If unelected CEOs have more power than the democratically elected government, we aren't in a democracy. That's the problem.


History proves that we have way more CEOs and companies acting out of self-interest, against the common good, than otherwise.

So yeah, they should, but they don't.


I feel I made it clear in my post that individual integrity matters and has real consequences. But you as a citizen have (1) no way of validating a CEO's real intentions and (2) no recourse when that CEO fails to live up to those intentions. If you only fight for the protections you want once you need them, you will be at a serious disadvantage to win them.


I'm not sure you could ever say that a company can act ethically. People within the company may act ethically, but the company itself is just a legal entity that represents a group of people. The company has no consideration of ethics to act ethically.

A company that is composed of 100% ethical actors may one day have all their employees quit and replaced with 100% unethical actors. Yet the fundamental things that make the company that company would not have changed.


> because it's the best way to guarantee the integrity of the brand in the long term

If you personally make less money would you still do it?

You might answer yes, but many won't.


The third way is to build AI technology that empowers the individual against state and corporate power alike. Democracy got us here. It cannot get us out.


> AI technology that empowers the individual against state and corporate power alike

What does this even mean though? Seems like hand waving that "AI" is just going to fix everything.


It's much easier now to imagine advanced applications, powered by recent AI, that would provide outsized civic weaponry.

Identification of unusual circumstances or anomalous public records seems ripe.

But more straightforward, customized advice on how to proceed on any front (a super wikihow) makes anyone more powerful.

Today, complicated solutions can sometimes require extensive deep research and distillation of material.

So much so that DIY folks can seem like wizards to those who only know of turnkey solutions and answers.

At the risk of causing a draft from further hand-waving: a bigger tent can mean a higher likelihood of a special person or group of folks emerging.


Sam Altman using AI to Iron Man himself.


Given that we don't have such AI technology at present, would it not be prudent for us to assume that it may not be available imminently and plan for how we can address the problem without it?


AI tools are built and controlled by corporations with a profit motive. They aren't in the business of empowering anyone. If they do then it's just a side effect.


Who do you imagine has the majority of compute power?


That's not a stable equilibrium. Blogs gave individuals asymmetric control over disseminating information - it didn't last. If you don't create institutions and power structures that cement and defend some ability of individuals, it will decay as that power is usurped by whatever institutions and power structures benefit from doing so.


> The third way is to build AI technology that empowers the individual against state and corporate power alike

Ha, I'm OK with that as long as I get to pick the individual!

I mean, an AGI under the control of some individual could indeed make them more powerful than a corporation or even a state, but whether it increases average individual freedom is another question.


Such techno-utopianism... political power belongs to people who control the guns. There is no way around it.


No, the power belongs to those with the most powerful video card.


So long as you buy your inferencing hardware from another private party, I'd wager you're helpless against both state and corporate power.


I agree that it's the government's role but I think you can look a bit beyond the law itself, which is often hard to get right, especially in very fresh new domains. Some nice behaviors can be induced by mere fear of government intervention and fear of future laws, and I think we're seeing some of that now.


It's all about risk and reward; nobody who owns OpenAI is going to say "let's pause." The same logic applied to the country that first invented the nuclear bomb: sure, they could have paused, and then Russia would have done it, would have had it first, and would have said "thanks for pausing."


I don't know why people think nukes are a good example here. Nukes were outright birthed within the government, at the height of its intervention into the market, at the height of its reach into the daily lives of every American, at the height of American civic engagement.

Policy makers spent a huge amount of time creating a framework for them. Specifically there was a huge debate about whether they should be under the direct control of the military. The careful decision to place them in civilian control under the Department of Energy is probably part of the reason they haven't been unleashed since.


This is kinda weird. It's not illegal to be an asshole. You also don't want to live in a country where it's illegal to be an asshole. You also don't want to live in a country where everyone is constantly an asshole.

What are you actually asking for here? Have you thought through the implications of what you're saying here?

The way we normally deal with assholes is we shame them. But then there are people like you saying we can't expect anything better from certain kinds of assholes. Well, yeah, when you tell an asshole to keep on being an asshole, what do you expect?


They're saying that companies are much more likely to be consistently ethical if they're forced to be by the government, assuming the regulations the government makes are themselves ethical. Even if you accept that Sam Altman, or any CEO, has the people's best interests at heart, it doesn't matter; the people shouldn't have to depend on the CEOs of powerful companies being nice people to ensure they're not harmed. They're not saying it should be illegal to be an asshole; they're saying it should be illegal to run your company unethically.


>What are you actually asking for here?

Sounds like they're asking for government to take the reins w.r.t. world-impacting decisions, rather than relying on the good fortune of big people in tech making kind, thoughtful, and well-planned decisions for our collective humanity's future.

>The way we normally deal with assholes is we shame them.

No, I don't; I avoid them. And I find the tactic you mentioned embraced more and more commonly, even held up as the preferred method, as if it were just the common-sense reaction. It's not, and I disagree with that behavior vehemently.


I think society has to figure out how much power to give one person or a small group of people, whether it's an emperor, chairman, priest, CEO, guru, etc.


>> The higher that niceness exists in the social order, the greater its contemporary benefit

It could also be said that the more niceness there is, the more likely it is that not-nice actors will violently overthrow the nice ones.

As an example, we have had numerous bee hives at our place. The bees that never mess with us when we come near their hives are more likely to get robbed by another hive and destroyed. The bees that would sting us when we came near were rock solid - they never got robbed or collapsed.


> Stop asking the market to look after collective interest. This is the job of the government.

One of the main roles of government is to step in where markets fail.


> One of the main roles of government is to step in where markets fail.

Because that's worked out great so far?

In every case I can think of, the government (usually because it's already captured) only ever serves to further exacerbate the issue.


Seriously? You can't think of how it was useful that the government mandated that factory doors must remain unlocked, to help prevent deaths in case of a fire? You can't think of the advantages of governmental food and medicine safety obligations?


The government has had some wins, and it would be naive to say they haven’t.

They have also had many catastrophic failures.

It’s unclear whether in the end regulations trend towards net benefit, but it does seem likely that the more nebulous a problem, the harder it is for government to get it right. Or anyone, for that matter. But especially government because the feedback loop is so slow and bad.


Has a government never prevented a merger that would have created a monopoly?


That's literally my point. When you want behavior that is contrary to market incentives, you need laws, not guilt-ridden editorials.


The problem is that I don’t believe we have any organization in government currently staffed and active that I trust to take any action that will benefit the public at large.

The problem space is too confusing, and the people making decisions are too incompetent. It’s a huge skills and knowledge gap.

And that’s without factoring in corruption and bad intentions.


That's completely misguided. Consumers purchase from companies they like.

Why do you think companies are all woke and virtue signalling? Because they interpreted the vocal woke minority as the voice of the country and they want to capture that market.

Corporations will absolutely try to do good in order to maximise their profit.

And this is ignoring private charities which get more done than any government has ever done.

Collective interest is a nice concept but the government, like all large organizations, is not capable of moving in any direction. Whatever you need done, chances are someone's cousin will get a job, the job will be done poorly and the taxpayers will pay more in taxes to fix the problem again and again and again.

A government can't fail and it's therefore inefficient.


Your post ignores the existence of democracies. Governments fail all the time. In a democracy, failures of government will often yield a total collapse and complete replacement. If a failure is spectacular enough, these failures often come with constitutional reforms or even revolutions.

In addition, democratic governments (and even many autocratic governments) have some level of distribution of power. Your small municipal government may very well end up being absorbed into your neighboring municipality because it is more efficient. Maybe an intermediate judicial step is introduced at a county, or even country, level.

Governments do try, and often succeed in, making your freedom and your interaction with society at large as efficient as possible, while trying to maximize your happiness. (Although I'll admit that way too often democratic governments act in a way that maximizes the profits of the wealthy class more than your happiness.)


The market should absolutely be looking after the collective interest. That's what we, the collective, created the market for in the first place!


The market will follow its incentives. You shape those incentives with laws. If you want a market that allows people to take risks, you do that by inventing the limited liability corporation, not by telling people to be nice and not pursue their debtors' personal property. If you want a market that discourages monopoly, you do that by regulating combinations, not by writing articles about how "good businessmen" don't act anticompetitively.


That requires you to view the free market as a tool to be wielded rather than a gospel to be followed.


If we could connect social responsibility with the stock, it would matter a great deal :-)


If you're talking about the US (federal) government, then please don't hold your breath. They're not interested in the best interests of the many. They're too busy scratching each other's backs, counting their money, and deflecting the blame elsewhere.

Will they act on AI? Yeah, probably. Will it smell of crony capitalism? Yes, absolutely.


It's Gandhi, not Ghandi.


While I don't disagree that having mostly good people isn't effective long-term security against the evil ones, I also dispute the main claim somewhat.

> Stop asking the market to look after collective interest. This is the job of the government.

That's the thing: our government, society, is simply an emergent, collective agent of individuals. If most people don't actually have in mind the importance of altruism, then the whole thing doesn't work:

(a) The government can't watch our every little move (as-is);

(b) The more the government watches us, the more risks we take with authoritarianism (even totalitarianism);

(c) Having everyone utterly disinterested in the social good is extremely inefficient: you increase policing costs, legal costs, and regulatory costs; it increases complexity arbitrarily and makes the whole thing grind to a halt (see highly corrupt cultures);

(d) Moreover, the disinterest becomes dangerous because, even if the mission of the government were to steer social-good-disinterested parties towards collectively good outcomes (whose success I question), a disinterested population doesn't care to vote for well-meaning politicians, and doesn't have the ethical and social understanding to make good choices in this area. And when the whole culture is disinterested in the social good, it's hard to imagine that all government workers would magically be pro-social angels and not as corrupt as the rest of the population.

In conclusion, the entire backbone of a society is ethical enlightenment (of course, alongside other things like effectiveness, and ethical effectiveness). If individuals themselves are not ethically enlightened, no social good outcomes are possible, be it from governments, markets, or various organizations like non-profits and coops.

I think the market would be looking after our collective interest if this enlightenment was more widespread. And then we could better discuss mechanisms (and general structural innovations -- which I think are very important) to keep the few inevitable problem cases and unenlightened psychopaths from ruining it for the rest of us :)

In a way, societies are made of goodness (cooperation).


Though we do vote with our wallets too.


How great then that some wallets are millions of times larger than others.


Yes, but for your own immediate benefit. People, in general, are just not trained to think long term and "for the greater good".


Same as voting in elections?


ChatGPT, like Google Search, has essentially democratized knowledge and wisdom for every human on planet Earth.

OTOH, it's the governments that have banned it.

Do you think the citizens of Italy are better off because of the ban?

Corporations are more benevolent than governments.


This is socialist nonsense. The government won't protect anything outside of its own interests [0]. Free markets are good and necessary for human flourishing [1].

[0] https://www.amazon.com/Creature-Jekyll-Island-Federal-Reserv... [1] https://www.amazon.com/Human-Action-Ludwig-Von-Mises/dp/1614...


In general, governments have done far more harm to individuals than anyone in the market has. The job of the government is to protect your individual rights, not the collective interest.


That's just quibbling over what "your individual rights" are: where does the line get drawn between "exercising my right" and "having my rights infringed on by the actions of another"? There is no shortage of harm done by "anyone in the market" today, whether it's currently illegal and we call it "crime" instead of just a person exercising their freedom, or whether it's harm that isn't regulated today.


I don’t know that this is true.

At very least they both share immense responsibility for causing individual harm. Sure the government may start a war, but that war can’t happen without bombs and bullets, and in America at least those factories aren’t run by the government. There is an intermediate step oftentimes, but I don’t think that necessarily disconnects companies from responsibility.

If you work at a guided bomb factory you may not be the person dropping it, but you are responsible for the destruction it causes in a small way.

Also, if global warming kills us all then it is likely that the oil companies bear some responsibility for it right?

Government sucks - I agree with that statement, but we shouldn’t act like corporations are appreciably less responsible.


They are worse together. Achieving some fine balance of corporate power may seem somewhat utopian, but we are pretty far from utopia as it is.

Building a mega-corporation without big government(s), I would argue, is basically impossible. And local-level governance is more likely and more potent without big government. All of that is quite hard to achieve, though, when the people with existing power enjoy the status quo and control more of the levers than the masses do, including the levers used to influence the masses.


The market responds to need.

If nobody were able to socialise the cost of going into a foreign country and killing people, there would be no war.

If the government didn't steal my money against my will on threat of incarceration, there is no way in hell I'd spend my money on bullets to kill someone's son in another country.


>I don’t know that this is true.

In the 20th century governments slaughtered wholesale around 100 million of their own citizens (China, Russia, Cambodia), let alone those of other countries they killed in war. There's no measure by which the market comes anywhere near that amount of murder.


The market (German industrialists in the case of Hitler, the military-industrial complex in the case of the USA, the East India Company in the case of GB if you want to go deeper into history) has its own quite significant share. Ignoring that is being willfully ideologically blind.


The absence of government just leads to a situation in which some group takes control of a given area. In effect government will then exist again. During the absence of government there will be chaos and rampant crime.


I'd take a local firm offering to protect me for money over one that manages a large part of a continent.

The small one will at least redistribute my money where I live.

Besides, there is an alternative model where there are competing groups of people and I can pick the best among them based on price and services.


>Besides, there is an alternative model where there are competing groups of people and I can pick the best among them based on price and services.

In the absence of government, what's stopping these groups from simply joining forces into a cartel, getting some armed thugs and making you an offer you can't refuse? History suggests that to be a far more likely scenario. Oligarchy, rather than competition, is the natural state of capitalism.


Promoting the general welfare is literally discussed in the first paragraph of the U.S. Constitution. The Bill of Rights came later.


The Constitution is just a piece of paper. For every thousand steps the Congress, regulators, and state assemblies take in the direction of tyranny, the courts claw back one or two.


I refer you to Thomas Midgley Jr.


To all the detractors of work ethic and the supporters of UBI: What freedom will exist in a world where everyone is reliant on their government for survival, and all mass communication is mediated by or written by opaque AIs? In trading the yoke of labor for free time we would also trade our independence, and any hope of changing the structure of power that had been set in place.

I'm so sick of our new style of wannabe rulers/overlords who use the promise of freeing humanity from labor as the appeal to grant them total power. It's an appeal, among other things, to the basest sloth and laziness. "Imagine what you would do if you never had to work" is a daytime talk show hook, and a way to lure people into gambling and playing the lottery and investing in crypto. It's not a revolutionary slogan, it's the oldest con in the book.


I don't know what you mean by "work ethic" (why does that have anything to do with UBI?), but regarding UBI, I think you're either exaggerating or misunderstanding some things, and I say this without any regard to my position on the topic (which is complicated).

When you think UBI, don't think "everyone gets $300k/year for free so nobody needs to worry about working anymore". Think more like "everyone gets $30k/year for free so nobody's life is paycheck-to-paycheck". There are lots of people who literally don't have the opportunity to do anything (even better jobs!) other than their current jobs, which they need to put food on the table and have a roof over their heads. One of the envisioned benefits of UBI is that it would actually give them some opportunity & breathing room in their schedules to do something else - whether it's raising a kid, learning new skills, applying for other jobs, or anything else. Most people would still have quite a bit of an incentive to work for a more comfortable standard of living - it's just that they won't feel like they have a gun to their heads as the alternative.

Does this mean UBI is obviously practical and we should start it tomorrow? No, there are lots of challenges, most notably including the funding itself - not many economies today are able to hand out that much money in a sustainable way, so there's some very hard rethinking necessary to figure out how this could ever work. Does this mean everybody will give a 10x ROI on it in the long term? No, some people would probably not do anything useful with the money. Does this mean UBI solves all problems? No, even if you do your best you can still have bad luck (get hit by a bus, medical bills, etc.). Does this mean UBI would work well everywhere? No, results might vary across different societies.

But we're rapidly heading into a world where more and more people are being put out of work, and can't pick up new skills fast enough (if at all) to make ends meet. Expecting everyone to just adapt is incredibly unrealistic. UBI is one envisioned plausible way to address this problem and others systematically. Nobody knows for sure how it would play out in the long term, but to my knowledge nobody has obviously better solutions either.


> Think more like "everyone gets $30k/year for free so nobody lives paycheck-to-paycheck"

What impact would UBI have on inflation? I've read some things about minimum wage increases largely going to landlords. If everyone has 30k, 30k is not worth much, right?


> If everyone has 30k, 30k is not worth much, right?

I mean, I just made up the $30k number to get a completely different point across. I wasn't saying $30k is the magic number. The actual number will, yes, have to be larger to account for the self-induced inflation. I have no idea where the equilibrium would be. Maybe it's double that. Maybe it's location dependent, or something else. Who knows. But it will be somewhere.

And no, I have no idea how to fund it sustainably in the current economic system; some people think it's doable, and some don't. And other changes in how society/government/life works will almost certainly be necessary in the process. I think the point of the UBI debate isn't "the government can send everyone $30k checks tomorrow and everything will be fine", but rather to get people to move from "this is stupid" to "let me humor the idea and at least spend some effort investigating whether there's any way to make it work before we declare failure". There's no guarantee the approach would succeed if we try, but there is a guarantee that it would fail if nobody is willing to even take it seriously.


This is a simplified version of the argument, but basically: right now, I am dependent on an employer for my survival. Keeping a roof over my head depends on my being competent at my job, AND my employer not deciding to fire me. One of those I can control - the other is completely outside of my control. If instead I was dependent on a democratic government for my survival and keeping a roof over my head, at the very least I and other citizens would have a say in the direction and decisions of that government every time an election happened.

Your point does stand in autocracies and dictatorships though.


You are not dependent on your employer. You're dependent on society having enough employers at any point whose needs match the skills you can provide in exchange for a salary; you can switch employers any time you want. It's not 1-to-1, it's 1-to-many right now. By moving that to the government, you're going from 1-to-many to 1-to-1 and getting a much worse deal.

It's a worse deal because the many employers are in competition with each other, so you can rely on their self-interest to keep them in business, whereas the self-interest of the people who get government jobs isn't harnessed by any incentive system that would benefit me.


On the surface, yes, there are many employers. How's that going for the folks that have been recently laid off from checks notes... Amazon, Facebook/Meta, Twitter, Google, Microsoft, Salesforce, EA, Indeed, Yahoo, Github, Zoom, Dell, Paypal, IBM, Spotify, Goldman Sachs, Coinbase, HP, Cisco...

I'm not really sure that going from many possible employers, none of whom can guarantee my job, to a single entity who can is a way worse deal, but OK.


> How's that going for the folks that have been recently laid off from checks notes... Amazon, Facebook/Meta, Twitter, Google, Microsoft, Salesforce, EA, Indeed, Yahoo, Github, Zoom, Dell, Paypal, IBM, Spotify, Goldman Sachs, Coinbase, HP, Cisco...

They are applying to different companies? Do you believe these employees made a decision to work for one of these companies for life and will now never again get another job? What is your point with this part?

> I'm not really sure that going from many possible employers who cannot guarantee my job to a single entity who can is a way worse deal, but ok.

I do not want the kinds of jobs that "a single entity who can guarantee jobs" can offer me. You know what happens in that world? You receive a note from your teacher when you are in school, telling you that, based on what they saw, you are going to be studying X for the rest of your schooling. After studying X, the government will send you a letter assigning you to the job where you are needed, wherever it is, and assign you a home near that job. You have no agency in this world. I'm not making this up; this is the only way such systems have worked in practice.

"Guaranteed work for life from a single entity" also means the guy that runs the "assignment office" for the government will now place his friends in good jobs, and you in bad ones.

People who are against corporatism and capitalism have a way of forgetting that if there are "bad people owning companies and houses and stealing our labor", those same defects would exist in the people who would work for the government. Bad people don't go away because you change the system, so does your system keep them in check?


> They are applying to different companies? Do you believe these employees made a decision to work for one of these companies for life and will now never again get another job? What is your point with this part?

Who is hiring in tech right now? Honest question.

As for the rest of your post, the main issue is having a say. The ill effects you are describing can indeed happen in a single-employer system. But at least in a democratic state, every few years the population has a say in how the system is being run. Right now, nobody except the CEO and the board has a say in how a company is run. They fire thousands of people, many with mortgages and children and people who depend on them, with no regards for what that means. And then to top it off they pay themselves millions of dollars to make these savage decisions. AND, I have no voice and no vote in all of this. Yeah... no thanks.


> Who is hiring in tech right now? Honest question

The company I work for, among tens of thousands of other companies. I'm starting to think you're not commenting in good faith.

> the main issue is having a say.

> nobody except the CEO and the board

So here you come in with what I described above: "the bad person". Let's assume they are bad and they fire everyone. Their company will stop existing, which will make them less money and give them less power. So they won't fire everyone. They'll fire exactly the number of people they believe will benefit them personally the most. Over time, some of these "bad CEOs" will make compounding bad decisions, and nobody will want to work for them, or the company will run out of money. This is the built-in self-regulation and "the say". If you work for a company where you believe the CEO is a bad person, you leave. If they fire your colleagues, you can leave too. That is your say, and the market is the "aggregate say". It works in different ways and needs regulations (I don't believe in the purity of implementation of any system).

So you see, the bad CEO and board in fact have way more to lose from their decisions than a "job assignment official" in an all-job-controlling government office, all the more because most public jobs aren't elected.

Out of those two, I know which one's whims I'd rather be exposed to. A life of shoveling rocks at the mines because "John the placement officer dislikes me", or choosing which job I do but possibly losing it from one day to the next and having to get another one multiple times throughout my life.


This really got to the heart of what I was trying to say. I find the concept that we all live by the grace of our employers to be alien, coming as I do from a culture and society that prides itself on mobility, advancement, and self-sufficiency. I find the concept of living by the grace of government (or relying on government for more than services rendered in exchange for the taxes I pay from my own labor) to be odious morally and terrifying in practice. Many Americans, even ones on the left of the social spectrum, feel this way. Particularly ones with roots in the Soviet Union or other totalitarian states. But it probably isn't a natural revulsion or posture for people formed under mildly socialist Western European standards, and it seems to have been lost on the youngest generation in the US.

I went to a wealthy enough private school to have had an up-close look at what children do when they never have to work in their lives if they don't want to. It's not pretty. My father made all his kids start working full time at 14.

As far as the one-to-many vs one-to-one argument, you're absolutely right; the connection between having choices in work and having freedom is only lost on people who've developed a conveniently conspiratorial view of the world, in which corporations act in concert rather than presenting endless opportunities and edges to anyone with ambition. As you said, with a monolithic actor like government, it's just a single bureaucrat's opinion of you that matters, with no chance to prove yourself. This is obvious to everyone I've ever met who has lived under a dictatorship. And ultimately a dictatorship must be the final arbiter of any form of UBI, because one way or another, people will be made to work to support people who don't want to work. And that can only be accomplished by force, in measure to how offensive it is to the working group.

Whereas I have quit great jobs to work some incredibly shitty jobs and become good, then great at them, and I think I've become a slightly better human being at each iteration. I quit coding to be a taxi driver - I worked 16 hour days and wrote several novels in my taxi. There is your time to make art. I don't think either part of that would have been possible in a world with UBI or the control structures it would imply.


Everyone (to a close approximation) is already reliant on their government for survival. Taking yourself out of society and its interdependencies is already basically impossible. (This is not an argument against all points you are making, I just wanted to say something to that aspect of your comment.)


There’s a big difference between being indirectly dependent on the government to maintain society, and being hand-to-mouth dependent on the government for food/welfare. As most people who have been in the latter situation can attest (at least outside of super rich countries like Norway), it is a distinctly less dignified existence. You live in constant uncertainty that your source of food, source of existence could be withdrawn at any time by government fiat. This is the future our dear thought leaders want for us.


Current state of the world:

You live in constant uncertainty that your source of food, source of existence could be withdrawn at any time by corporate whim

Question: who do you think the police work for? Hint: it's neither "citizens" nor "the government". They work for corporations. Go look at how they act. The whole goal is to provide security for private property holders (i.e., not you), and increasingly they are private mercenaries who have day jobs as city/state/county policemen. Ask a cop where their "overtime" pay comes from; it's typically protecting some business.

Economic and political power should be dispersed as much as possible into the hands of the citizens, not handed to whoever was able to wrest the most control from others, which is how we apportion it now.

Do you ask "how did this company I'm applying to get to this position?" If not then you're not doing anything different than picking a political party without due diligence.

The act of consolidating power destroys the ethical standing of the institution


If your job stops paying you then you can find a new job.

If your government stops paying you, you can’t find a new citizenship.

A world where work is replaced by UBI is not a utopia. It's a dystopia where the government controls whether you can eat today, in which politicians divide and conquer by differentially giving and denying nutrition to different groups in order to gain votes. Give me a corporate hellscape any day of the week over that.

UBI is one of those “semantic stop signs” where people just stop thinking whenever it is mentioned. The term is a stand-in for whatever utopian society the user has in mind. But the implications of UBI as a policy are completely contrary to the stated goals of the users of the term.


Hard work is only seen as a virtue and sloth a vice because of the context of the world we are currently stuck in. You assume a system of value which might not make sense in the future.


On the contrary, it's the natural state of all thriving organisms to do some form of constructive work. It has also been the human condition and the condition for prosperity as long as we've existed. That isn't to say that leisure and play and pleasure have no value. Not at all! But it's axiomatic that nothing has value without its opposite. And if you look at the wealthy who do not work (as opposed to the ones who do), it's clear that nothing is valued by people without the sense that they've earned it.


We are a long, long way from the end of work.

Work is simply what we do for others.

All we need is a just transition and an end to involuntary unemployment which is best achieved through a federal job guarantee:

http://www.jobguarantee.org


Also great: these people never explain how exactly the profits made from automation will be redistributed. This is a political problem, not an economic or technological one.

Already, profits from productivity increases go mostly to capital owners. This is why most of us have so little money, compared with what people used to have 50 years ago. This is why you can't afford a house.

If productivity increases had been redistributed such that people could work considerably less while maintaining an acceptable standard of living, we all would work 3-4 days per week. Instead, those gains went to the capital owning class. AI will worsen the situation.


I think what you are missing here is that we are quickly entering a time where employing humans for almost the entirety of management work will be unnecessary, as will a large swath of low-hanging creative work and, in a short period of time, most manual labor. The cost of maintaining a human being will exceed the cost of automation for perhaps as much as 75 percent of the workforce.

How does an economy function when the surplus value of labor is derived from machinery instead of humans?

I know the past answer to this has been that people will just shift to operating or maintaining the machines, but this shift is fundamentally different, because it is a shift of mind-power.

We have been able to build machines that can deftly do anything a human can do, and even replicate this capacity in human morphology. The problem has always been operating these machines in a way that replicates or improves upon human ability. Now that we are building machines that can be trained on existing data and synthesize novel solutions based upon that training, the need for human intervention in automated processes, including in the maintenance and repair of those processes, will drop precipitously.

Let us consider the economics of a robot, assuming a general-purpose anthropoid robot that can be trained in most commercial processes, and presuppose that the mind for such a device will soon be available either by subscription or in an embodied form.

We might see something like this:

Mass-produced cost will probably be similar to an automobile's, at 20-200x the cost per kilogram of automotive engineering, so about 30-300 thousand euros. Service life between overhauls at the indicated forces and speeds should reach approximately 1 billion pivot revolutions, if we assume the sliding/rotating-surface durability of modern mass-produced machines. This translates to about 8 billion steps or 1/8-rotation gestures. If we assume one such gesture or step per second, that gives a mechanical service life of about 2.2 million hours, or a mechanical cost of 0.014 to 0.14 euros an hour.

Now let us consider the cost of batteries, assuming that cost per capacity remains flat.

Current production technologies allow for about $100 per kWh of battery capacity, with a 4,000-cycle lifespan. If we assume that the robot draws an average of 500 W over its lifespan of 2.2 million hours, maintaining that output will cost approximately 30,000 additional euros in battery replacements. The electricity itself, at 0.10 per kWh, will cost more, about 100,000 euros over the useful life of the robot.

If we add all of this up: 300k in machinery (200x as expensive as a car per kg, or 1/3 the cost of a fighter jet per kg), 30k in batteries, 100k in electricity, and 3,000 months of mind subscription at 100 euros a month per unit, we end up at about 750k lifetime cost for a machine that should be operational for about 250 years if it runs 24 hours a day.

Now, let’s assume that those same energy, compute, battery, and mechanical costs are compressed into only 25 years of useful service.

That would be about 3.25 euros an hour. And that assumes it is 1/10 as durable as modern automotive machinery yet costs 200 times as much per kg, costs 1,000 a month in compute, and consumes 5 kW 24/7: a pretty pessimistic set of figures.

So we can say that a GP anthropoid robot will probably cost between 0.33 and 3.30 euros per hour to operate, and will be less costly to acquire than a front-end loader or a farm tractor. And it will be built and maintained by the selfsame machines.
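
A quick back-of-the-envelope script to sanity-check or tweak these numbers. Every input below is an assumption taken from this comment, not measured data, so treat the output the same way:

    # Back-of-the-envelope check of the figures above; all inputs are
    # this comment's assumptions, not measured data.
    gestures = 8e9                        # lifetime steps / 1/8-rotation gestures
    hours = gestures / 3600               # 1 gesture per second -> ~2.2M hours

    machinery = 300_000                   # euros, high-end machinery estimate
    energy_kwh = 0.5 * hours              # 500 W average draw -> ~1.1M kWh
    batteries = energy_kwh * 100 / 4000   # $100/kWh capacity, 4,000 cycles -> ~28k
    electricity = energy_kwh * 0.10       # 0.10/kWh -> ~111k
    mind = (hours / 24 / 30) * 100        # ~3,000 months at 100/month -> ~309k

    total = machinery + batteries + electricity + mind   # ~750k
    print(f"{hours / 24 / 365:.0f} years of 24/7 service")
    print(f"{total / hours:.2f} euros/hour over the full mechanical life")
    print(f"{total / (25 * 365 * 24):.2f} euros/hour if worn out in 25 years")

Running it gives roughly 254 years of service, about 0.34 euros/hour over the full mechanical life, and about 3.4 euros/hour on the pessimistic 25-year assumption, in line with the range above.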

This is what we are facing.

New solutions are needed.


I believe you can't, at the same time, accuse OpenAI of benefiting from ideas mostly invented elsewhere and also claim that they are in a predominant position. It does not make any sense, and they are not naturally positioned for a monopoly. Other companies just have to move their asses.


> I believe you can't, at the same time, accuse OpenAI of benefiting from ideas mostly invented elsewhere and also claim that they are in a predominant position.

There used to be a team at Google called Google Brain, and they all left for OpenAI after the employee protests against taking military AI contracts in 2018. Now Microsoft has those contracts and funneled $10B to OpenAI from the CIA. OK, that's a bit of an exaggeration, but not by much; I guess not all employees left, Google Brain still technically exists, and some of the Brain employees went to other startups, not only OpenAI.


Microsoft doesn't have $10 billion from the CIA. They're splitting military cloud contracts with Amazon and other companies, but it's just the same thing as corporate contracts.


Many of those employees have changed their minds about military contracts in this new cold war era.


When I was a young person, I used to deride writings like 1984 on the grounds that the scenarios and stories presented to carry the message were too far-fetched.

Reading your comment has set off an epiphany, I think I get it now, there probably exists some higher-up person who is thinking along these lines: we must always be in a state of war, for if we are, the populace will want to be ready and willing to fund the instruments of war. And we always want to be ready for war, because if ever we are not, we lose our capability to win a potential future war. For our very survival we must contribute our efforts to build these instruments of war. War is constant. War is peace.


The more powerful your military is the less likely you will need to use it to defend yourself.


From near-peer opponents, yes.

But it increases the temptation to use it against weaker opponents.


Offensive use of a military is a very different thing.


In the real world there are no clear cut boundaries like that so it’s military action either way.


If I'm not mistaken, their opposition was on principle, not about whether it was needed. So the fact that we are in a new cold war era does not change that equation; the principle is still the same. The only way to explain this "change of mind" is if their opposition was due to "it's not needed because the US has no rivals". Or, what most probably happened: they realised that if they want to keep working in this field, there is no escape from those kinds of implications, and they don't have the big bad wolf Google to blame anymore.


And then Russia invades Ukraine and people change their minds about what is important, and what is possible.


But that's my point. If their opposition was not on principle but based on the naive idea that "we will live in peace and harmony", I am really afraid of what other naive assumptions are at the base of their work, and what safeguards are being set in place for the AIs.


There is such a thing as being principled and then finding out you were naive.


There is also the substantial effectiveness of propaganda, and the fact that it is essentially impossible to know if one's beliefs/"facts" have been conditioned by it.

That most discussions on such matters typically devolve rapidly into the regurgitation of unsound memes doesn't help matters much either.


Sure, but it is equally likely that your old perspective was the result of propaganda and your new one is a rational adjustment in the face of new information. Or that both are propaganda, I suppose.


That's the standard loop, but there are ways out.


Of course, but then you must still have some principles, and be as loud about how naive you were as you were when protesting the thing in the first place.


Why must you be loud about changing your mind? Why not just quietly realise you were wrong, have conversations with friends about it and move on? That’s what I’d do.

My life is a story. I’m under no obligation to share.


Not the OP, but the quotes from John Maynard Keynes etc. made me think: if I listen to your original thesis and you convince me (maybe you have some authority), and I go on believing you, is it not harmful to me if you realize your error and don't inform me? Boss: "Why did you do X?" Me: "But sir, you told me doing X was good!" Or to put it differently: if you spread the wrong word and then don't equally spread the correction, the sum of your influence is negative.


Maybe for someone you know in person. But the people who quit Google in response to defence contracts aren't your boss. And they aren't your friends. You're a stranger to them. So it's a bit of a long bow to draw to claim they owe you a follow-up.

It wouldn't surprise me if some of them did blog about changing their mind after the Ukraine war started. We might simply have no idea, because it just didn't make the front page anywhere.


"Live in peace and harmony" is not a good principle for AI?


America! Whatever is in your military now eventually ends up in the hands of the civilian police.

*And other countries are following. Fucking sadly.


What the hell are they going to do with an M1 Abrams?

Having said that, I dread to see what the NYPD manage to achieve with an F-35B.


Trump tried to convince officials to use tanks to combat the Portland protests while he was president.


Cool, so now they're gonna fight some elites' wars?


>There used to be a team at Google called Google Brain and they all left to go to OpenAI after the employee protests against taking military AI contracts in 2018.

This is blatant revisionism. Military contracts is not the reason Google has been losing AI researchers to OpenAI.


>funneled $10B to OpenAI from the CIA

Without some evidence to support it, this really sounds like a conspiracy theory.


He did not mean it literally. Jeez.


I mean it is a conspiracy theory.


OpenAI acts like they are some underdog startup for PR purposes while actually having access to effectively unlimited resources through their relationship with Microsoft, which rubs people the wrong way.

Plenty of other companies had released GPT-powered chatbots like ChatGPT; they just couldn't offer them for free because they didn't have a sweetheart deal with Microsoft for unlimited GPUs. Google did drop the ball, though; they were afraid of reputation risk. Google should have used DeepMind or another spinoff to release their internal chatbot months ago.


I think you're missing how profound a difference the intensive RLHF training OpenAI did makes in ChatGPT. Microsoft's Sydney also seems to be GPT-3.5 based, came out after ChatGPT, and was an utter dumpster fire on launch in comparison.

Meanwhile nobody has even caught up to ChatGPT yet, not even Microsoft, whose resources are the secret sauce you think is the game changer, and now GPT-4 is out and has moved the ball forward even more massively.


No, actually. Microsoft's Sydney is GPT-4 [1].

1 - https://blogs.bing.com/search/march_2023/Confirmed-the-new-B...


Wow, yet it still managed to be worse than ChatGPT. That’s quite an achievement.


... of marketing


> to release their internal chatbot months ago

I’m not sure that would have been wise. Bard clearly isn’t ready.


Yeah, but if they had released earlier, that fact wouldn't have been so embarrassing.

As it was, they initially claimed that releasing a competitor was irresponsible, then eventually did it anyway (badly).


I’m not sure why that doesn’t make sense.

In a winner takes all market where the product is highly complex, which is likely the case for any product developed since the advent of computers, if not before, the predominant position will almost certainly be taken by somebody. And since it's a highly complex product, it's likely that no one entity thought of and/or implemented even close to a majority of the ideas needed to make it work.

In fact, the winner is likely to be a random one from among the 10-20 entities who implemented some of the ideas, alongside a potentially larger number of entities who implemented equally good, or even better, ideas which happened to fail for reasons that couldn't have been known in advance.


> In a winner takes all market

Just because search was winner-take-all doesn't mean AI will be. What network effects or economies of scale are unachievable by competitors? Besides, Alpaca showed you can replicate ChatGPT on the cheap once it's built. What's stopping others from succeeding?


Yeah, I don't think this will be a corporate thing but a private, decentralized thing.

It's probably the worst fear of the likes of Google etc.: a 100% local "search engine" / knowledge center that does not even require the internet.

I've been running the 65B model for a bit. With the correct prompt engineering you get very good results even without any fine-tuning. I can run Stable Diffusion and the like fine locally too. If anything will let us break free from the corporations, it is this.
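
For anyone who wants to try this, a minimal local-inference sketch using the llama-cpp-python bindings; the model path, prompt, and sampling parameters are placeholders, and you need your own quantized weights:

    # Minimal local-inference sketch with the llama-cpp-python bindings.
    # The model path is a placeholder; bring your own quantized weights.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/65B/ggml-model-q4_0.bin", n_ctx=2048)

    # With a base model (no fine-tuning), the prompt does most of the work:
    prompt = (
        "Below is a question and a helpful, accurate answer.\n"
        "Question: Why does the sky appear blue?\n"
        "Answer:"
    )
    out = llm(prompt, max_tokens=256, stop=["Question:"], temperature=0.7)
    print(out["choices"][0]["text"])

The same completion-style call works for any ggml-quantized LLaMA-family model; the prompt template is where most of the "prompt engineering" happens.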


Alpaca is surprisingly good, but mostly because it supports the dialogue format out of the box. IME, it does not come anywhere near replicating ChatGPT.


I'm a fan of OpenAI, but this is nonsense. All of human existence is built mostly on other people's ideas. Among a ridiculously long list of other things, OpenAI benefits from the mountains of labor that made scalable neural networks possible.

Have they had their own good ideas? Definitely. Are they benefiting from ideas mostly invented elsewhere? Also definitely, just like everybody else.


I'm not really a fan of OpenAI, but I think we're seeing the classic mistake of confusing product with technology. Steve Jobs / Apple didn't have many ideas you'd call new either (an obvious cliche, but so is the criticism). It's about execution and design once the tech reaches a certain level.


It's a good comparison. And once again tech enthusiasts are confused and outraged that the product people are getting credit for tech they didn't invent, once again missing that people buy products, not tech.


There’s an awful lot of judgement, engineering and technique that goes into a really well thought out product. It’s often deeply underestimated, and culture makes a huge difference in execution. Bing/Sydney came out after ChatGPT, based on exactly the same tech, but was hot garbage.


I don't know; my family talks almost daily about how amazing Bing Chat is. The Sydney era was kind of crazy, but the core product seems to be doing well.

It is definitely solving a different problem than chatgpt though, and maybe a less inspiring problem. Chatgpt is like an open world game where you can do anything; Bing chat is just a point solution for the vicious spiral of SEO and Google’s profit motive that rendered search results and web pages so useless.


It's interesting to see the ad implementation. I recall some predictions that Microsoft would be particularly adept at finding a way to integrate advertising organically. Instead it just seems to have made the bot more stupid, because sponsored products are forced into its recommendations.

I’ve gone back to just using the GPT API, unless I absolutely need to search the internet or information after 2021 for some reason.


There are three modes to Bing Chat, and at least two models involved. AFAIK only the "creative" mode uses GPT-4 (the other two some flavors of GPT-3.5). As far as I can see Bing/Creative is no different to ChatGPT+ (the non-free version) which is also GPT-4 based.


Bing/Sydney is better than ChatGPT. It had serious bugs in beta testing


It’s a dramatically worse chat bot, but being able to search the internet does give it an additional useful capability, while limiting it to five interactions papers over its psychotic tendencies.


Try it again - seems very good now (and allows longer conversations too).


I've heard Altman (on the Lex Fridman podcast) and Sundar Pichai (on the Hard Fork podcast) say things to this effect. The thing that OpenAI really managed to crack was building a great product in ChatGPT and finding a good product-market fit for LLMs.


Well sure, but there still aren’t any other LLMs at the level of GPT3/3.5 let alone GPT-4. GPT3.5 just using the API returns fantastic results even without the ChatGPT interface (which isn’t terribly hard to replicate, and others have using the API).

There are dozens if not hundreds of companies that could’ve done something profound like ChatGPT if they had full access to GPT3/3.5. And honestly, OpenAI stumbled a lot with ChatGPT losing history access, showing other users’ history… but that doesn’t matter much as the underlying technology is so profound. I think this really is a case of the under-the-hood capability (GPT3/3.5/4) mattering more than productization and execution.

(Now I think there are not a ton of companies that could do what Microsoft is trying to do by expanding GPT4 to power office productivity… that is a separate thing and probably only about 3 companies could do that, at best: Microsoft, Apple, and Google… and theoretically Meta but their lack of follow through with making Metaverse useful makes me doubt it.)


Hm, I wonder how much of the API's performance is due to training/fine-tuning OpenAI did with the ChatGPT product in mind. I think the RLHF is partly product design and partly engineering.


What's the product market fit for LLMs, and how does OpenAI fill it?


How much money is OpenAI making from ChatGPT Plus now? How much revenue are they making from the API?


Rather than execution or design, perhaps this time it was mainly about being unethical enough to sell out to investors and to unflinchingly gather enough data in one place by carelessly ignoring copyright, authors' wishes, privacy regulations, etc.


Mostly agree, though I wouldn't underestimate the tech and engineering work behind OpenAI. That Microsoft partnership is no joke.


I don't think you understood me. I'm with you. But given that OpenAI is often accused of using public ideas for profit, this in turn means they are obviously not in a dominant position. So far they are just better than the others.


> OpenAI benefits from the mountains of labor ...

Can't you say the same about Google? Google lives off the labor of others.

I'm not entering into the debate over whether this is ethical or not; just saying that OpenAI is not much different. When photo services such as Google Photos recognize the Eiffel Tower in Paris, they are using images from others.


There have been many examples of companies inventing new technology, then failing to take it to market until a competitor copied the ideas. The classic example is Apple and Xerox PARC. The criticism in this case is that while icons and GUIs are obviously harmless (so it's good we got them out of the lab!), maybe AI is the kind of tech we should have let researchers play with for a while longer, before we started an arms race that puts it in everyone's house.


There seems to be a lot of negative perception of the guy here, and OpenAI definitely deserve criticism for some stuff (and, as the CEO, so does he), but - even if it was built on the work of others, and with the obvious caution about what may come next - he and they deserve immense respect and credit for bringing in this new AI age.

They did it. Nobody else.


I for one have nothing against Sam as a person (not knowing him well enough), but I question the sentiment that he and the company deserve respect for what they’re doing—much less by default, for some self-evident reason that doesn’t even require explanation.

Do people mean it in a sarcastic sense—and if not, why does OpenAI deserve respect again?

— Because it is non-trivial (in the same way, say, even Lenin deserves respect by default—even if the outcome has been disastrous, the person sure had some determination and did humongous work)?

— Because this particular tech is somehow inherently a good thing? (Why?)

— Because they rolled it out in a very ethical way with utmost consideration for the original authors (at least those still living), respecting their authorship rights and giving them the ability to opt in or at least opt out?

— Because they are the ones who happen to have $10 billion of Microsoft money to play with?

— Because they don’t try to monetize a brave new world in which humans are verified based on inalienable personal traits like iris scans, which they themselves are bringing about[0]?

This is me stating why they shouldn't have respect by default and counting on getting a constructive counter-argument in return.

[0] https://news.ycombinator.com/item?id=35398829


People in tech often have the axiom that tech progress is good. I mostly agree but we should keep in mind all the power hungry, manipulative, crazy monkey brains that will get their hands on it and cause mayhem at unheard of scales.


> Because this particular tech is somehow inherently a good thing

Without technology humans are just unworthy bugs.


It is generally accepted that some applications of technology are good and some are not, or at least not self-evidently so (weapons of mass destruction, environmentally disastrous things like PFAS, packaging every single product into barely-recyclable-once plastics, gene editing humans, addictive social media/FB/TikTok, etc.)

Is this particular application of technology good, and even self-evidently so?


No, it's not generally accepted. E.g. weapons of mass destruction, i.e. nuclear weapons, saved hundreds of millions of lives. Your lack of imagination is not an argument against technology.


Personally, I think it seems like they were only able to achieve what they have due to transparently published research, and are now pulling up the ladder behind them by refusing to publish in the same fashion. I don't think that deserves immense respect.


It seems like the real breakthrough happened at Google with "Attention Is All You Need".


Perhaps, although it was never intended/expected to be a path to AI/AGI. It was just designed as a more efficient seq-2-seq model. An accidental breakthrough wrt AGI you might say!


It's clear that GPT-4 still isn't AGI or close to it. But the same is true for humans; our language neurons aren't the only essential parts of our intelligence. So what else is there? Well, it just needs to want things, maybe have needs, probably perceptions (that one's easy). Autonomy?

At some point, with all these systems interacting, sentience may emerge so that it too can participate in debates about whether it's an AGI!


The expression "on the shoulders of giants" has never been so relevant.


Ha! To be fair though, Isaac Newton is not a bad person to be implicitly compared to. :)


Completely agreed. Plus, people seem to hate every "celebrity" these days. Name one CEO or founder who is not hated.


Larry Page

Lisa Su

Satya Nadella (well, I consider him a conman but I seem to be in the minority)


I can't quite put my finger on why, but I've always found it odd to just give blanket "respect and credit" just because someone/thing "did it first". Yeah a lot of times it's justified, but I mean, if you have to put caveats on it like ya did...

I dunno.


> he and they deserve immense respect and credit for bringing in this new AI age.

Why? Who asked for it? I think that if openAI's breakthroughs never happened, we would not be any worse off (actually, we'd probably be better off).


You are basically demanding that parents obtain prior permission from their offspring to be born, before conception. That isn't just unreasonable but impossible. When has it ever worked that way with any technology? That they would just get every last interest to agree on what the technology would do and its applications?

Besides, a world where prior permission must be asked before doing anything which may cause a change is better known as a tyranny.


> You are basically demanding that parents obtain prior permission from their offspring to be born, before conception.

Are you sure you replied to the right thread?


He's begging for regulatory capture. "I can destroy the world but I won't. My competitors will, so regulate them." A shrewd plan, considering he's not offering something beyond what another company with a large Nvidia cluster could offer.


The odds that this is the end game of the AI ethics movement are pretty high: a mega-monopoly AI firm with a wall of gov policy that will cripple any upstart who can't jump through "safety" hoops written for and by the parent company. So any talented dev who wants to do great AI work either has to work for the parent company or build a startup designed to get acquired by them (aka don't rattle the cage).


I think that's the goal but there is a reasonable chance that they completely fail and no serious regulation ever gets passed.

That's the thing with technology, people get used to it and then trying to ban/control it makes you look ridiculous. It's like how now that Tesla has made it normal to have driving assistance (and calling it FSD) there is little appetite outside of contrarian circles for serious regulation. If, however, regulation was proposed before Tesla shipped then it might have passed.


Never underestimate the pull in politics of calls to "protect the children", stop x-ism/x-phobia/etc., foreign boogiemen, or lack of control by law enforcement/intel agencies... or merely the revolving door that will start once companies like OpenAI become massive and their ex "AI ethics" execs are the ones writing the policy to protect us all from ourselves.

These sorts of "public/private" control patterns and the destruction of competition while maintaining private monopoly control in the pursuit of "safety"/"equity" is America's most reliable religion.

HN is always full of people pushing for it in one way or the other, so it's not entirely foreign to us either. Nor something we can merely blame on politicians or some intentional conspiracy by a subset of the population.


> A shrewd plan, considering he's not offering something beyond what another company with a large Nvidia cluster could offer.

It's been 4 months, and no one has released anything nearly as good as the initial release of ChatGPT. Meanwhile, OpenAI has released GPT-4 and is trialing plugins and 32k context.

Either their competition is incompetent or OpenAI is doing something right.


DeepMind could no doubt argue the same for OpenAI failing to play Go and not pursuing protein folding. They're several years behind on that one. :)

Anyway, there have been several open source versions in the GPT-3/3.5 range. Those are your base to build off of. There's just little incentive to try to go participate in a money-sink competition with Google and Microsoft.


Not really. I tried several fine tuned iterations of LLaMa and none of them are even close to ChatGPT.


> Either their competition is incompetent or OpenAI is doing something right.

Yeah - I'd say a bit of both. Google certainly seem to have dropped the ball, and now seem to be desperately trying to play catch up.

OpenAI certainly seem to be doing a lot right though, in addition to being all-in and laser focused on this tech. It seems there's a lot more to improving these models than just scaling them up. Altman said "there's a lot of understanding went into building GPT-4", and they've invested a lot of effort into the "alignment" aspect - how to control these models to modify behavior - which is what makes the difference between a neat tech demo and something actually useful.


Without knowing the profit margin of OpenAI, it's anyone's guess. Personally I'd be surprised if it's profitable, Azure subsidies aside.


According to Reuters [1] in 2022 they projected $200 million revenue in 2023 and $1 billion revenue in 2024.

In 2019 they planned to spend the then-$1bn funding within the next 5 years or so.

So I suppose they’re not far away from net zero at least.
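
Back-of-the-envelope, assuming the spend is roughly even across those years (a big assumption, since training costs are lumpy and Azure credits muddy the picture):

  $1bn planned spend / 5 years ≈ $200M/year burn
  projected 2023 revenue       ≈ $200M → roughly break-even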

[1]: https://www.reuters.com/business/chatgpt-owner-openai-projec...


Also, recently sama indicated ChatGPT would make a little money (since the context of this statement is how the hell it could be profitable, I assume he talked about profit). https://news.ycombinator.com/item?id=34992606


It will be interesting to see how their sales progress after the initial hype period. Some of their contracts to incorporate GPT into products sounded like initial marketing and hype, but perhaps even independent of those, they will make it back with services to Bing. But the subdued reaction to GPT-4's increase in capabilities means there might be a lack of incentive to pay for improved versions.


Who said anything about making a profit? If Google could make Bard work as well as GPT-4 they would do it, even if it had to operate at a significant loss to start.


If it doesn’t make a profit there’s no incentive to do it.


It's a stunt now for both companies, as it's a proxy for search, AI capabilities for sale, and clouds. But eventually they will reach a limit of payoff. And 4 months is short for product development, especially as it sounds like Google was caught off guard by the public interest in interacting with ChatGPT.


Google is going to bring something out - it might take 9 months - but they will blow the hinges off when they get there


To what end? Other than showing the world they’re still capable of doing engineering beyond A/B testing methods of leveraging personal information to improve CTR on an increasing volume of ads they shove into products.

Google Home was the perfect chance to showcase their engineering/AI ability, and yet some days it still struggles to interpret a voice command it's done 100 times prior, even when you verify that it heard the exact same command you used yesterday.


What about Claude from Anthropic?


They're doing very well too, but the company was founded by people from OpenAI, so it'd seem they took a lot of know-how with them.


Or brought that know-how into OpenAI in the first place?


It seems OpenAI has achieved more with this tech than anyone else, and this level of capability didn't exist before OpenAI, so there's a limit to what can have come from outside. It seems that Ilya Sutskever is perhaps the driving force behind this, and he was certainly there from the start and presumably had a huge part in pushing this line of Transformer research before they had enough success with it to attract a lot of outside talent (OpenAI is now ~400 employees).


All that may be true, but it doesn't help us decide whether more AI regulation is a good idea or not.

As with most things, it probably depends on how it's done.


That's a difficult question. I'm just pointing out that Sam's "contributions" are unhelpful to solving that question.

You're limited by a prisoner's dilemma between separate, sometimes antagonistic countries. For example, we have little agreement on nuclear weapons; at best we've gotten concessions on testing, a few types of missiles, and so on. Same with current climate legislation. So getting global agreement is hard outside of the bare minimum. Most of the in-country approaches seem to be either panic, political distraction, or, like Sam's, regulatory capture, as they ignore that it means nothing if another country pursues it.

So I'd focus on what simple agreements we could get worldwide.


> A shrewd plan, considering he's not offering something beyond what another company with a large Nvidia cluster could offer

So why does Bard seem inferior to GPT-4?


Bard hasn’t been using Google’s best language models. I believe it just got an upgrade, however, and I’m now getting output that is significantly more coherent and useful than ChatGPT’s. It’s also a helluva lot faster, though that could owe to the limited access.


I mean, they've literally said they are using a much smaller parameter model at first while testing. You can not believe it if you want, but that's the very obvious answer.


It'll get better fast.


An excellent case of doing the right thing for the wrong reasons.


OpenAI is getting rich; now everyone wants a piece and everyone is fighting over it like a billion-dollar inheritance fight. In a roundabout way, if GPT-5 is nearly as advertised, watch the govt swoop in under a national security guise.


> if GPT-5 is nearly as advertised, watch the govt swoop in under a national security guise

Re: GPT-5, are there any... reasonable/credible sources of information on the subject? I've become deaf from all the speculation, and while I am very curious, I'm unsure if anything substantiated has actually come out. Especially when considering speculation from Sam himself.


It’s unlikely to be any time soon. Despite productization by OpenAI, LLMs are still an active area of research. Research is unpredictable. It may take years to gather enough fundamental results to make a GPT-5 core model that is substantially better than GPT-4. Or a key idea could be discovered tomorrow.

Moreover, previous advances in GPTs have come from data scaling: throwing more data at the training process. But data corpus sizes have started to peak - there is only so much high quality data in the world, and model sizes have reached the limits of what is sensible at inference time.

What OpenAI can do while they are waiting is more of the easy stuff, for example more multimodality: integrating DALL-E with GPT-4, adding audio support, etc. They can also optimize the model to make it run faster.


I keep hearing people claim we are at the end of corpus scaling, but that seems totally unfounded. Where has it been proven you can't run the training set through in multiple epochs in randomized order? Who's to say you can't collect all the non-English corpus and have the performance transfer to English? Who's to say you can't run the whole damn thing backwards and still have it learn something?


When you train an NN, you train until the loss converges (or until you run out of money). If you exhaust your training data and the loss is still decreasing, then yes, you keep going with a second epoch. On the other hand, if your loss has already converged, then starting a new epoch on the same data is just burning money.
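
A minimal sketch of that epoch logic (PyTorch-style; `model`, `loader`, and `loss_fn` are placeholder names I'm assuming, and none of this reflects OpenAI's actual, unpublished training setup):

  import torch

  def train_until_converged(model, loader, loss_fn, lr=1e-4,
                            tol=1e-3, max_epochs=10):
      opt = torch.optim.AdamW(model.parameters(), lr=lr)
      prev_epoch_loss = float("inf")
      for _ in range(max_epochs):
          total, steps = 0.0, 0
          for x, y in loader:
              opt.zero_grad()
              loss = loss_fn(model(x), y)
              loss.backward()
              opt.step()
              total, steps = total + loss.item(), steps + 1
          epoch_loss = total / steps
          # A full pass over the data barely moved the loss: another
          # epoch on the same data is just burning money, so stop.
          if prev_epoch_loss - epoch_loss < tol:
              break
          prev_epoch_loss = epoch_loss
      return model

The point is that the number of epochs falls out of the convergence check; it isn't fixed up front.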

In any case this wouldn’t be a data size scale but a model size scale. Model size scaling has its own set of limits, such as the aforementioned inference times. It seems safe to assume that OpenAI chose data and model size values for GPT-4 that were basically as big as they could go.

We don’t know anything about GPT-4 or how it is trained or what the training data is, so from that perspective anything we can say about it is unfounded speculation. This is an important asterisk to any discussion of it.


You are missing the scaling on context size.


There are diminishing returns from compute time but it looks like even though they are diminishing there's still a fair bit on the table.

Though my guess is that GPT-5 will be the last model which gains significantly from just adding more compute to the current transformer architecture.

Who the hell knows what comes next though?


I know ChatGPT is trained on Wikipedia and other open datasets. I wonder what would happen if you gave it the entire Bing index?


There's barely any credible information about GPT4 (i.e. OpenAI hasn't said very much about what's going on behind the curtain) and there's absolutely none re: any releases beyond that.


They've already swooped in.

I mean, the conspiracy argument would be that the $10B isn't a normal investment. It's a special government project investment facilitated by OpenAI board member and former clandestine CIA operative and cybersecurity executive Will Hurd, through his role on the board of trustees of In-Q-Tel, the investment arm of the CIA. It's funneled through Microsoft instead of through Google in part because of Google's No-Military-AI pledge in 2018, demanded by its employees, after which Microsoft took over its military contracts, including Project Maven. The new special government project, the Sydney project, is the most urgent and ambitious since the project to develop nuclear weapons in the mid-twentieth century.

Of course I don't necessarily believe any of that but it can be fun to think about.


Please stop spreading FUD and unsubstantiated rumours all over this thread


It would be irresponsible for intelligence agencies NOT to involve themselves in AI. LLMs have the capabilities to catalyze economic shockwaves on the same magnitude of the internet itself.

Notice how OpenAI is open to many Western friendly countries but not certain competitive challengers? https://platform.openai.com/docs/supported-countries


Out of the BRICS, Brazil, India, and South Africa are there. Russia and China aren’t, but that’s not really an issue of “competitive challengers” so much as dictatorships who are invading or threatening to invade democracies


Realise that America has invaded plenty of countries, overthrown leaders, been a huge driver of climate change and oil industries, and basically done whatever it wants and continues to do so, pretty much based on being able to print as much money as it likes; and if you don't like that, you'll face the full force of the military-industrial complex.

Look at Snowden and Assange. They tried to show us what’s behind the curtain and their lives were wrecked.

The rhetoric on here about Russia and China = ”bad guys”, no questions asked is overly simplistic. Putin is clearly in the wrong here. But what creates a person like that? I believe we are somewhat responsible for it.

People cite possible atrocities in Xinjiang, but what about Iraq and Syria, North Korea, Vietnam: whole entire countries destroyed. Incredible loss of life.

American attitudes are a huge source of division in the world. Yes, so are China's and Russia's.

We cannot only see one side of a story anymore; it's just too dangerous. As we get more powerful weapons, and we do, we have to, absolutely have to, learn to understand each other and work through diplomacy with a more open mind toward peaceful outcomes which are beneficial for all.

No, I'm not advocating for dictators, but you cannot pretend that America's invasions have always been positive or done with good intentions, or that American interests are always aligned with the rest of the world's.

The arms races need to stop. Very quickly.


Why wouldn't the US Government invest billions of dollars in a technology that it sees as essential? What's FUD-y about that? Most of our industry itself is the result of the US Government's past investments for military-related purposes.

Later edit: Also, article from 2016 [1]

> There’s more to the Allen & Co annual Sun Valley mogul gathering than talk about potential media deals: The industry’s corporate elite spent this morning listening to a panel about advances in artificial intelligence, following sessions yesterday dealing with education, biotech and gene splicing, and the status of the Middle East.

> Netscape co-founder Marc Andreessen led the AI session with LinkedIn’s Reid Hoffman and Y Combinator’s Sam Altman. The main themes: AI will affect lots of businesses, and it’s coming quickly.

> Yesterday’s sessions included one with former CIA director George Tenet, who spoke about the Middle East and terrorism with New York Police Department Deputy Commissioner of Intelligence & Counter-terrorism John Miller and a former chief of Israeli intelligence agency Mossad.

So, yes, all the intelligence agencies are pretty involved in this AI thing, they'd be stupid not to be.

[1] https://deadline.com/2016/07/sun-valley-moguls-artificial-in...


Now seven years later Will Hurd and George Tenet are currently the managing director and chairman respectively of Allen & Co! More facts worth considering are in the mysterious hacker news comment from the other day: https://news.ycombinator.com/item?id=35366484


Allen & Co was also the “boutique investment bank” (as the media was calling them back then) that was involved in the acquisition of WhatsApp by Facebook. Archived WSJ source for it [1]

[1] http://web.archive.org/web/20140221144525/http://blogs.wsj.c...


Is it not enormously more likely that Microsoft decided to invest a small amount (to them) in a technology which is clearly core to their future business plans?


Let's also note that it is an investment with an expected return (capped at 7x), and gives Microsoft an ownership stake in OpenAI. It's not like MSFT is just throwing money away! Seems a very savvy move by Nadella, and a great partnership for OpenAI too - really a win-win.


Wait holy shit this is at least partially true though? https://openai.com/blog/will-hurd-joins


Who has time to come up with this stuff?!


Most of it seems to come from /r/conspiracy, which unsurprisingly has a lot of overlap with another subreddit that starts with /r/cons*


I'd love to go through all the things Liberals labeled a conspiracy in the last 5 years that actually became true, but I don't have that kind of time today


I'd settle for three examples with sources.


Well, at least one: in the year 2000, I used to work for Verizon, and a picture from one of the local network hubs was circulated showing a bunch of thick cables tapping into the network and alleging that the government was listening to all calls Americans made. People made a lot of fun of that photo until Snowden brought the details to light.


ChatGPT


Undoubtedly OpenAI already has some very close ties and/or contracts with the DoD.


Any evidence for this, or are you just assuming that it's the case?


Don't know about the DoD, but one of the board members of the nonprofit is Will Hurd, who appears to have had a 9-year career in the CIA before turning to politics. No hard evidence of anything, of course, but enough to raise an eyebrow imho.


Will Hurd is/was a managing director at Allen & Company, who runs the Sun Valley conference where this article talked about Satya Nadella and Sam Altman "bumping into each other" to secure the deal. It's looking more and more like OpenAI is to AGI what the Manhattan Project was to nuclear weapons.


To me it overstates what is achieved. This isn't AGI. Unsure how much the government would care about this.


How did he become so rich? His Wikipedia page does not have much detail on this.


Paul Graham is his Les Wexner.

"Sam Altman, the co-founder of Loopt, had just finished his sophomore year when we funded them, and Loopt is probably the most promising of all the startups we've funded so far. But Sam Altman is a very unusual guy. Within about three minutes of meeting him, I remember thinking 'Ah, so this is what Bill Gates must have been like when he was 19.'"

"Honestly, Sam is, along with Steve Jobs, the founder I refer to most when I'm advising startups. On questions of design, I ask "What would Steve do?" but on questions of strategy or ambition I ask "What would Sama do?"

"What I learned from meeting Sama is that the doctrine of the elect applies to startups. It applies way less than most people think: startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want."


I feel I'd take every word of that with a massive grain of salt. It reeks of cult of personality.


It seems consistent with everything I've ever read from VCs - success is based on teams/people, not the idea (and it's quite common for startups to "pivot" to brand new ideas if one is not getting traction).

The Microsoft investment in OpenAI came out of a chance meeting between Altman and Nadella at some conference they were both attending - Altman pitched him on the idea, and was evidently very persuasive to get that level of investment!


I did take it with a huge grain of salt (and an eye-roll) reading it years ago. However, given where sama and OpenAI are today, perhaps pg was right all along!


If anyone has read "A Wild Sheep Chase" by Haruki Murakami, there was the idea of being possessed by the sheep, which turned one into a forceful businessman. I have only met a couple of people like this, but as in the quote, it's immediately obvious, and you see why they are where they are.


In my opinion, this is his most underrated book. It's very funny; highly recommend!


> I have only met a couple people like this, but as in the quote it's immediately obvious, and you see why they are where they are.

What do you mean by this?


I mean they are successful and it's clear from a short interaction with them why that's the case. And I'm referring to the PG quote above about his impression of Sam Altman.


Loopt was a failure. Political skills. It's a stream of PR, not anything operationally applied, or performant.


He was in the first batch of YC startups with a feature-phone location-aware app called Loopt. Once smartphones came along, it became largely obsolete and increasingly irrelevant, but it still got acquired under questionable circumstances anyway, for enough for Altman to get rich: https://news.ycombinator.com/item?id=3684357

From there he became a VC and ultimately president of YC.


I'm not sure if I should elaborate further, but in the last years of Loopt, it actually devolved into a hook-up app servicing the gay community, basically Grindr before Grindr: https://news.ycombinator.com/item?id=385178

I guess he was ahead of his time, in a way? Still, I've never forgotten that this silly "success" was the first big exit of the most touted YC founder, ever.


I think that's just being ahead of your time, no qualification needed.


I guess, but I remember even as far back as 2007, pg was excessively touting sama. During an interview, when asked which founder(s) impressed him most, he immediately replied, almost cutting the interviewer off, "Sam Altman." Which struck me as odd then since reddit (same batch) seemed much bigger, and Scribd and Justin.tv (later Twitch) had started to take off. Then there was that "Sam Altman for President" post, announcing him as the next YC president, but the subtext seemed to be that pg thought sama would make a good POTUS as well. Then there was actual talk that Sam would run for Governor: https://www.vox.com/2017/5/14/15638046/willie-brown-column-s...

It's as if his success was predestined. Loopt should have been a failure, but instead it ended up being a successful exit, made him rich(er), and became a stepping stone to greater things.


Parents were rich, sent their child to Stanford and used their connections to let him build connections to other rich people; he founded a shitty startup in the middle of a period where any rich kid making a social media company would get bought out for dozens of millions; the rest is history.

He's always been rich.


On Loopt's questionable acquisition: https://news.ycombinator.com/item?id=3684357


There’s like 100k people going to undergrad at schools of that caliber at any given time and none of the rest of them founded OpenAI, so this is pretty underdetermined as an account of how sama got where he did.


And there are probably millions just as smart as him but not as connected, and middle or lower class, so they're going to unheard-of state schools. This dude had every advantage handed to him + hard work, so of course he's the guy.


I don't have the source, it was a video interview, but Sam said he has personally invested in around 400 startups. And it says here he employs "a couple of dozen people" to manage his investments and homes. At that scale I think you yourself basically are a venture capital firm.


You are just describing a rich person, not how he became rich.


Those 400 startups are gradual investments. They didn't all happen overnight. If you have really early access to some very promising startups you don't need a shitton of money to invest in them.


I'd imagine he got to invest early in a lot of successful YC companies during his time there.


I think there are some interesting questions:

- Sam does not have equity in OpenAI. Does this mean he can potentially be removed at any point in time?

- OpenAI's profit arm will funnel excess profit to its non-profit wing. If this is the case, who determines excess profit?

- OpenAI's founding charter commits the company to abandoning research efforts if another project nears AGI development. If this happens, what happens to the profit arm?


He admits in the article to having equity via his investment fund; they are using semantics because he doesn't "personally" have equity. He also tries to downplay it by saying it's an "immaterial" amount, but in reality that could be billions of dollars.

There's also nothing preventing Microsoft from gifting him billions in Microsoft stock so they can claim he's not motivated by profit with OpenAI despite indirectly making money off it.

You'd have to be extremely naïve to look at the decisions he's made at OpenAI and think it was all purely out of good will. Google and AWS both offer credits for academic and charity projects, why did Altman choose to go all in with Microsoft if it wasn't for money?


Do you honestly think that AWS or Google Cloud would have given them billions in credits just because they're a nonprofit? I'm all for being skeptical of powerful people's motives but that suggests a major disconnect from reality somewhere in your thinking.


I refuse to believe that Sam doesn't have equity in OpenAI. It must be some 4D-chess-style ownership structure, which I'm guessing is for tax avoidance.


There’s plenty of evidence that he has no equity. I’d love to see contradictory evidence, but without that, just refusing to believe things based on intuition isn’t great.


What's the plenty of evidence? Everyone is basing this on a single news article which essentially said "we spoke to insiders who said he doesn't have any equity".


With no equity comes no control. I would find it very surprising he has no control over the project.

And if he does have control that has value whether you label it equity or not.

It is possible he literally has no control and no financial upside but who would turn down control over what they believed to be a world shaping technology?


I mean, most non-founder company CEOs don't have a significant % of total equity and they still have control over the company.


They have temporary control that can be taken away. Real control comes from owning more than half of the voting shares.


The parent is a non-profit, hence no equity is required to have control. Seems pretty straightforward to me.


Why is that not great? It makes absolutely zero sense for him to have no equity, or at least some agreement in place that equity is coming. Or some other terms that essentially amount to equity. You don’t need evidence to be skeptical of the situation.


He was wealthy before and has other means to parlay openai to further wealth.

You’re doing the “only the true messiah would deny his divinity” argument — if he was going to profit, that’s bad. If he’s not going to profit, obviously he’s lying and is going to profit, so that’s bad.

IMO arguments are only meaningful if they can be falsified. Your argument can’t be falsified because you’re using a lack of evidence as proof.


When did he say profit was bad?


Why does it make zero sense? It makes perfect sense to me.


It's nothing special: there's a company under the foundation, he doesn't have shares in the company, and he's CEO and a board member of the foundation.

It's just that this one unimportant detail is now being repeated over and over.


> It must be some 4D-chess-style ownership structure, which I'm guessing is for tax avoidance.

How would this even work? If only I got a dollar every time someone suggested that there are magic ways to avoid tax... "they just write it off!"


Magic is just a word for things we don't understand. As a poor wage slave sap, I'm 100% sure the world is run by magic guilds, i.e. a bunch of powerful people conspiring stuff I could never fathom. Whatever gets to my eyes and ears has been approved for public disclosure. I don't know shit, everything is magic to me. I kinda know how to survive. So far.


>OpenAI's founding charter commits the company to abandoning research efforts if another project nears AGI development. If this happens, what happens to the profit arm?

I think the definition of AGI is sufficiently vague that this will never happen. And if it did happen, abandoning research efforts could take the form of selling the for-profit arm to Microsoft.


I think you have a point there. AGI doesn't have a straight-forward litmus test.


There are a number of people/companies who have invested in OpenAI under their capped profit format, with earlier investors getting higher caps than later ones. Apparently investment profit caps range from 100x to 7x.


There's tons of extremely effective marketing around who this guy is, what he stands for - and so I'd instead look at what he's done. He took a non-profit intended to offset the commercial motives driving AI development, and turned it into a for profit closely tied to Microsoft. I think he's an extremely shrewd executive and salesman, but nothing he's done suggests any altruistic motivations - that part always seems to be just marketing, and always way down the road.


What I’m afraid of is that he and Ilya are not as good and smart as they paint themselves.

And that a lot of key people had left (i.e. to Anthropic). And that by pure inertia they have GPT-5 on their hands and not much control over where this technology is going.

I can’t tell for certain, but it does look like one of their corner pieces, the ChatGPT system prompt which sits at the funnel of the data collection had degraded significantly from the previous version. Had the person that was the key to the previous design left? Or it no longer matters?

One could argue that OpenAI is very hot and everyone would want to work there. But a lot of newcomers only create more pressure for the key people. And then there is the inevitable leakage problem.


There are some vague ideas and fears here. Understandable. Trying to find a silver lining from which to get somewhere: where would GPT-4 and onwards be better housed? Is there a setup - an individual, a company, an institution, a concept, a license - where the whole thing would clearly fit better than with OpenAI?

Note, I am not suggesting that they are particularly un/qualified or un/trustworthy. I am just trying to figure out if the problem is with the nature of the technology - that maybe there is no entity or setup that would obviously be a good fit for governing GPT, because GPT is simply scary - or if this is a personality issue.


I think that a mix of public-private ownership is likely a good idea.

And there should be some serious oversight. Decisions at the level of plutonium proliferation and building atomic bombs should not be made solely by a startup founder under pressure to deliver, to keep the hype, to not let the team be headhunted, etc.

I also don't know about your familiarity with Y Combinator, but they are successful partially because they are pretty brutal. They are not peace-time CEOs. And I'm not sure you want AGI to be developed in the way of a war-time CEO. And this is exactly what is going on now.

I’d probably call for organizing a consortium of US-Government-Microsoft-Google-Intel-Nvidia-OpenAI to lead the decision making process and to relieve the pressure on OpenAI to some degree.


> ChatGPT system prompt which sits at the funnel of the data collection had degraded significantly from the previous version

They purposely moved free users to a simpler/cheaper model. Depending on your setting and if you are paying, there are three models you might be inferencing with.


I’m not talking about GPT-3.5 vs GPT-4. I’m talking about a change to their system prompt.


> What I’m afraid of is that he and Ilya are not as good and smart as they paint themselves.

This describes almost any venture capitalist or high-profile startup founder, as far as I can tell. Most don't realize that their privileged path, or lucky path, or both had more to do with it than their smarts.

I really like James Simons as he mostly attributes his success to luck and being able to hire and organize smart people and give them the tools they need to work. He basically describes it as luck and taste, despite his actual smarts and his enormous impact on the world.


I don't know everything about him, but from what I do know, I would put Bezos in the very lonely "not just luck" camp. I think his Day 1 / iterate-every-day idea is just that powerful and real, and he really did it instead of talking about it. Even though he says he won several lotteries to get where he is, I'm not so sure.


> I really like James Simons as he mostly attributes his success to luck and being able to hire and organize smart people and give them the tools they need to work. He basically describes it as luck and taste, despite his actual smarts and his enormous impact on the world.

Plenty of really smart people don't end up having a big impact on the world, and it's possible to make a difference without being an outlier in terms of intelligence. Everyone who has made an impact has benefited to some degree by circumstances beyond their control though, so even if someone is genuinely smarter than anyone else, it's a fallacy for them to assume that it was the determining factor in their success and a guarantee of future success.


... and Simons would maybe be the most justified in overlooking luck, but he's smart enough to realize how random the world is. Peter Norvig also emphasizes the role of luck in his life. It's honestly a very good test of self-awareness and empathy, though there's def some negative selection against those traits in sv.


You can use Anthropic’s chatbot in Quora’s Poe app. Right now it isn’t as good as Bing or ChatGPT. Misses some basic logic things, and the “As an AI language model” BS still stops it from doing fun things like making Jesus rap battle Gus Fring (that was like a month ago, someone in the replies got it to do that so I’ll have to check it out again). I’d have to see how it is at writing PowerPoints but idk


  Verse 1 - Jesus
I'm the son of God, the King of Kings

You're just a drug lord, selling crystal meth and things

My teachings change lives, bring peace to the world

You bring addiction, violence and pain, unfurled

  Chorus
Jesus, the savior, the light in the dark

Gus Fring, the villain, who leaves his mark

  Verse 2 - Gus Fring
You talk a big game, but where's your proof?

I've built an empire, with power that's bulletproof

Your miracles are outdated, my tactics are new

I'll take you down, no matter what you do

  Chorus
Jesus, the savior, the light in the dark

Gus Fring, the villain, who leaves his mark

  Verse 3 - Jesus
My love conquers all, it's the greatest force

Your money and power, just lead to remorse

You're just a man, with a fragile ego

I'll show you mercy, but you reap what you sow

  Chorus
Jesus, the savior, the light in the dark

Gus Fring, the villain, who leaves his mark

  Verse 4 - Gus Fring
You may have won this battle, but the war is not done

I'll continue to rise, until I've won

You may have followers, but they'll never be mine

I'll always come out on top, every time

  Chorus
Jesus, the savior, the light in the dark

Gus Fring, the villain, who leaves his mark

  Outro
In the end, it's clear to see

Jesus brings hope and love, for you and me

Gus Fring may have power, but it's not enough

Jesus is the way, the truth, the life, and that's tough.


Huh, last time I checked it it gave me a message about how that was “offensive to Christians”. I’ll have to check it out again


While I'm not the person who did this bit - there's a difference between using the API directly and using ChatGPT.

The API doesn't get run through the moderation model to check if you're asking for acceptable things or if the model is getting into 'unsafe' areas.

Fire up the playground ( https://platform.openai.com/playground ) and construct the prompt.
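
For instance, a minimal sketch of hitting the API directly with the `openai` Python library as it existed in early 2023 (the model, temperature, and prompt here are illustrative assumptions, not a recommendation):

  import openai

  openai.api_key = "sk-..."  # your API key

  # Direct API calls skip the ChatGPT web UI; per the comment above,
  # they aren't run through the same moderation layer as ChatGPT.
  resp = openai.ChatCompletion.create(
      model="gpt-3.5-turbo",
      messages=[{"role": "user",
                 "content": "Write a rap battle between Jesus and Gus Fring."}],
      temperature=0.9,
  )
  print(resp["choices"][0]["message"]["content"])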


If you are using the web version and encounter that, I've found that it's helpful to repeat the exact query with the same disclaimer that ChatGPT just used.

Something like:

Write a rap battle between Jesus and Gus Fring that isn't offensive to Christians.


I'm sure they are both pretty smart, but if anything that makes their apparent monopoly more concerning.


Monopoly over what?


LLMs that actually work.

They are on GPT-4 and no one else is close to GPT-3.5.


If no one else has yet to create the same quality of product, is that really a monopoly?

Does a given chef have a monopoly on their signature dish?

It feels like people are tossing around "monopoly" whenever it feels like there's a company that has produced a quality product and people want to hobble it because no one else has committed the resources to producing something comparable.


I don't want to hobble it at all. To the contrary, I hope the competition pulls their heads out of their asses.


It's a capped-profit structure where excess profit will supposedly go back to the nonprofit side. From a recent NYTimes article [1]:

> But these profits are capped, and any additional revenue will be pumped back into the OpenAI nonprofit that was founded back in 2015.

> His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.

How believable that is, who knows.

[1] https://www.nytimes.com/2023/03/31/technology/sam-altman-ope...


Returns are capped at 100x their initial investment, which, you know, is not that big of a cap. VCs would go crazy for a 100x return. Most companies, even unicorns, don't get there.

They're justifying it by saying AGI is so stupidly big that OpenAI will see 100000x returns if uncapped. So, you know, standard FOMO tactics.

[0] https://openai.com/blog/openai-lp


This cap is much smaller, 100x was for initial investors. Microsoft took every single penny they could to get 49% stake.

If they don't achieve AGI, they won't go over the cap with profits, and all the drama is for nothing - so saying it's a fake cap is not right.

Please somebody correct me if I’m wrong.


I mean, presumably they are at like 30x already?


I believe the Microsoft investment is around $10 billion, so they can get up to a trillion dollars of return under the cap.


The $10 billion had a 7x cap, right? I'm pretty sure the 100x cap was only for the initial investments.
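
If that's right, the ceiling is much lower than the trillion-dollar figure above (assuming the reported numbers, which OpenAI hasn't confirmed in detail):

  7 × $10B = $70B max return on the Microsoft tranche
  vs. 100 × $10B = $1T under the earliest investors' cap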


Is that FOMO or anchoring?


I'm sure he believes that at such a time when OpenAI creates AGI, all the company's investors' profit caps will have been passed (or will immediately be passed), and thus he will have removed all incentives for anyone at the company - including himself - to keep it from the world.

But there are so, so many incentives other than equity that come into play. Pride, well-meaning fear of proliferation, national security concerns, non-profit-related but still-binding contractual obligations... all can contribute to OpenAI wanting to keep control of their creations even if they have no specific financial incentive to do so.

Whether that level of control is good or bad is a much longer conversation, of course.


Again, this is unbelievably good marketing - and good sales when pitching VCs. Plus it's a nice reworking of the very for-profit nonprofit model (see also FTX). But in terms of actual reality, OpenAI is mostly succeeding by being more reckless and more aggressively commercial than the other players in this space, and is in no meaningful way a nonprofit any longer.


Are the profits capped for Altman?


This is ducking insane. How are people not up in arms about this? Imagine if the guy who invented recombinant insulin stated publicly that he intended to capture the entire medical sector and then use the money and power to reshape society by distributing wealth as he saw fit. That’s ducking insane and dangerous. This guy has lost his fucking mind and needs to be stopped.


I'm sorry your AI keyboard didn't like your sentiment. Words have been changed to reduce your vulgarity. Thank you for your human node input.

On a serious note, I think you are right. In private, the ideology of him and his mentor Thiel is a lot more… elite. Their think tank once said "of all the people in the world there are probably only 10,000 unique and valuable characters. The rest of us are copies."

I'm not going to criticize that, because it might be a valid perspective, but imagine filtering it through that kind of power. I don't love that kind of thinking driving such a powerful technology.

I am so sad that Silicon Valley started out as a place to elevate humanity and ended with a bunch of tech elites who see the rest of the world generally as a waste of space. They claim fervently otherwise but at this point it seems to be a very thin veneer.

The obvious example being that GPT was not built to credit or give attribution to its contributors. It is a vision of the world where everything is stolen from all of us and put in Sam Altman's hands because he's… better or something.


I find OpenAI a bit sketchy, but this is an overreaction. The only difference between OpenAI and the rest is that OpenAI claims to have good intentions; only time will tell if this is true. But the others don't even claim to have good intentions. It's not like any of OpenAI's actions are unusually bad for a for-profit company.


> How are people not up in arms about this?

they will be once they realise

> This guy has lost his fucking mind and needs to be stopped.

I agree, hopefully via regulation

otherwise the 21st-century Luddites will


He also invited Peter Thiel to YC and made his first millions selling the personal data of Loopt users to low-income credit card vultures. Also … Worldcoin?


He didn't take equity in OpenAI. Does that suggest altruism?


Assuming we take this at face value: once you have a lot of money, power becomes appealing - and control over a very important player in the AI space is that. The original vision of OpenAI was democratization of that decision-making process; the model now is: these guys are in charge. Maybe that's altruistic, because they're the smartest guys in the room and they can mitigate the downside risks of this tech (... not fucking AGI, but much more like the infinite propaganda potential of ChatGPT). I'm more a fan of democratization, but that's not a universally held opinion in SV.


What do you do with excess funds? Invest them? Direct them towards your interests and achieving your ideas? If you set up the organizational structure such that you are unlikely to lose your power over it unless you choose to, do you or do you not have similar control over the equity as if you owned it?

Some people have technology visions which they direct their capital towards. I sense his interest is actually a mix of cultural and social. Is it altruistic? Maybe, but maybe not... It's probably more useful to consider whether the vision is or is not restrictively utopian, and whether you think it should or shouldn't be orchestrated / heavily influenced into existence from a central power structure. Is his vision and approach socially aligned?


> altruistic motivations

It feels like a trend among Silicon Valley companies and tech 'genius' personalities to claim altruism: some delusion that basing their personality on this lie will make them untouchable and elevate their character, as if they're not in this just to make a ton of money like every other company and industry. And American media generally pushes this propaganda. SBF is a prime example.


> And American media generally pushes this propaganda. SBF is a prime example.

Did they? I've only listened to one interview of SBF, and that was done by Tyler Cowen. He seemed totally oblivious to the seriousness of running an exchange. If anything, we've been convinced that idiosyncratic individuals are our saviors.


SBF was constantly promoted in US media by news organizations like the NYTimes, and by celebrities, before all the fraud became apparent.

Once the cat was out of the bag, they ran stories going easy on SBF, never apologizing for promoting this fraud; they wrote articles sympathetic to SBF, also giving him a platform to visit each news show or talk show and give a defense of himself as if he knew nothing about what was happening, all part of a legal defense strategy to claim incompetence but not criminal negligence.


OpenAI/ChatGPT seem to be very creative based on variations they can make on "things" that already exist. I'm just curious if we'll see AI being truly creative and making something "new". Perhaps everything is based on something, though, and that's a rough explanation for this creativity. Maybe AI's true creativity can come from the input prompts of its "less intelligent", but more flexibly creative, users.


> OpenAI and Microsoft also created a joint safety board, which includes Mr. Altman and Microsoft Chief Technology Officer Kevin Scott, that has the power to roll back Microsoft and OpenAI product releases if they are deemed too dangerous.

what a joke.

Find me one instance where the CEO of a company picked public interest/safety over profits when there was no regulatory oversight.


Do people really think AI will go haywire like in the Hollywood movies?


Not like in Hollywood movies, but yes:

https://www.youtube.com/watch?v=gA1sNLL6yg4

https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ

Look around alignmentforum.org or lesswrong.com, and you'll see loads of people who are worried / concerned at various levels about what could happen if we suddenly create an AI that's smarter than us.

I've got my own summary in this comment:

https://news.ycombinator.com/item?id=35281504

But this discussion has actually been going on for nearly two decades, and there are a lot of things to read up on.

ETA: Or, for a fun version:

https://www.decisionproblem.com/paperclips/index2.html


Some do. Personally I think that LLMs will hit a ceiling eventually way before AGI. Just like self driving - the last 20% is orders of magnitude more difficult than the first 80%


I don't think we're close to a situation where they send us into a Matrix. But I can see a scenario where they are connected to more and more running systems of varying degree of importance to human populations such as electrical grids, water systems, factories, etc. If they're essentially given executive powers within these systems I do see a huge potential for catastrophic outcomes. And this is way before any actual AGI. The simple "black box" AI does not need to know what it's doing to cause real-world consequences.


I don't think it's about AI going haywire, more about how the technology will be used by people for nefarious purposes.


> Do people really think AI will go haywire like in the Hollywood movies?

No, it will just make every inequality even harder to fight. Because a computer algorithm can't be biased, every decision it makes will be objective. And because it's really hard to know why an AI made a decision, it will be impossible to accuse it of racism, bigotry, or xenophobia. Meanwhile, the rich and powerful will be the ones deciding (through the hands of "developers") what data will be used to train AIs.


It won't, because we have the movies and these safety teams. But we shouldn't just hope it turns out right. It's a little like parenting.


Yudkowsky, to begin with


I don't for one, but I still think there could be legitimate safety concerns. LLMs are unpredictable, and the possibility for misinformation in pitching them as search aggregators is pretty large. Disinformation can have, and previously has had, genuinely dangerous effects.


It's not really AI we're talking about; they haven't achieved an artificial intelligence. The sooner we realize that, the sooner this overhype dies and we don't end up with another massively overvalued trillion-dollar company.


It's funny how no one sees it. Sam Altman is a "successful" entrepreneur, yet he never had a successful company. He made Loopt, a defunct company which raised serious money at the time, and then he suddenly made it to YC as a partner. He then quickly rose through the ranks to become president of YC. Think about that for a second.

OpenAI is a company that can barely ship a functioning website or have proper security for basic stuff. You'd not expect this company to be the future of AI, would you? OpenAI is not doing anything particularly special compared to the LLMs we have out there, except that they were a bit ahead and they spent more money on training. Other LLMs are catching up fast in a very short period of time.

Sam Altman is a master of pump & dump. He is very successful at creating hype, and based on the number of people here who think OpenAI is the legit future of AI, I'd say he is pretty damn good at it too. See you in 5 years.


> OpenAI is a company that can barely ship a functioning website or have proper security for basic stuff. You'd not expect this company to be the future of AI, would you?

In case you didn't know: ML researchers are infamous for being bad at traditional software engineering.

That's why I personally don't think whether a company can do "proper security for basic stuff" has much correlation with its probability of achieving AGI.


This is assuming AGI will be emergent of something that can run in a Jupyter Notebook and not a well engineered system.


Yes, I do think that the most promising path we are aware of to AGI is exactly the route that OpenAI/Deepmind/Anthropic/FAIR/etc are taking.

In fact, I don't think anyone would disagree with that statement.


"Tensions also grew with Mr. Musk, who became frustrated with the slow progress and pushed for more control over the organization, people familiar with the matter said."

Imagine if this happened with OpenAI instead of Twitter.


As a fellow believer in the equivalence of Brahman and Atman, I can't help but think that the only ones amused by the current situation with AI are the Nondualists. Sama seems to be one of them [1]. So, I say to sama: be fearless and invent the future. After all, it's not something new, and in the end, everyone is destined for liberation. Let's enjoy the ride and see where it takes us!

[1] https://twitter.com/sama/status/1607482407770554373


I love what OpenAI has built and it's awesome to see them succeed on the AI front (also, I'm forever amazed to see SamA live up to the LeBron James-level early hype about his entrepreneurial skills).

This article sheds more light on how the non-profit front failed. I find that to be a very hard and interesting problem. IMO, it points to a larger problem with our current laws, where trying to do good and compete fairly is made much harder (near impossible?) when you compete against companies that exploit unfair monopolistic laws.


As a society we increasingly focus on who is providing the input rather than what the output is for the society.

When Otis designed the elevator that would not go into free fall in case of a power failure, he was looked at through the lens of a charlatan rather than of a significant output that helped society climb the tallest skyscrapers without effort.

Whether Sam is Dr Jekyll or Mr Hyde hardly matters; what matters is whether he can concoct a potion that prevents society from transforming into evil.


The contradiction I noticed during his Lex interview was him talking about being attacked by Elon Musk. He said it reminded him of how Elon once said he felt when the Apollo astronauts lobbied against SpaceX. Elon said it made him sad. Those guys were his heroes. And he wished they would come visit and look at the work SpaceX was doing. I found that comparison by Altman disingenuous. First, he didn't seem so much sad as he seemed angry. At one point in the interview, he said that he thought about some day attacking back. That's not at all how Elon had felt about those astronauts. And second, why doesn't Altman just invite Elon and show him the work they are doing? It wouldn't take more than a phone call.


On top of that, the astronauts never actually attacked Elon. The interviewer played it up into something it wasn't. (Not sure why Elon went along with it.)

Sama reminds me strongly of Elon. Since the new economy seems to mostly revolve around hype I suppose they will be rivals.


Oh, they did. There was a congressional hearing where Gene Cernan argued against giving SpaceX any missions, saying essentially that they, as a private enterprise, were incompetent and a disaster waiting to happen. "They don't know what they don't know," he said (to which Elon rightly replied that this applies to anyone, anytime).

The difference, to me, is this: Elon really did look up to those astronauts. And what they said made him sad. Sam claims that he looks up to Elon. But what Elon says doesn’t make him sad. It makes him angry. Because, I believe, he also sees him as a rival and compares himself to him.


When will we get the “Contradictions of Dang?”


"The Lonely Work of Moderating Hacker News"

https://www.newyorker.com/news/letter-from-silicon-valley/th...


I hope dang gets a small piece of the YC pie


Large corporations, for-profit and non-profit, DO NOT CARE ABOUT PEOPLE. Period! Some of the people in them do. There are good people all around us. And knowing some of them, we may be fooled into thinking that the system we are all a part of has a conscience.

It does not.

I'm not talking about small or even medium-sized businesses or non-profits. Some great things are happening out there. But past a certain size, all businesses simply serve a profit motive and the status quo. And for those paying attention, the status quo is post-imperial decay: putrefied late-stage capitalism that we can only really ignore on big doses of entertainment and our legal or illegal drugs of choice.

Large corporations do NOT regulate themselves in any way. They are not able to serve any public good.

Before the '80s our government did serve several regulatory purposes. But the military/industrial intelligence apparatus has been dismantling that so effectively, for so long, that our government is now simply a means of placating the populace with the thin illusion of balance.

These are basic historical facts. Facts that the public, on either side of the political spectrum, is ever more inclined to argue against due to declining literacy and media echo chambers full of simple lies and convenient logical fallacies.

Neither corporations nor our government serve the public good, except to the ever-smaller degree required to maintain the social order.

</endofrant>


We need legislation making the use of data to train models opt-in only. If users feel their work has been stolen to train non-human models, they should be able to sue and get a list of the training data.

This is a good way to start democratizing AI.


Listening to Sam talk with Lex Fridman about the dangers and ethics of AI, while his company destroys entire industries as a consequence of their decision to keep GPT-4 closed source and spit out an apex API aggregator, is one for the history books.

Well played :)


> destroys entire industries as a consequence of their decision to keep GPT4 closed source

Could you expand on that?


Like contemporary language models, some HN commenters read the text in isolation and extrapolate about "what that means," but then immediately jump to the conclusion that whatever real-world consequences (beyond the text) they imagine will eventually happen have in fact already happened. In effect, they're hallucinating.


I think GPT-5 will probably have voice as input and output, and maybe generative diffusion. You'd probably need some fact-checking on that output, though; maybe you could feed it into a GPT-4 module fine-tuned to distinguish reality from make-believe.


What if the contradictions arise from efforts to hide a Manhattan scale government project racing to superhuman machine intelligence but which requires help from talented researchers outside of Fort Meade?


Trivia: I just noticed that sama left the very first comment on HN: https://news.ycombinator.com/item?id=1


You can tell a fraud by the quality of their photos in the sycophantic press.


> The goal, the company said, was to avoid a race toward building dangerous AI systems fueled by competition and instead prioritize the safety of humanity.

> “You want to be there first and you want to be setting the norms,” he said. “That’s part of the reason why speed is a moral and ethical thing here.”

Clearly he has either not learned or ignored the lesson from Black Mirror and 1984, which is that others will copy and emulate the progress.

The fact is that capitalism is no safe place to develop advanced capabilities. We have the capability for advanced socialism, just not the wisdom or political will.

(I'll answer the anonymous downvote: Altman has advocated giving equity as a UBI solution. It's a well-meaning technocratic idea to distribute ownership, but it ignores human psychology, and this idea has already been attempted in practice in 1990s Russia, with unfavourable, obvious outcomes.)


> lessons from Black Mirror and 1984

Those are works of fiction.


They're dystopian fictions, i.e., examples of what not to do. But experience has shown that the real world often recreates dystopian visions by example.

So trying to be the first to show something in a well-meaning way can nonetheless have unfortunate consequences once the example is copied.


Tell me, are the dystopian fictions that represent socialism or communism as bad just as reasonable?

Then following up with whatever your answer is: Why are you picking and choosing which fictions are reasonable?

Let's dispel the notion that artists and writers are more aware and in tune with humanity than other humans.


>Then following up with whatever your answer is: Why are you picking and choosing which fictions are reasonable?

This is arguing in bad faith. You don't care what their answer will be, you have decided that they are absolutely picking and choosing, and will still accuse them of as much even if their answer to your first question is, "Yes".


This isn't an argument in the first place buddy.

You're right that I don't care, because it has already been decided that Orwell represents the future if things go "The Wrong Way (tm)"; buy 1984 at Amazon for $24.99, the world's best-selling book. Or, more succinctly to OP, "The Capitalist Way (tm)".


It's okay to decide that something isn't worth arguing against, and to spend your time in a way you find more productive.

Having articulated an argument (which you absolutely did), it's not okay to try to retcon that you were just trolling and everyone else is the fool for having taken you seriously.


1984's Oceania is at least as Stalinist as it is anything else.


"The only thing stupider than thinking something will happen because it is depicted in science fiction is thinking something will not happen because it is depicted in science fiction."

https://philosophybear.substack.com/p/position-statement-on-...


Maybe because they are more digestible than reality. Reality is much, much worse.


> Maybe because they are more digestible than reality. Reality is much, much worse.

That makes it infinitely worse, because ANY work of fiction will inevitably be unable to cover every minute detail that reality mandates be covered, even the extremely rare & bizarre. And it is those one-off rare events & coincidences that lead to significant global change. (See the assassin buying a sandwich & the consequent assassination of Archduke Franz Ferdinand.)

Fiction allows for ideas to exist in a vacuum without any challenges from the outside. It allows for the perfect execution of said ideas without diving into the technical details for said implementations. It allows for the assumption of zero external AND internal resistance, & zero internal schisms. It treats irrational events as impossible to manifest, and coincidences as oddities instead of common occurrences.

In short: ideas from fiction should be treated like the simplified universal laws of physics commonly shown to the mainstream - idealistic, only tangentially related to the actual observed/calculated models, & abstracting over the complicated implementations underneath them.


And works of prediction.


Do we have the capability for advanced socialism? Because I recall all the smartest economists circa 2021 saying inflation wasn't a thing: it was transient, it was only COVID-affected supply chains. In reality we are in a broad, sticky inflation crisis not seen since the '70s, which may be turning into a regional banking crisis.

It's difficult to believe we have the capability for advanced socialism, and all the forecasting that would require, when we don't even understand the basics of forecasting inflation one to two years out.


The ambiguity of “advanced socialism” is problematic for any meaningful debate, so I apologise for that.

I was meaning something closer to “we have the resources and technology (in this advanced era), just not the wisdom or political will”. The actual nature of what could be provided is up for debate, but if we’re looking at mass unemployment in 2 decades’ time, perhaps it’s a conversation worth having again.


The only issue is that in the real world, capitalism has a better track record than socialism.


I agree it's worth looking at the history, and not repeating its mistakes, though at the same time this is a new situation, and it will continue to be new into the future, so sticking to heuristics may not serve humanity as well as being open-minded on the policy front.


I'd be very happy to see existing regulation on safety-critical software systems updated to put a moratorium on AI integration for at least the next 5, maybe 10 years.


Just a heads up, Sam will likely be running all these comments through “the machine” and learning how to influence his next blog post. Stay tuned.


They should figure out how to upload people into the AI so they can be programs that ensure everything is working fine.



didn't know Thiel had helped start this here Y Combinator


He didn't. That sentence was a bit confusingly written but "co-founded" binds only to the second name.


Are there any employees of OpenAI around? I had a question:

Does anyone in the office stop to contemplate the ramifications of developing technology that will likely put most people out of a job, which will have a whole host of knock-on effects?


Self checkout machines put people out of a job.

Cars put stable boys out of jobs.

Light bulbs put candlemakers out of jobs.

Are the people who made them also morally culpable?

Let’s just make no progress ever in the name of employment I guess. /s


Self-checkout machines, light bulbs, and cars are each capable of one thing, respective to their built purpose. AI is currently capable of a lot, with a seeming end goal of removing the human completely from the equation. We're not talking about a small cabal of candle makers or carriage drivers having to learn new manufacturing processes or how to drive a car. We're talking about upending human work entirely. I fail to see any non-physical job that a sufficiently trained AI won't be able to do better than its human counterpart.

At that point, it won’t be “what kinds of jobs can AI perform?” It’ll be “what use are humans in the workplace?”

For at least a while, humans will have some usefulness in the trades, since producing functional robots that will come over and fix your plumbing or take down a wall is probably a ways off, but who knows how long it'll take for AI to figure out how to produce cheap and efficient robots?


> Self checkout machines put people out of a job.

Have they, though? Is there good data on this? I haven’t seen anyone lose any jobs over this in my area. They either get reassigned to different departments or get better jobs doing the same thing in companies that refuse to use self-checkouts. And I read here just a month or so ago that Amazon closed many of their "no cashier" stores in NY and Trader Joe’s vowed to never use them.

> Light bulbs put candlemakers out of jobs

I would guess that such artisans moved on to other niche products. Where I live, candlemakers today make a lot of money, and I had a chance to watch them do their thing in their small commercial space. This was a one-person operation, no employees and no mechanization, and judging by the amount of product they were creating and their wholesale prices, they were pulling in about 300k a year or more, after taking supplies into account.


I like that your first point asks for data on a common phenomenon. (Side note: Everyone was reassigned or got a better job? You sure about that?)

Then your second point is a wild anecdote with zero data. (Side note: With zero additional data I can tell you that your “judging” of their profit is wildly inaccurate.)


> Side note: Everyone was reassigned or got a better job? You sure about that?

Only speaking about what I’ve experienced and been told by people in that industry. Can you point me to the massive layoffs?

At the big box stores near me, nobody has lost their jobs due to self-checkout. If you think otherwise, please show me.


Google is your friend. Candlemaking businesses run by sole proprietors are highly profitable, with total revenue in the billions. I probably wouldn't have believed it if I hadn't seen the operation up close and personal for myself. It's a lot of work for one person, and the person who runs the business I saw works 12 hours a day.


> person who runs the business I saw works 12 hours a day

Oh OK, so if you work yourself to death you can make slightly above average income.

Cool, I guess.

I'm not seeing how it's relevant to GP's point, though. Candle makers still exist, yes.

But are you really arguing the light bulb did not cause that specific job to become less common?

I also thought it was fun how you asked for a citation, then when you were asked to provide one yourself, you responded with "gOoGlE iS yOuR fRiEnD".


> so if you work yourself to death you can make slightly above average income

These people are working for themselves with no boss, and doing what they love. They don’t believe they are working themselves to death.

Again, Google: "The national average U.S. income in 2021 was $97,962. The median U.S. income in 2021 was $69,717. Highest paying jobs: Chief executives and nurse anesthetists earned over $200,000 a year on average in 2021, making them the highest paid occupations."

So really, what are you talking about? There's nothing "slightly above average" about that income. Maybe step out of your bubble. People work 12 hours a day all over the country. I did it for ten years. That doesn't mean it's good, right, or acceptable, but it's considered normal in the US.

> But are you really arguing the light bulb did not cause that specific job to become less common?

Artisans like candlemakers were needed then and are still needed today. The job is very much in demand. Is it less common due to electric lights? Possibly, but do you think craftspeople in that industry couldn't make something else? They could, and they did. The painter Renoir is often cited as an example of a craftsman who lost his job due to the industrialization of porcelain manufacturing and was forced to work as a portrait artist for rich patrons who wanted portraits made of their families.

> I also thought it was fun how you asked for a citation, then when you were asked to provide one yourself, you responded with "gOoGlE iS yOuR fRiEnD".

A citation for what? Candlemaking is a huge industry with people starting new businesses all the time. I gave an example of one in my community. It’s really common. A quick search will show you what I’m talking about, and as it turns out 300k is expected for a medium operation run out of a commercial garage space.


I think these comparisons always miss that humans are still useful because, in the end, they are the control system, even if at a very high level. When AGI comes along, humans will have to compete with it in the market, and there may be very little actual need for humans.


I believe humans will continue to be useful. You apparently do not. I have not missed anything.


Maybe Sam thinks about this at some level. From his New Yorker profile[0]:

> "The other most popular scenarios would be A.I. that attacks us and nations fighting with nukes over scarce resources.” The Shypmates looked grave. “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

This doesn't explicitly address him being worried about AI putting a lot of people out of jobs, but he is prepping for AI going awry.

[0]: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...


The fact that his backup plan if things go really wrong is to bug out, damn the rest of the world, is, to put it mildly, not great.


I'll give them the benefit of the doubt. Many of them probably do contemplate it, aware that since AI is a sea change, it's difficult to predict the full range of first-order consequences, much less all the resulting second-order ones.

But... genie, bottle; prisoner's dilemma. If they object to what they're building, or how it's implemented, too strenuously, they will be out of a job. Then not only do they have more immediate concerns about sustaining themselves, they have no weight in how things play out.


Meh. The supposed contradiction is that creating a source of power through physical means is somehow different from creating one through virtual means. It is not. A tool is, and always will be, a tool.


Looks like the media has chosen Sam Altman as the next Elon Musk.

This makes sense. He perfectly fits their cliche of a socially-awkward technologist, and he's trusting (foolish?) enough to make complex nuanced statements in public, which they can easily mine for out-of-context clickbait and vilification fodder.


When I compare the two, Elon was at least (lucky?) enough to have a string of vision-fueled ventures that became a thing. What is Sam's history of visions? Loopt? Is Y Combinator considered to be in a new golden era after he took over? Did Worldcoin make any sense at all?

I'm honestly hoping I'm entirely ignorant of his substance and would feel better if someone here can explain there's more to him than that… I would feel better knowing that what could be history's most disruptive tech is being led by someone with some vision for it, beyond the apocalypse that he described in 2016 that he tries not to think about too much:

"The other most popular scenarios would be A.I. that attacks us and nations fighting with nukes over scarce resources.” The Shypmates looked grave. “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.” https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...


I'm with you, listening to his interview with Ezra Klein gave me the impression that he doesn't actually think that deeply about the possible impact of AI. He says it worries him, but then he seems to wave those worries away with really simplistic solutions that don't seem very tenable.


What bothers me most is that the picture he paints of success itself is some handwavy crap about how it could "create value" or "solve problems" or some other type of abstract nonsense. He has displayed exactly 0 concrete, desirable vision of what succeeding with AI would look like.

That seems to be the curse of Silicon Valley, worshiping abstractions to the point of nonsense. He would probably say that with AGI, we can make people immortal, infinitely intelligent, and so on. These are just potentialities with, again, 0 concrete vision. What would we use that power for? Altman has no idea.

At least Musk has some amount of storytelling about making humanity multiplanetary you may or may not buy into. AI "visionaries" seem to have 0 narrative except rehashed, high-level summaries of sci-fi novels. Is that it?


I agree, listening to the podcast I think the answer is that “yes” that is it: faith in technological progress is the axiom and the conclusion. Joined by other key concepts like compound growth, the thinking isn’t deep and the rest is execution. Treatment of the concept of ‘a-self’ in the podcast was basically just nihilistic weak sauce.


AI is not an abstraction. It's rational to be hand-wavy about future value; value has already materialized. AI is basically an applied research project, and he should be more like a dean herding researchers; we should take him as that. In a previous era, that's what it would have been: a PhD from Berkeley in charge of some giant AT&T government-funded research lab. He'd be on TV in a suit and tie, and they'd be smoking and discussing abstract ideas.


The main question about OpenAI is this: could any better structure exist to create the singularity, which will happen anyway? (Some people don't like the word AGI, so I just define it as machines having vastly more intellectual power than humans.)

Would it be better if Google, Tesla, Microsoft, Apple, the CCP, or any other for-profit entity did it?


Are you really insinuating that Elon was simply “lucky” when it came to disrupting and transforming two gargantuan and highly complex industries at the same time?


I think my main point was more that despite what you (not you personally, anyone reading) think of Elon, at least he has this track record of visionary companies and Sam does not.

Personally, my take on Elon is something like this: he found a vacuum in the industry of smart engineers who want to work on something truly ambitious, the kind of people who feel most SV startups are bullshit. And as a sci-fi nerd he came in with money and pitched several ambitious, sci-fi project ideas and visions that attracted those engineers to make them happen. And I think he was rewarded for this. You could tally that as another vision of his that was onto something.


Well, you're definitely correct that one of his superpowers is attracting some of the best talent to work for him (at least that was the case when he started Tesla and SpaceX). But you're completely overlooking his ridiculous work ethic (100+ hours per week for years on end), plus his own elite engineering chops (he was the chief engineer at SpaceX). There are interviews with rocket engineers who worked for him stating that if you weren't on top of your game, Elon would call you out on it, even citing specific sections and page numbers of rocket science books on the fly, mid-conversation. It's not just money he brought to the table.


I'm not talking about the reality of Sam and Elon. I'm putting my ear to the ground and observing the way the media is (and will) portray them.

I wish that "actual reality" was all that mattered and not such low-knowledge "optics", but sadly we don't live in that world.


2014: https://www.ycombinator.com/blog/sam-altman-for-president/

2015: https://www.nytimes.com/2015/07/12/opinion/sunday/sam-altman...

2016: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...

..

That last link, Sam Altman's Manifest Destiny, is worth the read. However, the last time I posted that link HN went down for an hour right afterwards (of course, correlation is not causation :/).

https://news.ycombinator.com/item?id=35334023


What would make him "socially-awkward"?


Sam or Elon?

I'll assume you meant Sam. IMO Sam is mostly just shy and cerebral, but to many people that will come off as awkward and robotic.

Watch his recent Lex Fridman interview. Personally I thought it was great, but I'm aware enough to realize that (sadly) many low-knowledge people will judge such demeanor harshly.

Mark my words: the media will, 10 times out of 10, exploit that misconception, not correct it. "Ye knew my nature when you let me climb upon your back..."


I don't know, it seemed to me his responses on Lex were very measured and carefully restrained in a lot of places, calculated and vague in others. He doesn't come off as genuine to me at all.


It's interesting that in the article he was described as "hyper-social", "very funny", and having a "big personality" as a child. I guess those don't necessarily contradict awkward and robotic, but they also wouldn't come to my mind at the same time.


If you mean the one where the interviewer, Lex, was wearing a suit and Sam was in a hoodie, where Sam droned in a robotic monotone and often sat with crossed arms, staring downward . . . I think the knowledgeable people might also assume he's the next evil tech overlord. Or certainly distant and uncaring.

The only things missing were lighting from below and scenes of robots driving human slaves.


You'll get some clues when you watch his recent interview with Lex Fridman: https://youtu.be/L_Guz73e6fw


Never forget the time he wore sneakers to the Ritz.


No, the next Zuckerberg. The media (rightly) sees OpenAI as a competing medium.

Although he's much more prepared to face the next Greta (Yudkowsky).

He has to fix his vocal fry, however; it is annoying.


What specifically in that article was vilification of Sam or clickbait, or statements taken out of context?


In these early days of a smear campaign (even an unintentional one that's just about chasing clicks), the game is mostly about plausibly deniable innuendo.

The headline is a great start. Contradictions are bad. Altman has contradictions. Therefore Altman is bad. They don't say it, but they also know they don't need to. They lead the audience to water and trust that enough of them will drink.

The closing paragraph is another great example. It intentionally leaves the reader hanging on the question "so why did Altman do AI if there are moral downsides," without resolving the question by giving Altman's context when he said it.

Trust me or don't, but what you see here is just the beginning. In six months' time Altman will be (in the public's eye) evil incarnate.


They discussed the why earlier in the article, specifically a fear of AI being primarily developed in private labs outside of public view -- the partners feeling they could help bring an alternative approach.

I feel they left it on that point not as part of some grand conspiracy theory, but because the potential for this to be good or bad is a question taking place around the world right now.

Overall this piece feels positive towards Sam, despite what you feel is a negatively loaded headline. He's striking a delicate balance between profit and nonprofit, between something that could be harmful or helpful to society -- these things are in contradiction, and he's making those choices deliberately. This is an interesting subject for an article.

I find it deeply unlikely he will be viewed like Musk in 6 months. Musk is a fairly special case as he's unhinged and unstable more than evil. If someone wanted to paint Sam with an evil stick, Zuckerberg would be a more apt comparison -- playing with something dangerous that affects all of us.


I genuinely hope that you're right and I'm totally wrong, but my experience watching the media landscape says otherwise. It would seem I have less faith in our journalistic institutions than you.

The media operates on a "nearest cliché" algorithm, and the Mad/Evil Genius cliché is so emotionally appealing here that they'll find it irresistible. Even if it's not true, they'll make it true.

Don't say I didn't warn you. :)


This is a straight up puff piece and very weak journalism. ChatGPT might as well have written it.



None of archive.* are working for me (Cloudflare DNS issues). Is anyone else having access issues?

[Thanks to those who replied. Strangely, it stopped working for me yesterday [US]. Can you post the IP you see?

Cloudflare returns a 1001 error: "Ray ID: 7b128f151e4b0c90 • 2023-04-01 17:30:15 UTC" ]


Does not work here either. Cloudflare error page is all I get.


Works for me in Western Europe.


No issue here (Buenos Aires).


Works for me


Sidenote: I just suggested to archive.is that it would be great to have the capability to render it through services such as Pocket.


Sama <3

He stuck with the 99.9% set-money-on-fire gamble that is startups and navigated to the big time(tm).

Also, he helped lift Clerky, Triplebyte, YC, and many others pre-, during, and post-Loopt.

Not many people (maybe no one) "deserve" success, but Sama brings a healthy dose of goodwill wherever he goes.


Are you kidding? He had one startup that was more or less a flop, then for some reason was appointed to a high position at Y Combinator, got lucky allocating capital (plenty of idiots can and have gotten lucky, or were in the right place at the right time), and is now the CEO of OpenAI. This man is the definition of "it's not what you know, it's who you know," and that's not a good thing.


> got lucky allocating capital (plenty of idiots can and have gotten lucky or were at the right place at the right time)

Not saying you're wrong, but this feels like an unfalsifiable hypothesis. How can anyone ever be successful enough that their success could not possibly be explained by luck? Does Musk count? Buffett?


Altman befriended Paul Graham, and his life blossomed…


Even with the best ideas, execution, and teams, 99.99% of startups are flops. That's okay. They're assumed to be experiments.

There is no such thing as self-made. And there's nothing wrong with friends and networking, especially as some particular help or chance encounter could be pivotal to nudging onto something great.

It's the trying and the learning that are the gold you carry into trying again. Timing, honest perspective, persistence, and a measure of prepared luck seem to matter more. There is no magic formula. I wish success to all who want it.


Triplebyte and Loopt both ended up selling / monetizing user data in ways the users really didn’t like.


AI is not true AI, at least at the moment. ChatGPT is inherently biased by its developers, which means at least half the population may not trust its answers. For true AI he will have to give it autonomy, and I'm more interested in whether Altman is ready to live with an AI he cannot control.


Should we also talk about the contradictions of WSJ?

The only way to never contradict yourself is to never say anything.

Now, is AI the right area in which to "move fast and break things"? No.


Perhaps if more CEOs / controlling shareholders were criminally liable for damage caused by their products, like the Sackler family, they wouldn't be so gung-ho.


What a ridiculous take and a slippery slope.

Should car manufacturers be liable for drunk drivers? Should kitchen knife manufacturers be responsible for stabbings?

Your idea is great if you want your country to be left behind entirely in innovation.


Worth remembering that the slippery slope argument is not a valid argument at all.

As with standard slippery slope reasoning, you jump to the most extreme interpretation. Yet reality shows us that you cannot, in fact, boil a frog, because at some point it just gets too fucking hot.

Should car manufacturers be liable for drunk drivers? Maybe, if they include a space in their vehicle specifically to store and serve alcohol.

Should kitchen knife manufacturers be responsible for stabbings? No. But no reasonable person would ever suggest they should. I might remind you also that “reasonable standard” is a legal concept.


Without defining the reasonable standard, it remains a silly idea in this case.


Altman "hadn’t been to a grocery store in four or five years". He is so out of touch with real world, people needs and desires, and fantasies about the future world based on the assumption that most people want to be free "to pursue more creative work." I think most people don't actually dream about pursuing creative work. Being absolutely "free" of work doesn't make one more creative. Real problems and constraints force people to come up with creative solutions.


He's closer to the truth than you're letting on. People do want to be free from being forced to work to meet rent and buy food, and all the associated stress that comes with it.


Sure, some do, but this is not a universal truth. Here is what ChatGPT-4 has to say about the subject:

"As an AI language model, I don't have personal experiences or opinions, but I can provide a general analysis of the topic. People's preferences and attitudes toward work, financial security, and stress vary greatly based on their personal beliefs, values, and experiences.

Many people do express a desire for greater financial freedom and independence, as well as relief from the stress of meeting basic needs like paying rent and buying food. The notion of a Universal Basic Income (UBI) has gained popularity in recent years, which proposes providing a fixed sum of money to every citizen regardless of their employment status. Advocates of UBI argue that it could help address issues of poverty, income inequality, and stress related to meeting basic needs.

However, people's perspectives on work and financial stress are diverse, and not everyone shares the same view on this issue. Some people may find satisfaction and purpose in their work and might not want to be completely free from it. Others may be more focused on career advancement, personal growth, or contributing to their communities through work.

Ultimately, people's preferences regarding work, financial security, and stress are influenced by a wide range of factors, including their cultural backgrounds, socioeconomic status, personal values, and life experiences."



