The schadenfreude is very real right now. I have difficulty putting to words my level of antipathy towards Altman, and I hope to watch gleefully as this all blows up in his smug face.
Well, anyone who will flex their spine in every (im)possible position as required of them, just to get even more money and power.
I could understand that from someone with an empty stomach. But so many people doing it when their pockets are already overflowing is exactly the kind of rot that degrades an entire society.
We're all just seeing the results so much better now that they can't even be bothered to pretend they were ever more than this.
Later edit: The way this submission fell ~400 spots after just two hours despite having 1250 points and 550 comments, and had its comments flagged and shuffled around to different submissions as soon as they got too close to YC&Co, is a good mirror of how today's society works.
It's an addiction. There's no amount of money that will be enough, there's no amount of power that will be enough. They'll burn the world for another hit, and we know that because we've been watching them do it for 50 years now.
I've read a lot about Aaron's time at Reddit / Not A Bug. I sometimes think his fame exceeds his actual accomplishments. He was perceived as very hostile to his peers and subordinates.
Kind of a cliche, but aspire to be the best version of yourself every day. Learn from the successes and failures of others, but don't aspire to be anyone else, because eventually you'll be very disappointed.
Yeah, definitely not a statement on Aaron himself. More a statement on idolizing people. There will always be instances where they didn't live up to what people think of them as. I think Aaron was fine and a normal human being.
Aaron was not happy. Neither is Trump, or Musk. I don’t know if Bernie is happy, or AOC. Obama seems happy. Hillary doesn’t. Harris seems happy.
Striving for good isn’t gonna be fun all the time, but when choosing role models I like to factor in how happy they seem. I’d like to spend some time happy.
Try to imagine a society where people only did things that were rewarded.
Could such a society even exist?
Thought experiment: make a list of all the jobs, professions, and vocations that are not rewarded in the sense you mean,
and imagine they don't exist.
What would be left?
I don't need to imagine. Teachers almost everywhere around the globe have poor salaries. In my country, the university enrolment requirements to become a school teacher are lower than for almost every other field of study, which means the weakest students end up there.
And then they go on to teach our future in schools, working under high stress for low pay.
Same with medical school in many countries where healthcare is not privatized. Insane hours, huge responsibilities and poor pay for doctors and nurses in many countries.
Nowadays everyone wants to be an influencer or software developer.
Teachers, sure. But what about janitors & garbage collectors, paramedics, farm laborers, artists, librarians, musicians, case managers, religious/spiritual leaders?
Because only one person can be king, but everybody can participate and contribute. Also, there are too many things outside of just being "the best" that decide who gets to be king. Often that person is a terrible leader.
Upvoted not because I agree, but because I think it's a valid question that shouldn't be greyed out. My kid's dream job is youtube influencer. I don't like it, but can I blame them? It's money for nothing and the chicks for free.
A tragedy of our times: no one wants to be a firefighter, astronaut, or doctor. Influencers everywhere! Can you blame kids? Do you know any firefighters who earn a million dollars annually?
AaronSw exfiltrated data without authorization. You can argue the morality of that, but I think you could make the argument for OpenAI as well. I'm not opining on either, just pointing out the marked similarity here.
edit: It appears I'm wrong. Will someone correct me on what he did?
This is an argument, but isn't this where your scenario diverges completely? OpenAI's "means to an end" goes further than you state: not initial advancement, but control of and profit from AI.
Yes, they intended for control and profit, but it's looking like they can't keep it under control and ultimately its advancements will be available more broadly.
So, the argument goes that despite its intention, OpenAI has been one of the largest drivers of innovation in an emerging technology.
At that same link is an account of the unlawful activity. He was not authorized to access a restricted area, set up a sieve on the network, and collect the contents of JSTOR for outside distribution.
He wasn't authorised to access the wiring closet. There are many troubling things about the case, but it's fairly clear Aaron knew he was doing something he wasn't authorised to do.
> He wasn't authorised to access the wiring closet.
For which MIT can certainly have a) locked the door and b) trespassed him, but that's a very different issue than having authorization to access JSTOR.
I don’t think your links are evidence of a flip flop.
The first link is from mid-2016. The second link is from January 2025.
It is entirely reasonable for someone to genuinely change his or her views of a person over the course of 8.5 years. That is a substantial length of time in a person’s life.
To me a “flip-flop” is when one changes views on something in a very short amount of time.
This is quite honestly one of the major problems with our society right now. Once you take a public stance, you are not allowed to revisit and re-evaluate it. I think this is by and large driving most of the polarization in the country: "My view is right and I will not give an inch, lest I be seen as weak."
Most of the things affected are highly political situations, e.g. Trump's ideas or Biden's fitness, but we also seem to have thrown out things we used to consider cornerstones of liberal democracy, i.e. our ideas regarding free speech and censorship, where we claim censorship isn't happening because it's a private company doing it.
In 2016: Sam alluded to Trump's rise as not dissimilar to Hitler's. He said that Trump's ideas on how to fix things are so far off the mark that they are dangerous. He even quoted the famous: "The only thing necessary for the triumph of evil is for good men to do nothing."
In 2025: "I'm not going to agree with him on everything, but I think he will be incredible for the country"
This is quite obviously someone who is pandering for their own benefit.
IMO it probably is and Altman probably still (rightly) hates Trump. He's playing politics because he needs to. I don't really blame him for it, though his tweet certainly did make me wince.
That's the thing though, right? We all created this mess together. Like, yeah, why don't you (and the rest of us) blame him? We're all pretty warped and it's going to take collective rehab.
Super pretentious to quote MLK, but the man had stuff to say so here it is (on Inaction):
"He who passively accepts evil is as much involved in it as he who helps to perpetrate it"
"The ultimate tragedy is not the oppression and cruelty by the bad people but the silence over that by the good people"
It seems he was virtue signaling before. So it would be more accurate to blame him for having let himself become an ego-driven person in the past. Or, to put it nicely and to borrow the framing used for Brian Armstrong of Coinbase (who has also shown public support for Trump), a "mission-driven" person.
Yes, the first mistake was a business leader in tech taking a public political position. It was popular and accepted (if not expected) in the valley in 2016.
Doing that then (and banking the social and reputational proceeds) created the problem of dissonance now. If he'd just stayed neutral in public in 2016, he could do what he's doing now and we could assume he's just being a pragmatic business person lobbying the government to further his company's interests.
I think “progressive” is probably the safest position to take. It also works if you want to get involved in a different sort of politics later on. David Sacks had no problem doing that when he was no longer interested in being CEO of a large company.
The evidence indicates not taking a position is the optimal position.
I have a lot of respect for CEOs who just focus on being a good CEO. It's a hard enough job as is. I don't care about or want to know some CEO's personal position on politics, religion or sports teams. It's all a distraction from the job at hand. Same goes for actors, athletes and singers. They aren't qualified to have an opinion any more relevant than anyone else's, except on acting, athletics, singing - or CEO-ing.
Sadly, my perspective is in the minority. Which is why I think so many public figures keep making this mistake. The media, pundits and social sphere need them to keep making this mistake.
I guess I think they should study what a neutral position looks like, and avoid going beyond it as best as they can. I had in mind a "progressive" who avoids any hot button issues. Someone with a high profile will be asked about politics from time to time. I think Brian Chesky is a good example of acting like a progressive in a way that stays low profile, but maybe he doesn't really act like one. https://www.businessinsider.com/brian-chesky-airbnb-new-bree...
Also it helps to have sincere political views. GitHub's CEO at the time of #DropICE was too cynical and his image suffered because of it.
There are no neutral positions in today's political landscape. I'm not stating my opinion here; this is according to most political positions on the spectrum. You suggested "Progressive" (but without hot button issues) as a way of signaling a neutral position. That may be true in parts of the valley tech sphere, but it certainly doesn't hold in the rest of the U.S. "Progressive" is usually defined as being to the left of "Liberal", so it's hardly neutral. Over half of U.S. voters cast their ballot for the Republican candidate. Almost all those people interpret anyone identifying themselves as "Liberal" as definitely partisan (and negative, of course). Most of them see "Progressive" as even worse, slipping dangerously toward "Socialist". And the same holds true for the term "Conservative" on the other side of the spectrum, of course.
No, identifying as "Progressive" wouldn't distance you from political connotations and culture warring; it's leaping into the maelstrom yelling "Yippee-ki-yay!" You may want to update your priors regarding how the broad populace perceives political labels. With voters divided almost exactly in half on politics and culture war issues, and a large percentage on both sides having "Strong" or "Very Strong" feelings, stating any position will be seen as strongly negative by tens of millions of people. If you're a CEO (or actor, athlete, singer, etc.) who relies on appealing to a broad audience, then when it comes to publicly discussing politics (or religion), the downsides can be large and long-lasting while the upsides are small and fleeting. As was said in the movie "WarGames", the only winning move is not to play.
I especially like how he quoted Napoleon or something, framing himself as the heart of the revolution and DeepSeek as a child of the revolution, only to get a response from some random guy: "It's not that deep bro. Just release a better model."
I worked on something back then that had to interface with payment networks. All the payment networks had software for Windows to accomplish this that you could run under NT, while under Linux you had to implement your own connector -- which usually involved interacting with hideous old COBOL systems and/or XML and other abominations. In many cases you had to use dialup lines to talk to the banks. Again, software was available for Windows NT but not Linux.
Our solution was to run stuff to talk to banks on NT systems and everything else on Linux. Yes, those NT machines had banks of modems.
In the late 90s, using NT to talk to banks was not necessarily a terrible idea seen through the lens of the time. Linux was also far less mature back then, and we did not have today's embarrassment of riches when it comes to Linux management, clustering, and orchestration software.
If you're a tech leader and confuse Linux boxes for mainframes then I don't think it's hindsight that makes you look foolish. It's that you do not, in fact, understand what you're talking about or how to talk about it - which is your job as a tech leader.
Yeah Elon has gotten annoying (my god has he been insufferable lately) but his companies have done genuine good for the human race. It's really hard for me to think of any of the other recently made billionaires who have gotten rich off of something other than addicting devices, time-wasting social media and financial schemes.
"Donald Trump represents an unprecedented threat to America, and voting for Hillary is the best way to defend our country against it"
- Sam Altman - 2016
"If you elect a reality TV star as President, you can't be surprised when you get a reality TV show"
- Sam Altman - 2017
"When the future of the republic is at risk, the duty to the country and our values transcends the duty to your particular company and your stock price."
- Sam Altman - 2017
"I think I started that a little bit earlier than other people, but at this point I am in really good company"
- Sam Altman - 2017 ( On his criticism of Trump )
"Very few people realize just how much @reidhoffman did and spent to stop Trump from getting re-elected -- it seems reasonably likely to me that Trump would still be in office without his efforts. Thank you, Reid!"
As a society we might talk about virtue, but the reason we put it as a goal in stories is that in the real world, we don't reward it. It's not just that corruption wins sometimes, but we directly punish those that fight it. The mood of the times, if anything, comes from people realizing that what we called moral behavior leads to worse outcomes for the virtuous.
A community only espouses good values when it punishes bad behavior. How do we do this when those misbehaving are very rich, and attempting to punish the misbehavior has negative consequences on you? There just aren't many available tools that don't require significant sacrifices.
That is particularly gross, but that really feels like the norm among all the tech elite these days - Zuckerberg, Bezos, etc. all doing the most laughable flip flops.
The reason the flip flops are so laughable to me is because they attempt to couch them in some noble, moralistic viewpoint, instead of the obvious reason "We own big companies, the government has extreme power to make or break these companies, and everyone knows kissing up to Trump is what is required to be on his good side."
I think Tim Sweeney's (CEO of Epic Games) comment was spot on:
> After years of pretending to be Democrats, Big Tech leaders are now pretending to be Republicans, in hopes of currying favor with the new administration. Beware of the scummy monopoly campaign to vilify competition law as they rip off consumers and crush competitors.
This is exactly what OpenAI is trying to do with these allegations.
Those men and their companies are responsible for hundreds of thousands of jobs and a significant portion of the global economy. I'm actually thankful that they aren't shooting their mouths off to the new boss like spoiled children at their first job. It wouldn't make the world better, it would make their companies and the lives of those who depend on them, worse.
There is a fine line between cowardice and common sense.
In what sense is the federal government "the boss" of private sector businesses? This isn't an oligarchy yet, right? They don't have to behave obsequiously, they are choosing to. They're doing it for themselves, not for their shareholders or their employees. It's an attempt to grab power and become oligarchs because they see in this government a gullible mark.
The richest man in the world has a government office down the street from the white house, which the taxpayers are funding. He's rumored to sleep there.
Puhleeeese. I'm not advocating that these leaders all lead protest marches against the new administration. But the transparent obsequiousness and Trump ball gargling under the guise of some moralistic principles is so nauseating. And please spare me the idea that the likes of Zuckerberg or Bezos gives a rat's ass about their employees.
For a contrast to the Bezos, Zuckerberg, and Altman types, look at Tim Cook. Sure, Apple paid the $1 million inauguration "donation", and Cook was at the inauguration, and I'm not arguing he's winning any "Profiles in Courage" awards. But he didn't come out with lots of tweets claiming how massuh Trump is so wise and awesome, Apple didn't do a 180 on their previous policies, etc.
Although I dislike him now glazing Trump, I understand why he's doing it. Trump runs a racket and this is part of the game.
One of my most contrarian positions is I still like and support Altman, despite most of the internet now hating him almost as much as they (justifiably) hate Elon. Was a fan of Sam pre-YC presidency and still am now.
For me, it’s the technical results. Same as for Musk.
Tesla accelerated us forward into the electric car age. SpaceX revolutionized launches.
OpenAI added some real startup oomph to the AI arms race which was dominated by megacorps with entrenched products that they would have disrupted only slowly.
So these guys are doing useful things, however you feel about their other conduct. Personally I find the gross political flip-flops hard to stomach.
Why would you support someone you said was part of a racket in the sentence before? We're talking about real life, where actions have consequences, not a TV show where we're expected to identify with Tony Soprano.
Yeah I don't know, Altman is a sociopath who is now trying to get intertwined with local governments (SF) as well as the federal government. He's going to do a lot of weaseling to get what he wants: laws that forcibly make OpenAI a monopoly.
Society will always have crazy sociopaths destroying things for their own gain, and now it's Altman's turn.
I don’t care for Sam Altman and his general untrustworthy behavior. But DeepSeek is perhaps more untrustworthy. Models from American companies at least aren’t surprising us with government driven misinformation, and even though safety can also be censorship, the companies that make these models at least openly talk about their safety programs. DeepSeek is implementing a censorship and propaganda program without admitting it at all, and once they become good at doing it in less obvious ways, it can become very damaging and corrupt the political process of other societies, because users will trust the tools they use are neutral.
I think DeepSeek’s strategy to announce a misleading low cost (just the final training run that optimizes a base model that in turn is possibly based on OpenAI) is also purposeful. After all, High Flyer, the parent company of DeepSeek, is a hedge fund - and I bet they took out big short positions on Nvidia before their recent announcements. The Chinese government, of course, benefits from a misleading number being announced broadly, causing doubt among investors who would otherwise continue to prop up American technology startups. Not to mention the big fall in American markets as a result.
I do think there’s also a big difference between scraping the Internet for training data, which might just be fair use, and training off other LLMs or obtaining their assets in some other way. The latter feels like the kind of copying and industrial espionage that used to get China ridiculed in the 2000s and 2010s. Note that DeepSeek has never detailed their training data, even at a high level. This is true even in their previous papers, where they were very vague about the pre training process, which feels suspicious.
> Models from American companies at least aren’t surprising us with government driven misinformation, and even though safety can also be censorship
Being a citizen of a western nation, I'm inclined to agree with the general sentiment here, but how can you definitively say this? Neither you nor I know with any certainty what interference the US government has run on domestic LLMs, or what lies it has fabricated and cultivated that are now part of those LLMs' collective knowledge. We can see the perceived censorship with DeepSeek more clearly, but that isn't evidence that we're in any safer territory.
> There are loads of examples on the internet of LLMs pushing (foreign) government narratives e.g. on Israel-Palestine
There isn’t even a single example of that. If an LLM is taking a certain position because it has learned from articles on that topic, that’s different from it being manipulated on purpose to answer differently on that topic. You’re confusing an LLM simply reflecting the complexity out there in the world on some topics (showing up in training data), with government forced censorship and propaganda in DeepSeek.
Fine, whatever. It's actually much more concerning if the overall information landscape has been so curated by censors that a naively-trained LLM comes "pre-censored", as you are asserting. This issue is so "complex" when it comes to one side, and "morally clear" when it comes to the other. Classic doublespeak.
That's far more dystopian than a post-hoc "guardrailed" model (that you can run locally without guardrails).
> I don’t care for Sam Altman and his general untrustworthy behavior. But DeepSeek is perhaps more untrustworthy. Models from American companies at least aren’t surprising us with government driven misinformation, and even though safety can also be censorship, the companies that make these models at least openly talk about their safety programs. DeepSeek is implementing a censorship and propaganda program without admitting it at all, and once they become good at doing it in less obvious ways, it can become very damaging and corrupt the political process of other societies, because users will trust the tools they use are neutral.
These arguments always remind me of the arguments against Huawei because they _might_ be spying on western countries. On the other hand we had the US government working hand in hand with US corporations in proven spying operations against western allies for political and economic gain. So why should we choose an American supplier over a Chinese one?
> I think DeepSeek’s strategy to announce a misleading low cost (just the final training run that optimizes a base model that in turn is possibly based on OpenAI) is also purposeful. After all, High Flyer, the parent company of DeepSeek, is a hedge fund - and I bet they took out big short positions on Nvidia before their recent announcements. The Chinese government, of course, benefits from a misleading number being announced broadly, causing doubt among investors who would otherwise continue to prop up American technology startups. Not to mention the big fall in American markets as a result.
Why should I care about the stock value of US corporations?
> I do think there’s also a big difference between scraping the Internet for training data, which might just be fair use, and training off other LLMs or obtaining their assets in some other way.
So if training on copyrighted work scraped off the Internet is fair use, how would training on the output of other LLMs not be fair use as well? You can't have it both ways.
> Models from American companies at least aren’t surprising us with government driven misinformation
Is corporate misinformation so much better? Their recall of Tiananmen Square might be more honest, but if LLMs had been available over the past 50 years, I would expect many popular models would have cheerfully told us company towns are a great place to live, cigarettes are healthy, industrial pollution has no impact on your health, and anthropogenic climate change isn't real.
Especially after the recent behaviour of Meta, Twitter, and Amazon in open support of Trump and Republican interests, I'll be shocked if we don't start seeing that reflected in their LLMs over the next few years.