Interesting that Worldcoin[1,2] isn't mentioned (at least in the non-paywalled portion). It's another example of world-spanning ambition masquerading as egalitarianism while riding the latest tech trend.
I guess it's because OpenAI is getting traction as a powerful company whereas Worldcoin and Helion (fusion) are not really. But he likes his grand projects.
You can't bring in Nietzsche's view of ambition and treat it as truth without even bringing in more contemporary criticisms of it. Even if you'd like to see it through Virtue Ethics, ambition should be considered one virtue within a diverse set of virtues; it shouldn't trump other moral values.
As entertaining as it was to read "On The Genealogy of Morality" when I was 20 years old, I'd say it's rather... reductionist, at least.
Sloterdijk's updates on it even bring up how to consider the "will to power" not as individual ambition but as a holistic way to actualise oneself and give something back to the world, not as power over others, which is the main way corporations are run.
Pursuing your ambitions at the expense of the rest of the globe is not moral.
Please update your philosophical repertoire, you're stuck in the 1800s.
I'm not going to engage further because you're coming from a condescending angle that doesn't invite discussion, but I can't help but laugh at a virtue ethicist telling me I'm stuck in the philosophical past. Something something glass houses and stones.
I was commenting on the notion that ambition is in some way prima facie bad.
In response, I was told that my understanding of philosophy is like that of a 20-year-old and that I am intellectually stuck in the 1800s (in a comment comically premised on the moral philosophy of the ancient Greeks).
I do not see my comment as a put-down at all and it certainly is not anything like that response. Nonetheless, I will take your feedback into account - clearly it came across to multiple people as condescending.
> In response, I was told that my understanding of philosophy is like that of a 20-year-old and that I am intellectually stuck in the 1800s (in a comment comically premised on the moral philosophy of the ancient Greeks).
Please tell me how Sloterdijk is premised on Plato or Socrates. It's a continuation of thought from Nietzsche's approach to the "will to power" in a contemporary view; if Sloterdijk is invalid, so is your use of Nietzsche as a foundation.
It's a condescending angle because you were condescending first, even more so when attempting to argue for ambition based on very fossilised 1800s philosophy. I like Nietzsche, but I wouldn't use Nietzsche as the sole basis of an argument about morality, since there are far more up-to-date thinkers considering our moral standards in the contemporary world.
Withdrawing from the discussion doesn't take that away. You were being reductionist in your argumentation; I just pointed out that you were stuck in the 1800s.
Since then I've read your profile and seen that you are actually a consequentialist. I don't want anything to do with Effective Altruism true believers, so thanks for not engaging further.
I for one don't understand what you're saying, but I get the feeling it's in defense of Altman. I'd like to point out that the key here is the world-spanning ambition, which to me reads like unhealthy megalomania.
> His boundless ambition is putting AI, and the world, on a dangerous path.
Has the narrative really become that Sam Altman is single-handedly responsible for the massive leap in AI?
I really don't think that Altman is even close to being in the top 5 worrying techbros. At best, he enabled OpenAI researchers to get their work done 1-5 years sooner.
This stuff was going to happen with or without Altman.
The advancements to come will happen with or without Altman.
The article doesn't even attempt to explain why "accelerationism" is a dangerous path. I'm not sure if I share that assumption or not as I don't even know what exactly the author thinks it is.
Can we petition for HN to block paywalled and ad-riddled content? As it is there's an incentive to spam paywalled content around social media websites to earn yourself more money.
Additional point: we can't have a balanced community discussion if only a small percentage of viewers have access to the full article. This is a community website, so we should favor content that is openly shareable.
Re paywalls: if there's a workaround, it's ok. Users usually post workarounds in the thread. This is in the FAQ at https://news.ycombinator.com/newsfaq.html and there's more explanation here:
Well Mr. Dang, we've got a new edge case for you lol: An article with substantive content before the paywall, but that doesn't have any workarounds for the rest of the article. I would love to be proven wrong, but AFAICT we'd need a subscriber to this particular person's Substack to host a version for us.
> Additional point: we can't have a balanced community discussion if only a small percentage of viewers have access to the full article
It's pretty common on non-paywalled articles for only a small percentage to read the article, yet we often manage to have balanced discussions of those.
The rest of the article is behind a paywall. I'm still on the fence about Sam Altman, and as someone who lives in Poland rather than Silicon Valley, it's difficult to form a solid judgment about him. It's much easier for me to see through Elon Musk's facade in his interviews, where he only plays at being an expert. Altman has some mystique surrounding him, though I think his frequent appearances on podcasts are ruining it. I sometimes wonder how he finds time to appear even on minor podcasts.
Amazing article, thanks for posting! I don't know this author, but they've definitely got a solid understanding of the relevant facts, IMO/AFAICT. That said:
Altman could now get equity in OpenAI—around $10 billion worth
He claimed to employees last week that he won't be following through on this. See: https://www.cnbc.com/2024/09/26/openais-sam-altman-tells-emp... Do I believe he won't pull a "whoops, who knows" or an "it's not a giant equity stake, just a big one"? Meh. But it's at least in doubt now.
What’s scary about him isn’t that he’s good at getting rich (he’s a billionaire even without any OpenAI equity)
This surprised me when I first learned it, but apparently it's true. Wikipedia has this (uncited!!) language on the topic: "Sam Altman has recently expanded his investment portfolio to include stakes in over 400 companies, valued at around $2.8 billion. Some of these investments intersect with companies doing business with OpenAI, which has raised questions about potential conflicts of interest, though Altman and OpenAI maintain that these are managed transparently."
Though Altman (wisely) wouldn’t use this term for it, I’d say it boils down to accelerationism
Eh, that term has a lot of loaded meaning among academic circles (or just hacker / e/acc ones...) that I don't think Altman openly subscribes to -- especially if you include its founder Nick Land, who's now a "Hyper-fascist" with some apparent brain damage. Long story short, it involves burning down the current system, not just building a new one. See this amazing Guardian article: https://www.theguardian.com/world/2017/may/11/accelerationis...
I'd call Altman simply... arrogant. I don't think he subscribes to any academic trend, simply because he doesn't seem interested in reading any academia. Case in point is his recent decision to try to be the one to name the new era of human development, a task for which he chose "Intelligence Age" (https://ia.samaltman.com/); that's some serious confidence, at the very least.
IMO he is a normal MBA-type who's been caught up in something that feels world-changing, and he's at the point where any amount of deceit or malice is worth it to keep his influence over that. In this way, I see him as a much more well-spoken Elon Musk; they both are true believers in the power of AGI, and their defining purpose is to be credited with the benefits it'll bring about.
As I said in an old post on Altman: made-in-house bias is strongest when the house is your own skull.
[ETA in response to a comment below, b/c deleting a long paragraph feels like abandoning a project!]:
> Eh, that term has a lot of loaded meaning among academic circles (or just hacker / e/acc ones...) that I don't think Altman openly subscribes to -- especially if you include its founder Nick Land, who's now a "Hyper-fascist" with some apparent brain damage. Long story short, it involves burning down the current system, not just building a new one. See this amazing Guardian article: https://www.theguardian.com/world/2017/may/11/accelerationis...
> I'd call Altman simply... arrogant. I don't think he subscribes to any academic trend, simply because he doesn't seem interested in reading any academia. Case in point is his recent decision to try to be the one to name the new era of human development, a task for which he chose "Intelligence Age" (https://ia.samaltman.com/); that's some serious confidence, at the very least.
I think you would be shocked at how much these philosophical trends actually are part of the discussion in these SF circles.
Sam Altman is also absolutely not a normal MBA-type; I've met many of those.
I'm not sure I can even blame Altman here. I think it's quite easy to move into a mode where all forward 'progress' is perceived as unequivocally good. It is so difficult to move the ball, and once the ball starts rolling, it's absolutely exhilarating. He's the person on the crest of the wave at the moment, and if it weren't Sam, someone else would be there. Is he a moral man? Can he dare to pause and ask whether these features should exist? I doubt it. Perhaps his request for regulation was in earnest: he may know that only governments can slow down a market that is cresting, and only governments can stop and ask questions. Since I believe in government (I know that many do not!) I think we must immediately create a Department of AI and a President's commission. This shit is about to become very real.
I think focusing on Nick Land's provocations rather than confronting the predictive validity of Accelerationism, which the Guardian article highlights, is just a nice coping strategy for people who are low on decoupling.
I'm doing a primary literature review of Land's core thesis that AI and capitalism are teleologically identical at https://retrochronic.com/ and I hope it will show that there is at least some substance to his work, such as his perspective on AI in the context of capital autonomization.
This guy writing longform articles and publishing books doesn't know what he's talking about. Take it from me, a random internet commenter with no discernible credentials.
Not all of that writing is relevant or useful in analysis. In particular, writing an article that contains a worry that people are "hating on the wrong part of Altman" is a bit of a giveaway that the author lacks the ability to do a deep technical analysis of the problem and has instead decided to focus on personality.
Altman isn't a deep technical problem to be solved.
Flood management is a deep technical problem.
Energy policy is a deep technical problem.
Altman is a dude who's wildly rich, in charge of a darling company, and having the same smoke blown up his arse as the other Silicon Valley darlings. Like Musk, he's deeply predictable.
What he does next depends on how much money he thinks he can burn in the next year.
He will lobby for an extension of fair use in copyright (or similar).
He will lobby against any kind of data protection law (that would fuck up the training pipeline).
> Altman isn't a deep technical problem to be solved.
He's running a company that has created one.
> What he does next depends on how much money he thinks he can burn in the next year.
Which is why personality analysis is entirely the wrong tool here. It offers you absolutely zero predictions. Why you would double down on this is beyond me.
> Which is why personality analysis is entirely the wrong tool here
I mean, it's not really. What is his attitude to risk? How well does he understand people's motivations? What narrative is he selling his investors? What is he aiming for? What's his personal goal?
All of these shape the outcome of the company, and they depend on his personality.
In the same way, Musk being short-sighted and thin-skinned means that Twitter is the way it is. Zuck is happy to burn billions so long as the research looks promising. Bezos is all about market share.
To remark: none. To be taken seriously: probably some, otherwise the noise of millions of bloggers is too much to handle. Which credentials, I'm not sure; hence my question.
FWIW, the author has a university education and is currently a Visiting Professor of Science and Religion at Union Theological Seminary, New York. I personally don't find it relevant to the topic, but I was curious to see what others thought.
This is not an ad hominem attack. It's fair to ask about an author's credentials when they write a public article. (I do believe he lacks the credentials, but happy to hear a counter-argument.)
The author's credentials might help us understand their motivation for writing something, or help us understand any implicit bias they carry, but in no way do they indicate the quality of the argument itself.
Which parts of the credentials do we consider relevant to rate expertise on the topic at hand?
Seriously though, I think the author is overworried about accelerationism but correctly rates sama's power-grab inclinations. What are your thoughts on TFA rather than the author?
The criticism of argument from authority is really only valid for logic-based arguments (which only exist in math). In nearly every argument online, the topic is not logic-based, but more a combination of fuzzy reason and rhetoric. In such situations, we normally allow for some prior belief that people who obtained an education and have worked in an area have some level of expertise that makes their arguments carry more weight. While you can argue whether this makes sense, it certainly seems reasonable to me, although I still apply skepticism to expert opinions.
I think you need to make insightful points for that. Having credentials is not a requirement for having insight, although they are sometimes correlated.
Nope. In order to be upvoted by an audience of highly technical readers, I would hope that all one needs is to make a strong argument, or provide something interesting, or otherwise of value. Credentials can be a shortcut/filter for finding something of value. Someone may reasonably choose not to spend time reading something from someone with no credentials, under the assumption that most of everything is garbage. But, after having chosen to read it, the credentials no longer have any bearing. It's either good or it's not.
This is the new norm on HN. Gang-flagging is rampant, but the mods are OK with it most of the time. Altman is a Y Combinator darling, so of course this post won't stay up long.
I miss the era when the chattering class didn't know anything about tech. So nice not to be the subject of think-pieces and the ire of NYT readers.
The funniest bit is that the previous villains -- financiers and oil barons, etc. -- haven't gone away. I don't understand why the high-powered critics have completely pivoted to hating on tech. Say what you will about Google, OpenAI, etc., but we're not funding mass killings in the Niger River delta or foreclosing on people's homes.
> I miss the pre-covid era when the chattering class didn't know anything about tech. So nice not to be the subject of think-pieces and the ire of NYT readers.
The "chattering classes" in general and NYT in particular have been writing think pieces about tech for decades [0] [1] [2] [3]. So I'm not sure what you're talking about and what COVID has to do with it.
Agree to disagree, then. I think the direction of the trend has been pretty clear and I'm an avid reader of the news, but it is a difficult thing to quantify in the aggregate.
The point about the historical villains all still being there, still being awful to everyone, is so apt. But a couple of things. Tech defines such an overwhelming share of the market that it's an impossible-to-disregard giant. And the world has changed so much owing to tech.
But I don't think tech has had a positive human narrative in almost a decade. The nouveau riche arose as semi-gentle intermediaries originally, empowering & connecting the small, and every single play we see is to intermediate, not serve.
So where is the contemporary good to report on, where is the modern earned goodwill?
Instead what is emitted is double trash. It's all inscrutable/doesn't serve a clear need, to boot. And it's far more owned and intermediated than ever... Bitcoin/cryptocurrencies, VR, and now AI. None of them showing the genuine enrichment, the kind humans could really tangle with, that tech had brought us.
Neutrality is not enough, only mildly bad news is not enough. We need to make things that fill needs, and that expand imagination/open horizons, in ways where we ourselves can be part of the narrative.
But yes please let's also cover the genuinely detestable parts of the world as such. Let all be accountable.
I am a serious sucker for science fiction, but I find the alleged inevitability of glorified chatbots dominating humans to be quite a colorful thought experiment.
That said, some part of it is true. There will be a stupid "AI" that reduces your credit score to some value. There will be no recourse and no human you can ask how the score was calculated; you will have no way to correct false information. Perhaps it is just because you are ugly, verified by the ugly algorithm. Perhaps you aren't visible enough on social media.
Here you are indeed a slave to the machine, but the news would perhaps just be that this is already the case anyway.
At no point did your original post say anything about the politicization of tech; you literally were upset about the "chattering class" (definitely apolitical, unloaded term there) simply knowing about tech.
Everything with power is political. Tech is extremely powerful, and even if it is not directly inflicting harm, it does in reality inflict harm, if not by intent then by carelessness or omission.
Tech hasn't pursued wars like the oil or food industry, but it has enabled hybrid warfare by omission; it hasn't killed people directly, but it has created even more leverage for elites to dehumanise processes, e.g. customer support, setting rental prices, showing ads, etc.
There's no way for tech not to become politicised; thinking otherwise is wishful thinking, and wishing it away doesn't change reality. The further tech embeds into our lives, the more power it has and the more political it becomes.
> Tech hasn't pursued wars like the oil or food industry, but it has enabled hybrid warfare by omission
By making everything equivalent linguistically, we lose the language to condemn terrible and awful things.
I absolutely reject the notion that anything tech has done is comparable in magnitude to the actual wars and actual mass killings funded by the oil industry.
Your rejoinder is replacing some customer support roles?
> Your rejoinder is replacing some customer support roles?
No, it was an e.g.; it's right there in the post.
> By making everything equivalent linguistically, we lose the language to condemn terrible and awful things.
Completely agree. At the same time, the harm created isn't inconsequential; do you mind if we come to a term which we can use?
And by the way, you attacked tangential points of my argument, not its substance. Power is politics; tech has power, so it's inherently politicised. Please attack this argument, not the tangents.
> And by the way, you attacked tangential points of my argument, not its substance. Power is politics; tech has power, so it's inherently politicised. Please attack this argument, not the tangents.
That part is a relatively compelling argument and one that I am partial to. I think that politics is an overloaded term nowadays and many, quite reasonably, consider the personal to be political and power to always be political. In these cases, I again have the same worry about making everything equivalent linguistically - I fear we lose the usefulness of the word 'politics'.
But acceding to your definition of politics, I would change my original comment to be: I do not like the direction in which tech is being politicized. I think there are larger-scale power arrangements that do not get nearly enough attention, and I worry that the current discussion is more 'how can we tear down those who are succeeding' and less 'how can we better share in the bounty that is being created'.
I don't agree with the premise that people using your platform to post hate or incite violence makes you an 'active part' in genocide in an even remotely similar fashion to Shell actively funding the militant groups engaging in genocide.
It is more akin to a paper company being blamed for the writings on it or a phone company being blamed because the orders to invade were sent over text message.
Very interesting attempt to gain causality through internet blackout, but I'm not really convinced by a study that provides no evidence that they pre-registered their analysis.
The fact that Facebook's algorithms intentionally promote hostilities as a side effect of their mission to ever increase engagement doesn't exactly make Meta passive like a phone service.
It’s even more damning for Meta when you consider just how insidious their tactics are to get people hooked on their platform.
They might not be selling the guns nor pulling the triggers, but they’re not exactly innocent either.
That is still not the fault of Facebook. It is the fault of the people writing these comments and those that respond to them and want to kill other people. That they now have a common platform of communication is a fact everyone needs to adapt to.
You might level the criticism against the advertising industry. This is a legislative issue.
To accuse Facebook of helping a genocide is just distracting from the guilt of those that attempt genocide.
You're looking at things too binary. It's not an "either / or" but instead a spectrum of guilt.
Your argument is akin to saying someone who stole a chocolate bar isn't a criminal because they weren't carrying a knife. Or someone else who stole a wallet at knife point isn't a criminal because they didn't kill their victim. Or that a murderer isn't a criminal because they didn't commit genocide.
There's always going to be instances where some bad things are worse than other bad things. But that doesn't mean that the less-worse bad things aren't also themselves bad.
Meta built their platform to be addictive, and one of the negative consequences of constantly pursuing "engagement" is that it breeds negative interactions. And this was a very intentional move on Meta's part. So they're not innocent. They're just not literal murderers.
It is not at all binary, it is about criminal responsibility.
On the contrary, I think the one who stole the chocolate bar is guilty, not the shop owner who lacked a "no knife policy" and didn't protect its sweets enough. Or, more fittingly, advertised them too much. The shop owner is just innocent in your example, and the blame lies solely on the one who stole. Same with the people committing genocide. The meme that Facebook was a part here is just faulty reasoning.
The "stochastic terrorism" crowd comes to mind, who seem to create a new olympic discipline of reaching. They too like to accuse platforms that were used to communicate. That is just distracting from those that are responsible for the crimes at hand. If there hadn't been a Facebook, they would have used Twitter or any other social media platform.
Advertising is manipulative, but Facebook isn't enabling me to commit crimes. These issues need a clear separation. Otherwise any statement would be too dangerous if you generalize your concept of responsibility. That is not a healthy road to go down.
> On the contrary, I think the one who stole the chocolate bar is guilty, not the shop owner who lacked a "no knife policy" and didn't protect its sweets enough
You're moving the goal posts by talking about the shop owners when I'm talking about how a spectrum of "bad things" doesn't mean one guilty party makes another guilty party innocent.
There's other issues with your shop analogy, but I'll cover that further on.
> Same with the people committing genocide. The meme that Facebook was a part here is just faulty reasoning.
I never said Facebook took part in genocide. That was a different commenter. I said Meta aren't an entirely innocent party in the same way that people talk about phone services.
Once again you're looking at things too binary when what I'm making is more of a nuanced point.
> That is just distracting from those that are responsible for the crimes at hand.
Some people, like myself, can say there are plenty of people to blame and not be distracted by it.
To say "this bad thing is a distraction from this less bad, but slightly unrelated, bad thing" is exactly why I claimed you were looking at things too binary.
> Advertising is manipulative, but Facebook isn't enabling me to commit crimes. These issues need a clear separation. Otherwise any statement would be too dangerous if you generalize your concept of responsibility. That is not a healthy road to go down.
Another really binary take. If you cannot have a conversation about enablement for fear of a theoretical eventual end conclusion, then it demonstrates a complete inability to understand that, as with most grey areas, you can draw a proverbial line in the sand before you reach that theoretical worst-case conclusion. If you cannot, then you're looking at things too binary.
A better way to frame the question is this:
Are Facebook's algorithms passive or not?
A phone service is passive because it doesn't recommend content. Facebook's algorithms are not passive because they do recommend content.
So the next question is whether those algorithms create harm, and if so, whether Facebook are aware of that. Sadly the answers to both of those are "yes". Sure, Meta's algorithms aren't always harmful, and even when they are, it's usually only slightly harmful. But they're never completely beneficial for the consumer.
This doesn't mean Facebook are complicit in genocide but it does mean Facebook are not innocent service providers like a phone service.
So let's frame your shopkeeper examples differently: is a shopkeeper allowed to sell alcohol to children or to people who are already super drunk? No, they're not. In most territories they have a legal obligation to limit who is entitled to purchase alcohol.
The problem with your shop analogy is that people are consumers. We don't steal from Facebook; we consume their product for free because we are also their product. So you cannot compare Facebook to stealing. But you can compare Facebook to the consumption of safe vs potentially dangerous substances.
With regards to Facebook: sometimes that product is mostly harmless (like chocolate). Sometimes it's harmful to the wrong audiences (like alcohol). And Facebook knowingly serves, and even promotes, harmful products to the wrong audiences.
So Meta are not innocent. They might not be monsters like those who commit genocide, but that doesn't mean we can view Meta as being innocent for fear of being distracted by other, unrelated, monstrous things. The world isn't black and white like that. It's perfectly fine to say more than one party is doing bad things, of different severities and in different ways.
I don't think I moved the goalposts when I picked up from your example.
I simply do not agree that Facebook can sensibly be held responsible here. No, the misdeeds lie with those that use the platform for their personal quarrels.
A recommendation algorithm does not make you a partner in crime. If we talk about guilt, we need to talk about the advertising industry as a whole that doesn't only include Facebook.
Your argument is analogous to the claim that rock music makes kids more violent without there being a direct causal link. Without that, it remains speculation, and even statements that they are "a bit guilty" have to be rejected.
That the advertising industry as a whole is detrimental is likely true, but then 'Facebook taking part in a genocide' should be a non-starter.
> I never said Facebook took part in genocide. That was a different commenter. I said Meta aren't an entirely innocent party
That is pure semantics. No, they are not a guilty party. And if they were really at fault recently, it was for removing too much content, which they correctly self-identified as a problem. So they have at least that.
It's worth keeping in mind that Altman likely neither wants nor needs the equity in OpenAI, but investors are forcing him to take some to align incentives.[1][2][3]
[1] https://worldcoin.org
[2] https://en.wikipedia.org/wiki/Worldcoin