> The movement recently has been people trying to sell something less than this as an AGI.
Selling something that does not yet exist is an essential part of capitalism, which - according to the main thesis of philosophical Accelerationism - is (teleologically) identical to AI. [0] It's sometimes referred to as Hyperstition, i.e. fictions that make themselves real.
> unless those AIs go full Skynet and only one ends up standing at the end. (This last part is a joke, hopefully).
But is it a joke? One could argue that Skynet follows from Omohundro's Basic AI Drives: It's self-protection technologically extended backward in time.
One interesting possibility is that this is not something humans need to work on at all - because, as Nick Land famously put it, "Tomorrow can take care of itself." [0]
Vernor Vinge has argued that far-future SF makes no sense because of the "wall across the future" that The Coming Technological Singularity will create. [0]
If you're open to Theory Fiction, you can read Nick Land. Even his early 1990s texts still feel futuristic. I think his views on the autonomization of AI, capital, and robots - and their convergence - are very interesting. [1]
Land has always argued, following Omohundro, against orthogonality and thus against paperclip maximizers - but for him the idea of a singular goal still holds: that goal is intelligence optimization.
You're absolutely right - Land's vision of technocapital is fundamentally anti-human, while Marc Andreessen's is pro-human. Mark Fisher criticized Land for underestimating the importance of the human face in keeping capitalism functional. However, I'd argue that for Land, camouflage has always been central to pre-singularity capitalism. A human face can serve as useful cover, helping technocapital advance toward its ultimately non-human ends.
Millenarian philosophers whose arguments rest on this kind of technological singularity/rapture event, especially one fueled by technocapital development, have to contend with the fact that historically capital has stepped away from the fray multiple times. And it's in capital's interest to do so, since it wants to preserve class relations: a bolstered welfare state and worker protections have been passed periodically during times of upheaval, whenever there's been backlash. This is how we ended up at "The End of History", and though that settlement seems to be unraveling, there's no reason to believe the politics of today aren't just another temporary push by capital to reorganize the world, one that will also meet a backlash. This is where I take issue with orthodox Marxism, because ultimately capital does not want to destroy class relations. Will that still hold in the future? I'm skeptical of anyone saying it won't or can't, but I ultimately have no interest in this kind of prediction-market philosophy of historical materialism.
I have also always respected Land's intellectual honesty. I think the closest primary source to your point about the cybernetic OODA loop and competition is Land's text Against Orthogonality:
"Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever. This means that Intelligence Optimization, alone, attains cybernetic consistency, or closure, and that it will necessarily be strongly selected for in any competitive environment." [0]
> "Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever
This is bunk: it only works when a single variable controls who wins and there are no diminishing returns. It sounds as naive as "Any athlete that uses X to improve themselves will outcompete one that directs themselves towards any other goals whatsoever."
Land's point is that there is an underlying reality. Systems that make use of that reality most effectively are those that will propagate and dominate it. Landian intelligence isn't about scoring high on the SAT (which, obviously, won't make someone a star basketball player), but about how a system reacts to reality in order to propagate itself. Almost tautologically, systems that make better use of reality outcompete those that make worse use of it.
I said "X" instead of intelligence, strength, mass, reaction time, precise control, spatial awareness, or any other single characteristic because the best athletes have to be great on multiple dimensions - not just one. The same goes for "intelligence", unless it's used by Land as a catch-all phrase for multiple attributes, and if so, the statement becomes pointlessly vague, and papers over the fact that some of these attributes have physical limits and can't be changed by the self-improving intelligence, this limits are present in any medium e.g. latency, bandwidth, signal attenuation
I'm quite a fan of that piece as well. I don't think I agree with it exactly as stated - the claim feels like it could be usefully weakened - but it's crisp, so I like it the same way I like Nietzsche.
Yes, it's my project - thanks for the feedback! Tracking down the source material has been a real challenge since it's so scattered and often offline, but I'm hoping it makes things easier for anyone wanting to dive deep into the primary sources behind Land's main thesis.
The important thing here is the hyphenlessness of the word technocapital. While Marc Andreessen writes techno-capital, with a hyphen, in his Techno-Optimist Manifesto, Land's hyphenless technocapital points to his main thesis that capitalism and AI are (teleologically) identical. [0]
[0] https://retrochronic.com