> Because we invented the Torment Nexus as a cautionary tale and they took it at face value and decided to implement it for real. ...
> Did you ever wonder why the 21st century feels like we're living in a bad cyberpunk novel from the 1980s?
> It's because these guys read those cyberpunk novels and mistook a dystopia for a road map.
That is so true. I actually met someone who seriously wished to do what he could to make reality like a cyberpunk novel, completely oblivious to the fact that the books are dystopias.
Tech people can be really, really, really dumb sometimes. That idiocy can go completely unchecked because they often arrogantly believe their type is smarter than everyone else.
John Carmack noted during a Slashdot interview all the way back in 1999: "Making Snow Crash into a reality feels like a sort of moral imperative to a lot of programmers, but the efforts that have been made so far leave a lot to be desired."[1]
I think it was summed up well by an ad (yeah yeah I know) I saw for some university's continuing ed program:
"The sciences teach you how to clone extinct dinosaurs. The liberal arts tell you why that might be a bad idea." [adjacent to graphic of T. rex pursing scientists]
As someone who studied a liberal-arts-esque degree (philosophy, in my case), I can say it definitely didn't make me consider a real-life Jurassic Park a bad idea, or lead me to question much of science and tech and their possible uses.
Heck, you could probably justify most possible moral beliefs via consequentialism, deontology, etc., if you wanted to.
Which field will tell you that it's actually a perfectly fine idea, as long as you take the right precautions, and that taking them is literally what makes engineering a profession?
Jurassic Park wasn't a "don't clone dinosaurs" movie, it was a "don't be an idiot" movie with a dinosaur theme.
> Jurassic Park wasn't a "don't clone dinosaurs" movie, it was a "don't be an idiot" movie with a dinosaur theme.
This is why we have people building torment nexuses and genuinely thinking they're doing nothing wrong.
Jurassic Park was a "don't clone dinosaurs" movie. They were very clear and explicit about it. The characters had multiple conversations about it. It was the central plot. The moral of the movie was "nature is indifferent to the hubris of man". Not that hubris is the downfall of man, but that the universe does not care about you, your intentions, or your actions. If you clone dinosaurs, they will escape at some point and they will kill and torment people.
Did you watch the movie? Did you miss the entire "life finds a way" scene?
You're taking "this is a torment nexus, please don't build it" and reading it as "torment nexus is fine, it's dumb men who are the problem"
I love that this is precisely the point. I sincerely wish more people were forced to actually take a class on all the ways hubris messes things up, and all the ways people explain that they just need to build the torment nexus carefully, paired with all the media about people who say the exact same thing and have it blow up in their faces the exact same way.
But of course the whole point of hubris is that they’ll see all the examples and think it’ll be different for them because they’ve learned from previous mistakes. It’s brilliant!
Man it'd sure be cool if great thinkers had been writing about hubris since the dawn of history or something. But alas, we'll just have to move forward with the torment nexus
They were also writing a lot about deities and heroes who were constantly drunk in between casually murdering or raping people. Some of those themes survived to this day. Doesn't mean we should be basing our behavior on them.
They also believed a lot of things we know today to be just wrong, despite their writings making them seem right. Again, some discernment and critical thinking is required. Skills presumably taught in a liberal arts curriculum, which makes me think that the people arguing for libart education in this thread didn't actually get any, judging by the lack of the aforementioned skills.
I did. My point is that the movie isn't the Holy Bible. The explicit message was wrong, and in a rather dumb way.
"Life finds a way" is just law of large numbers for masses. Big deal. Humans find can find a way too. We've been fighting "life finding a way" ever since humanity learned language. It's not something you run away from, it's something you overcome.
Just because you disagree with the message doesn't change what the message is.
The point is that "life finds a way" is not something that can be overcome. You cannot overcome all obstacles, and the universe not only doesn't care about your hard work and positive attitude, but is utterly unaware of and unaffected by your very existence. Life, nature, the universe all do what they do and no matter how hard you try, you can only influence things. You have no real control at larger scales.
To think otherwise is pretty much the definition of hubris. And men with too much hubris clone a bunch of fucking dinosaurs and set them loose on the world.
> Just because you disagree with the message doesn't change what the message is.
There's a difference between the author-intended message, and the message or messages received by the audience. La mort de l'auteur ("The Death of the Author"), and all.
> The point is that "life finds a way" is not something that can be overcome.
Hopefully not, because that take is bullshit. Of course you can overcome life finding a way. Were you ever vaccinated, or did you ever take antibiotics? That's humans one-upping nature.
The point, if anything, is that you can't overcome nature once and for all[0]. You have to put in effort to stay ahead. It's kind of implied in what life is in the first place. Evolution through natural selection is an optimization system. A greedy, short-sighted, incredibly dumb optimization system, but it has scale on its side. Which is why we are, for example, dealing with "superbugs" now. Life found a way around some of our antibiotics[1]. But this doesn't mean antibiotics were a mistake. It means we need to do better, one-up life again. Say, with phages.
It's not hubris to realize we are smarter than dumb natural selection. It's not hubris to recognize we can win, and keep on winning.
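To make the "greedy, short-sighted optimization" point concrete, here's a minimal sketch (in Python; the fitness function is a made-up stand-in, purely illustrative) of the kind of search natural selection amounts to: mutate at random, keep whatever happens to score better right now, repeat at scale:

    import random

    def fitness(genome):
        # Made-up stand-in: any score works; evolution doesn't care what it means.
        return -sum((g - 0.5) ** 2 for g in genome)

    def evolve(genome, generations=10000, mutation_rate=0.1):
        # Greedy, short-sighted search: random mutation, keep what scores
        # better *right now*. No foresight, no planning - just scale.
        best = genome
        for _ in range(generations):
            mutant = [g + random.gauss(0, mutation_rate) for g in best]
            if fitness(mutant) > fitness(best):
                best = mutant
        return best

    survivor = evolve([random.random() for _ in range(8)])

It gets stuck on local optima all the time; it only looks smart because nature runs it on billions of organisms in parallel for billions of years.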
> To think otherwise is pretty much the definition of hubris. And men with too much hubris clone a bunch of fucking dinosaurs and set them loose on the world.
Don't confuse hubris with hope or ambition. Or engineering.
Meanwhile, I'll take my antibiotics and phages and RNA vaccines and cars and computers and airplanes[2], and if someone actually gets around to cloning dinosaurs, I trust they'll have people on the team who know what interlocks and redundant power supplies are. Unlike in movies like Jurassic Park, there exist non-idiots in the real world, and we also have case studies and regulations governing the handling of dangerous animals and technologies.
--
[0] - Ignoring for a moment that humans are nature too, and everything we do is part of "life finds a way", too.
[1] - In big part because human societies are dumb too. Clear parallels to Jurassic Park here.
[2] - Oh my god, what an exercise in hubris powered flight is! Didn't we learn anything from the story of Icarus?!
If you clone dinosaurs, they will find their way out and kill people. Maybe not your park, but your competitors are eventually gonna slip up. Hippos are now practically indigenous to Colombia thanks to Pablo Escobar. And yep, they hurt and kill people all the time. Good luck unringing that bell.
> If you clone dinosaurs, they will find their way out and kill people. Maybe not your park, but your competitors are eventually gonna slip up.
That's not at all a given, not until such cloned dinosaurs become ubiquitous.
What is the rate of lions and tigers escaping zoos and killing people? Zoos aren't a new thing. I wouldn't be too worried about experimental lions escaping a high-tech, super high-profile zoo.
> Hippos are now practically indigenous to Colombia thanks to Pablo Escobar. And yep, they hurt and kill people all the time. Good luck unringing that bell.
We're a bit more advanced and organized worldwide than in Pablo Escobar's time. Extincting large animals isn't a very difficult feat, especially if people were to react quickly. We usually have the opposite problem - keeping larger animals from being entrepreneured into extinction.
I mean, how much do you think a nu-dinosaur bone would fetch on the market? Also, imagine the damage control budget flowing in if one of the victims of the escaping dinosaurs happened to be a US citizen. "We Have To Do Something" is a force to be reckoned with. Civil liberties get trampled, and whole countries get blown up, once the public fear ripens enough to go past the Thoughts and Prayers stage.
> Extincting large animals isn't a very difficult feat, especially if people were to react quickly. We usually have the opposite problem - keeping larger animals from being entrepreneured into extinction.
Yet. Hippos still roam the jungles of Colombia freely. I'm sure they've even killed a few Americans. Nobody cares.
When you're dealing with nature, you're dealing with something much bigger than you are. Sure you can throw more resources at the problem. Doesn't always work.
Australians fought a war against emus and lost. The tumbleweed war on the American prairie is going nowhere. You seem to have this limitless faith in human ingenuity against nature that just doesn't make any sense. You open biological cans of worms, and it's going to take more resources than it's worth to fix properly. How many billions went into containing Chernobyl? And it's still not fixed. You just seem to want to give carte blanche to the Elon Musks of the world because you want to believe they can dig us out of holes rather than dig us further into them.
Were you really one of the ones who thought he could actually do something positive with Twitter?
> Yet. Hippos still roam the jungles of Colombia freely. I'm sure they've even killed a few Americans. Nobody cares.
Because what's there to care about? It's normal hippos. Not dinosaurs. Not even genetically engineered hippos. Animals roaming the wilderness and occasionally killing people is still a normal occurrence. It has been everywhere around the world, and it always ends the same way: people start using technology to defend themselves effectively, and to develop the land they live on, and expand their reach, and suddenly they need to self-constrain in hopes they don't extinct the previously dangerous animal.
> Australians fought a war against emus and lost. The tumbleweed war in the American prairie is going nowhere.
Which is why I mentioned large animals. Low r, high K. Not rabbits, that can breed as fast as you can make bullets, but more like elephants, which are all too easy to hunt down to extinction. Jurassic Park dinosaurs were more like the latter than the former.
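(For anyone who hasn't met the jargon: the r and K come from the logistic growth model in population ecology, where r is the intrinsic growth rate and K the carrying capacity:

    \frac{dN}{dt} = r N \left(1 - \frac{N}{K}\right)

r-strategists like rabbits win on high r - they breed faster than you can cull. K-strategists like elephants hover near the carrying capacity with few, slow-maturing offspring, which is exactly why a determined hunt can finish them off.)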
> You seem to have this limitless faith in human ingenuity against nature that just doesn't make any sense. You open biological cans of worms, and it's going to take more resources than it's worth to fix properly.
Yes, I do. And that faith includes awareness that we can just as well destroy ourselves with that power (we're part of nature too, after all). This is why avoiding stupidity is important. This includes stupidity of the flavor I'm criticizing here.
> How many billions went into containing Chernobyl? And it's still not fixed.
That has little to do with nuclear power per se, and much more to do with why some parts of the US still drink water contaminated with heavy metals. That, and most recently, war.
> You just seem to want to give carte blanche to the Elon Musks of the world because you want to believe they can dig us out of holes rather than dig us further into them.
Carte blanche is a bit much, but I definitely put more trust in Musk and Gates and anyone who's trying to directly tackle real problems with science, technology and resourcefulness, over randos constantly whining about "playing god" or "hubris", etc. - who don't even believe in their own bullshit, because if they did, they'd all pack up and find some nice caves to live in. Because seriously - how else do people think we can dig ourselves out of the holes we're in? The answer has always been human ingenuity.
> Carte blanche is a bit much, but I definitely put more trust in Musk and Gates and anyone who's trying to directly tackle real problems with science, technology and resourcefulness, over randos constantly whining about "playing god" or "hubris", etc. - who don't even believe in their own bullshit, because if they did, they'd all pack up and find some nice caves to live in.
This is... hilarious. It's either believe in Lord and Savior Elohim Muskiah and crew to save the human race through glorious, heroic science, or give up on technology entirely. Do you not even see the stupidity here? Of course you don't.
These idiots don't represent science. They don't represent technology. They didn't invent the modern world. They're not making the world a better place, the best example of which you completely ignored in my last reply, do I need to spell it out, yes I do, it's the big Xitter. Twitter was doing just fine before ole' Elmoid decided to save the human race from its evil wokism.
You do see them as representative of science and "human ingenuity." You are completely saddened by the fact that science fiction author(s) are coming out of the woodwork to warn against drinking this kool-aid, and are then moving heaven and earth in the comments to defend the faith, circle the wagons. Human ingenuity is fine. It's always been fine. You're motte-and-bailey'ing this. You want to believe in techbro Jesus, and then when called on it you retreat back to "oh, it's just human ingenuity I believe in."
No. I'm not letting you do that. If that were so, this article wouldn't make you sad. You'd be seeing it as yet another example of such ingenuity. You'd see the work authors do as valuable. But instead you just see myths being shattered. The only representatives of ingenuity you seem to value are techbros. If the science mythologists aren't doing their job by properly mythologizing the techbros, then they aren't contributing to the glorious science revolution.
Yeah, humans find a way to mess up. Lots of things are in theory perfectly safe, and yet end up playing the lead role in a disaster. Deepwater Horizon and Fukushima were both caused by a corporation cutting costs on safety measures. That is the thing humans always find a way to do.
> Yeah, humans find a way to mess up. Lots of things are in theory perfectly safe, and yet end up playing the lead role in a disaster.
Sure. This is part of how we learn. We could do better in many aspects, but in general, stumbling on the boundaries is how you expand them.
(Not to mention, "nature" / "god" is 100% growth by mistakes and disasters. Evolution means continuously throwing random mutations at a wall in hopes some will stick. We can't possibly do worse than nature.)
Also, "disaster" is quite a misleading term here...
> Deepwater Horizon and Fukushima were both caused by a corporation cutting costs on safety measures.
... neither of those was a disaster in terms of loss of life[0]. Both were disasters in terms of unnecessary destruction, but they don't prove oil rigs or nuclear energy are fundamentally bad ideas. They only prove corporate greed (in the former case) and paranoia (in the latter case) are problems.
> That is the thing humans always find a way to do.
Among many other things, including clean victories with no loss of life or ecological damage.
--
[0] - Hydroelectric plants are disasters in terms of loss of life, if you want to look at specific failures. Solar PVs are a disaster, if you know how to add numbers. Coal power plants are a disaster, if you know how to integrate.
Before Fukushima happened, I'd heard lots of people claim that nuclear power could be completely safe. Chernobyl was a fluke; it was because of the old model, the communist system and the unsafe experiment they did, but a modern reactor in a modern country could be perfectly safe, and Fukushima proved that a lie. People will cut corners, they will cut costs, and anything that can go wrong, will at some point go wrong.
And yes, Fukushima and Deepwater Horizon were absolutely disasters. Trying to paint them as not disasters is ridiculous. There's no winning argument for you there. And we can absolutely do worse than nature.
> Before Fukushima happened, I'd heard lots of people claim that nuclear power could be completely safe. Chernobyl was a fluke; it was because of the old model, the communist system and the unsafe experiment they did, but a modern reactor in a modern country could be perfectly safe
Still true.
> and Fukushima proved that a lie. People will cut corners, they will cut costs, and anything that can go wrong, will at some point go wrong.
Yeah, a quite unlikely course of events cracked a plant open enough to cause a leak. Approximately all of the actual negative consequences have come from the unnecessary evacuation of a large area.
Or were there some new, ground-breaking discoveries made about Fukushima in the last 2 years, of which I'm not aware?
> And yes, Fukushima and Deepwater Horizon were absolutely disasters. Trying to paint them as not disasters is ridiculous.
Please read with comprehension. "Disaster" as in ecological damage, yes (or sort of, in the case of Fukushima). "Disaster" as in a direct cause of large amounts of death and sickness? Nope. That's what I said. Two different meanings of the word, all too often used to equivocate.
> There's no winning argument for you there. And we can absolutely do worse than nature.
Yes, there is, and nah, we generally can't, because nature is dumb and doesn't care. It would take a lot of combined malice and ingenuity to outdo it.
(Okay, I can accept the argument that we're sorta able to do worse than nature now, because our technology is finally starting to work on comparable scales. Though so far, most of us doing bad is driven by higher-level evolutionary forces - we can't coordinate for shit at scale, so we instead play a lot of "survival of the fittest" games.)
Hubris. Philosophically, it is opposed to reason. The story of Icarus is the classic tale of hubris. As a fantasy of omnipotence and unlimited power, it is irrational precisely because it assumes the possibility of transcending physical limits.
The story of Icarus is the story of inadequacies of wax as an adhesive.
I'd understand your take if you wrote this 200 years ago. But holy shit, we've been doing powered, heavier-than-air flight for over a century. People left their footprints on the Moon, and then they safely came back to tell about it.
If anything, I see such takes as a great example why the talks about "hubris" and "playing god" are not just wrong - they're dangerous, perverse, mind-consuming memes. Literally every advancement we've made in recorded history came from ignoring this take on hubris.
(To be clear: there exists hubris that is dangerous. It's not this, though.)
It's death-of-the-author applied to stories which no longer make sense as they were originally intended, due to reality being other than what the author believed. This is as true for Jurassic Park as for Icarus, except that the explicitly intended message of JP was obsolete when it was written, and it still was written that way for reasons (money, Luddism, or whatever) other than that anyone should have believed it. It's ironic that a similar message about AI, in 2023, is much more well-grounded, but largely ignored due to decades of crying wolf about nuclear energy, genetic engineering, etc.
> The story of Icarus is the classic tale of hubris.
Ancient Greeks thought the sun was a chariot guided by the hand of Apollo. Meanwhile in real life, we've been to the moon and back in the Apollo program and that name was chosen deliberately.
Hubris is specifically extreme or excessive pride, or dangerous overconfidence — if you're genuinely better, for whatever reason, it's not hubris. Doesn't matter why: it could be that you used maths to prove your vehicle won't melt, or that you started Dinosaur Island with just herbivores until you knew more about the gene-splicing tech, and only then introduced one carnivore that can be kept all alone with nothing but a tire on a rope for company, the way some zoological gardens do with tigers.
With AI? I think quite a lot of the development is absolutely hubris.
But if you want an effective example there, it's hubris to use Ancient Greek mythology as your example. Instead, use a modern reference from the real world — there are many modern military examples (the US has the Bay of Pigs and Vietnam, WW2 had Pearl Harbour and Germany attempting Blitzkrieg on the USSR, and now Russia is repeating all of the USA's mistakes from Vietnam with its invasion of Ukraine), or you could point to the many companies with grand visions followed by collapsing valuations (X being only the most recent in major headlines, and nowhere near the most severe).
John Hammond was the villain in Jurassic Park, much moreso than Dennis Nedry. The theme was "Don't do risky, dangerous things while cutting every corner imaginable to save money (bonus: while constantly claiming you've spared no expense)." That's the "don't be an idiot" part.
I feel like trying to make Hammond's greed/foolishness into the main point of the novel is a disservice to the rest of the story. He's not some rare supervillain, he's only a bit more greedy than most people would be in his situation, because on the whole we tend to discount risks.
The "don't do this" aspect of cloning / genetic resurrection hits so hard precisely because that greed is entwined enough with human nature that any person rich enough to be a Hammond, probably is going to also have the ego to think that their idea is more safe than it is, or the greed to think they can get away without the proper precautions -- thereby dooming the project in a similar way.
I remember the novel putting a lot of emphasis on how Hammond was a charlatan grifter and would lie about what could be done with genetic engineering (including his lies to raise money when starting the company). There's even doubt about exactly how much the park's animals really were resurrected dinosaurs.
It's a compelling modern retelling of The Island of Dr Moreau, but with believable science, and using chaos theory to argue why disaster is inevitable in the unstable system Hammond set up (rather than a mere "he was playing god"). I think it's a cautionary tale of science and engineering without morals. Cutting corners leads to bad science, and there are too many examples of bad science being actively harmful to society. If someone does immoral corner cutting, then who knows what the depths of their fabrications are?
> I think it's a cautionary tale of science and engineering without morals. Cutting corners leads to bad science, and there are too many examples of bad science being actively harmful to society. If someone does immoral corner cutting, then who knows what the depths of their fabrications are?
And that take I can 100% agree with.
I've never read the novel (I didn't realize the OG movie was an adaptation), but your description piqued my interest; I'll add it to my reading queue.
> any person rich enough to be a Hammond probably is also going to have the ego to think their idea is safer than it is, or the greed to think they can get away without the proper precautions -- thereby dooming the project in a similar way.
1. "Rich therefore ego" is a popular meme. It's also bullshit.
2. The real world doesn't work that way. Society exists, regulations exist, licensed professions get involved here. Jurassic Park is the tail-end worst possible scenario, and even there, a few people get eaten, big fucking deal. The danger didn't come from cloning, the danger came from greed and negligence. In the movie, dinosaurs were involved. In the real-life equivalent, roller coasters were involved. Some people died because of the greed of others, and of course that's bad, but no one goes on about "hubris" and "life finds a way" just because some dumb idiot managed to blind-walk themselves into operating dangerous hardware without the usual safeguards, with nobody noticing.
The ad, and thus, I assumed, you too. The ad was obviously referencing Jurassic Park, because the only reason this take would resonate with people is that they know it from a movie they grew up watching.
To their credit, the text said, per your quote, "The liberal arts tell you why that might be a bad idea". That "might" there is what makes it a thoughtful take, instead of a dumb one that many seem to be harboring.
The 'Introduction to Philosophy' and 'Philosophy of Science Fiction' elective courses I had to take as part of my engineering program may not have done much to develop my career, but I think they have definitely helped me develop as a person (or at least laid the groundwork).
It's been my experience that spending a lot of time with cautionary tales spun mostly out of imagination teaches people that the consequences of things can be predicted in the same way one might predict the course of a story. This is demonstrably an inaccurate assessment of human ability to consistently predict the future. Stories tend to adhere to specific patterns that make them comprehensible. Reality is under no such obligations.
The issue arises when people try to shape the future and one another in ways that depend on these faulty predictions. Before long, you wind up with people who essentially believe that Cambridge Analytica was easily predictable from the invention of TCP/IP and thus that they (or we) are responsible for averting it. That false assurance is cringe, and it seems to come about in no small part from confusing the conformity of narrative fiction for messy reality.
You have one upthread: "hubris" and "life finds a way", via story of Icarus and Jurassic Park.
Plenty others involve deep-sounding cliches. "Death gives meaning to life" comes to mind. There is a whole subgenre of extreme litart cringe, known as corporate speak. "Synergy", referencing Odysseus, old folk saws, "team that trusts", etc.
People say this a lot but I think something else is true. The cyberpunk visions are obvious applications of certain technologies. When that technology becomes capable and people start to apply it, they name it after what was coined previously.
In the case I'm thinking of, the person was interested in the technologies and the capabilities the protagonists display. He wanted them so much that he couldn't see the bad parts. He didn't comprehend: 1) the protagonist doesn't enjoy the cyberpunk environment as much as he enjoys reading about it, and 2) in a realized cyberpunk environment, he wouldn't even be a protagonist.
Or he comprehended that, but also understood that SF writers are not infallible prophets, that setting and style are just independent stylistic choices, and that the cyber-setting from cyberpunk novels could easily be adapted for techno-optimistic SF, but that was out of fashion in the 1970s.
Yeah, I feel like people are pattern-matching way too much.
If a company invents a giant spaceship, it doesn't materially change the design whether the ship is called "Icarus", "Death Star", "Enterprise" or "Generic-Acronym-GKLE597".
It certainly isn't strong evidence that the CEO has learned the wrong lessons from greek mythology or star wars. Names are just fun.
One point that I heard William Gibson make about cyberpunk as a dystopia is that it depends on your point of reference. There are many places in the world where a cyberpunk dystopia would be a welcome change.
I don't know, man, I assume William Gibson didn't think that statement through. Even if you were to bring something like a cyberpunk universe to a Third World country, it would still be slavery.
Anyway, the cyberpunk movement is being replaced by solarpunk ideals, which are more in line with humanity's needs.
Given that my country, Ukraine, is already sliding there pretty rapidly, it really can't get worse tbh. I would take corporate wars over interstate wars any day.
(wrote this from my home with a backup power generator while studying courses on how to fly FPV drones via goggles and on cybersecurity/electronic warfare)
War is hell no matter what setting you're in. Or, going with Hawkeye Pierce from MASH, maybe war is worse than hell. I really hope things will change for you and your country.
I suspect that we need to go through cyberpunk to get to solarpunk. Cyberpunk is basically AI tyranny (corporations are just the oldest form of AI) and humanity needs to be humbled by encountering something smarter than ourselves before any kind of solarpunk can really 'stick'.
Bureaucratic states are corporations too, brah. (Arguably more like AI than joint-stock companies, as there’s no one person in charge — just the collective incentive structures)
Was he talking about Somalia, or was he talking about "It's good to be the king, even though the techno-feudalism that enables that position is overlaid on a collapsing physical infrastructure and society"?
For that matter, even when you're living in a cyberpunk dystopia it's still in a sense a matter of perspective (if you go by the textbook definition of a dystopia as a terrible place to live). This was really driven home to me when visiting Shanghai. Americans of course hear about the sweatshops, industrial espionage, oppression of ethnic minorities, and conflicts over disputed territory. But if you're a welcomed moderately upper-middle-class visitor in Shanghai and thus don't really see any of that, it's a really nice place to visit (except for the heat and smog, and probably not during a global pandemic). Not unlike how San Francisco is a lovely city as long as you avoid certain areas.
On the other hand, I think you can still overall call such places dystopias due to high disparity in available opportunity which goes hand in hand with all kinds of problems in society at large.
I remember, when 90's Beavis and Butthead was new, a bewildering number of people I encountered who were familiar with the show seemed to think the titular characters' existence somehow validated the corresponding behavior of the real-life versions of the people they lampooned.
"someone made a show about selfish lizard brain idiots, and its like, popular, therefore being a selfish lizard-brain idiot is now confirmed a respectable cultural stock to trade in."
Some people just fundamentally don't get satire. You wouldn't believe how many Warhammer 40,000 fans do see humans as the good guys there. (They really very much are not. There are no good guys in W40k, but the humans definitely aren't it.)
I never watched B&B because from fragments and ads, I concluded that the attempt to validate such behavior was the cartoon's whole point.
I mean, it's either that, or for some reason adults are supposed to enjoy watching fart jokes delivered among puke and dumb-ass stupidity. I found the first interpretation to be more comforting.
B&B was created by Mike Judge, who also gave us Idiocracy, which shows us pretty clearly where his head was at in both cases. The point of the show was partly stoner humor sandwiched in between music video segments on the then-new MTV channel. It's not exactly high art, but it is quite obviously intended as satire by the creator of the show. The intended audience point of view is not that of B&B, who are deliberately presented in such extreme absurdity in order to be unrelatable. The viewer is expected to see B&B the way the straight-man(ish) characters of the show do, as tiresome unfunny nuisances or worse, lacking any sense of self-awareness. Although it could be seen as being sympathetic to the characters themselves, especially given the much greater amount of screen time they get, this may have been simply necessary in order to secure protracted attention from the large fraction of juvenile manchildren in the original MTV audience, whom the show had to ensure it didn't turn off even as it was trolling them specifically.
I think, especially since Idiocracy came out, it's evident that B&B was in part a crude attempt at social engineering, a gentle way to grant the sort of people who might find themselves sympathizing with the main characters the crucial power those characters lacked, which was to see themselves from the outside. It's fair to say that if this is true, it hardly moved the needle in that respect, and may even have slightly backfired.
I'm on Bay Area tech Twitter and there's a growing movement called effective accelerationism. E/accs lionize tech billionaires and call for increased investment in futuristic technologies like AI, decreased regulation, generally more rigid social hierarchies, etc.
I want to peacefully coexist with people in my social circles but it's hard to hide my disdain. I believe a lot of tech billionaires are essentially nihilists creating these stories as a way to increase their wealth and power.
A lot of their values seem to stem from doing too much Adderall, too much coke, and too many hallucinogens. Sometimes it seems like they came up with a lot of this stuff while playing video games on LSD.
Effective accelerationism doesn't value humanity. If anyone gets crushed beneath the wheels of technological "progress," if society turns out worse for the majority of people, well, that's not their problem.
You should call it out. It helps to hear differing opinions.
I'm a bit of a global warming pessimist. I don't think humanity is equipped to turn things around and that we are looking at significant changes, far more than anybody is willing to admit to in the media.
If I were a tech billionaire, I would be designing a sustainable, self-contained, hermetically sealed box you can fit a village of people into. A box of forest and tech that can feed and sustain my family and friends. I like to imagine a utopian medieval village where people are tradespeople and farmers by day, and write code at night.
It doesn't need to be in space, the earth will look like Mars soon enough.
Now that I think about it, I wonder if that's what the Saudis are building with "The Line".
> It doesn't need to be in space, the earth will look like Mars soon enough
You're disregarding how well humans are adapted to earth-conditions. An earth ravaged by droughts, flooding, wildfires, volcano eruptions and superstorms is still more hospitable to humans than Mars - by several orders of magnitude.
I think the whole earth could look a lot like the Sahara or Gobi desert in a few generations. If the biology of our fertile soils change too much, and nature can't evolve fast enough to adapt to the changes, we could lose a lot of what we rely on.
I think it's more likely that a few large famines will wipe out most humans before a total biological collapse, which will end the carbon emission problem.
Mars is colder than Antarctica, drier than the Sahara, and has lower air pressure than the top of Everest.
What little atmosphere Mars does have is 95% CO2, but it's so cold in the Martian winter that the sky literally falls each winter as 25% of it by mass condenses into solid CO2 "dry ice" on the poles. Mars has no ozone layer (not that you'd survive outside without a space suit on), and the entire thickness of the atmosphere is so tenuous that a large coronal mass ejection that happens to hit the planet will kill basically all humans walking or driving around outside on the surface.
The Martian soil has about a million times the concentration of calcium perchlorate (toxic to both humans and plants) as the perchlorate found in water at literal Superfund cleanup sites.
Even then, we'd be able to breathe without helmets. Mars is already like the Gobi, but without the oceans, precipitation, or breathable air that Earth has. It's not close to being the same thing.
The e/acc stuff is the first that came to my mind as well. Well, it's our fault we let sociopaths lead the game, right? The people that glossed over the dystopian parts of the fiction, the dehumanizing parts, because the tech was shiny and the smell of opportunity for themselves was even shinier.
Just a nitpick: hallucinogenics are probably something these people should do more of, as they generally increase empathy.
I like the accelerationist story, and I don't think I came to that opinion through manipulation. If I did, it was by the Club of Rome.
The way I see it is - you're in a car heading for a cliff. You can go all in on the brakes and hope you stop in time, or all in on the gas and hope you can jump the gap. We don't really have enough data to know which is better, it comes down to feeling.
And the time to slam the brakes was probably about 40 years ago. Since people back then elected Reagan instead, I'm of the opinion it's worth keeping the gas all the way down, just in case it works.
I think that's a very good metaphor, because, statistically speaking, there are very, very few cliffs where pressing the gas gives you higher chances of survival than trying to brake and turn. And on top of that, "well, we're already aimed at the cliff and going too fast" is a very good way for the people that put us in that situation to force the issue in the direction they want.
"Well how bad can it be? You can't know it's going to be bad until we try it." -> "Ok, it's looking a little scary, but let's get a little closer." -> "Ok, this is probably a bad idea, but it's not an emergency yet, so let's chill out, ok?" -> "Ok there's still time to turn stop freaking out." -> "Ok now it's an emergency but it's too late to turn so we might as well go full speed ahead and hope it works out."
Yeah, with a car and a cliff hitting the brakes is a good decision. Civilizations have a lot more momentum than cars though.
There's an analysis saying that if the Titanic had hit the iceberg head on, it wouldn't have sunk. I like the car analogy because there's opportunities for passengers to bail and thereby lighten the car for those who want to clear the cliff, but in terms of ability to turn or stop an ocean liner is probably the better metaphor.
You should look into how Futurism, the cultural movement, preceded fascism.
Our era may seem frustrating. I guarantee that accelerating the causes of pain does not lessen the pain. Giving the reins to strongmen, to technology-enriched businessmen, to impersonal processes of capital investment and profit - there is no chance we end up anywhere good. I would rather not go through another World War to prove this idea wrong again.
The issue I take with this philosophy is it's all or nothing. Sam Bankman-Fried admitting to being willing to flip a coin that saves or kills humanity points to this. Either we give all of our hope and money to tech billionaires to save us, or we'll just _________ and then humanity will perish. Also we need to trust tech billionaires who have shown plenty of times that they shouldn't be trusted to save humanity, so I'm not inclined to do that.
Especially when it's these same tech billionaires who are informing us about the risks in the first place.
Tyler Cowen found the rationalist/EA people impressive, but he was skeptical that anybody had enough information and wisdom to see thousands of years into the future. After the fall of FTX, he pointed out that they couldn't even predict the consequences of their own actions one year into the future. Why should we trust longtermism (or TESCREAL or whatever we're calling it now)?
(I'm left-wing so I normally find Cowen's perspective rather dismal. But I think the best part of conservative philosophy is the skepticism that any small group of experts knows best or can direct the course of history for the long term. And here he is resoundingly correct.)
This is kind of hard to discuss since it seems like SBF didn't engage in EA "in good faith" and we can't really know if other billionaires are either. It does make it easy to discount EA because of the FTX debacle, but I guess if it was so easy to discount then maybe EA true believers weren't actually doing anything to offset that black eye.
More than anything it seems like billionaires are trying to convince us to let them hold future monopolies because we just should, ok?
> I think the best part of conservative philosophy is the skepticism that any small group of experts knows best or can direct the course of history for the long term
It's a double edged sword for sure. On one hand, if "conservatism" is about resisting change for the sake of change alone it can restrict our growth as a society. On the other hand, changing something on the basis of the newest group of "experts" deciding we should is something that DOES need pushback in many cases. It really just underscores how society needs all kinds of people coming to consensus to be functional.
As long as society attempts to come together and reach consensus on important things, I don't think we need to be saved by billionaires :)
> This is kind of hard to discuss since it seems like SBF didn't engage in EA "in good faith" and we can't really know if other billionaires are either.
I wonder if EA could ever be implemented in a way that proves or disproves it as a viable strategy. It seems to me that the inherent problem is that people will always behave in subtle and not-so-subtle self-interested ways that make "true" altruistic behavior devilishly difficult to carry out in the real world (especially under the conditions that grant you the billions to carry the philosophy out). And therefore almost impossible to falsify.
Sort of reminds me of the old adage, "Communism cannot fail, it can only be failed." With some people today exclaiming that true Marxism has never been tried. But I can't imagine what perfect conditions could exist that would allow either communism or EA to be carried out, without having to account for human nature in the end.
I think the best interpretation of EA is still "Effective altruism is a question" (which I believe is more or less the original interpretation): how can you do the most (in my opinion, reasonable) good (within a budget)? It's trying to separate feeling good about doing a small act, versus simply pausing to think about what is effective.
Sure, people will converge on claimed solutions to that question. But you can give your own solution[1] (I myself am an EA and disagree on some points, including giving locally in my third world country, and volunteering). The perspective is really valuable I think.
Now that said indeed, don't try to make money at all costs in order to donate. First that can easily fail and be a direct net negative, and second there are secondary effects like losing trust and unexpected side effects on other people. Being honest and trustworthy is a really good idea.
[1] Recently GiveDirectly dropped out of GiveWell's top charities, for probably understandable reasons; I still like GiveDirectly and still give. Just get informed and give well! (to GiveWell or not :P)
Galacta7 hinted at it with the discussion of failing Communism -- there can be plenty of excellent philosophies on paper but once they enter the real world it doesn't matter how altruistic the philosophy is on paper if it's twisted by a single person when they gain control of the real world in some way.
I am not against the philosophy that there are optimal ways to help, and less optimal ways to help. I'm not against the philosophy that tries to weigh the best of the available options. I am against the philosophy that then arrogantly says "This is the best and only way to move forward for the best utility to humanity" as if they are able to see the future.
I don't doubt you have opinions on how to best help humanity, and that's great! As you said, the perspective that we only have a limited amount of utility we can provide for ourselves or the benefit of others, and we must be wise in how we use it is a good one to have. It's the same wisdom that helps me see that I can't give my rent money to another person and tell my family "tough luck" when we get evicted.
On the other hand I feel like utilitarianism can easily lead to decision overload when applied to everyday life. So it's a lens to view the world through but can't be a holistic principle that guides your entire life or you'd never get anything accomplished.
The fact that it's utilitarian is already a red flag for me, because you have to start making judgements about the expected utility output of helping one person over another. Italy had to coldly adopt this mindset when prioritizing care during COVID-19. It has use cases such as ensuring the future workforce and viability of a country in the face of limited healthcare but nobody wants to hear their expected utility is too low to be "worth" helping.
Believing that's the way we should view and calculate every aspect of life feels like a bad mixture of egotism (like Musk believing he's the only one who can save humanity long term) and, weirdly, a selfishness that treats the utilitarian's utility as sacred, to be only "spent wisely" and never "wasted."
I find similarities to how many businesses close because they aren't making infinite growth anymore, or scrap a mostly-complete 80% project (which could be used as-is) since that last 20% is hard.
I think chalking it up to just idiocy is a tad unfair. There are incentives everywhere to think like a marketing person or startup CEO; it's what makes you a valuable employee. You write your cover letter and say, "I am very excited to work on the Torment Nexus," because you need, plainly, a job. But then, with everything after that, you are not only authorized but encouraged to believe in the Torment Nexus by the people who matter most (the people who pay wages). It's understandable that merely placing yourself in this world will slowly infect your mind, because there isn't anybody really telling you it's wrong, and if they are, they are easily dismissed as luddites or socialists or whatever. And it's not like the naysayers have some alternative money to offer you.
Solarpunk is a minor niche thing that gets too bogged down in political discussion. Cyberpunk benefits from not being (at least not directly) political, and from being more aesthetic for 95% of its content.
The "punk" in "cyberpunk" isn't about aesthetics any more than punk rock is. The punk element wasn't just in the relationship of the fictional characters to their settings; it was in the relationship of the writers to mainstream science fiction. William Gibson, Bruce Sterling, Pat Cadigan, Rudy Rucker--they were very deliberately responding to the more utopian, and by then rather dated, science fiction of the 1950s and before, the Heinleins and Clarkes and Asimovs. Cyberpunk is manifestly about the contentious relationship between individual freedom, corporate power, and state control.
Solarpunk arguably is a minor niche thing, but any artistic/literary movement that calls itself "-punk" better damn well be political. Otherwise, it is rather missing the point. :)
Yes, I’m aware of its origins, but as I said, for the overwhelming majority of people, cyberpunk is an aesthetic genre, not a political one. The same is not true for solarpunk.
This is easy to observe on the respective subreddits.
it doesn't have to be political because it's already here, just without the metal arms and cyber-eyes. you're not debating the future since it's now just a standard local / state / national discussion topic.
FAANG tech bros making 600k, and on the other side of the city is the largest homeless population in the US. Human street poop and fiber optic connectivity as the two largest challenges in my brother's neighborhood.
should we ban online social media platforms from being used by under-13s?
does google or FB have to pay for ads in Canada? etc etc
Is it though? I think I've seen one page shared multiple times. Where are the iconic books and stories and movies about solar punk? What are the salient ideas from it that will stick with us for decades?
> solarpunk is a surprisingly hard setting to write in. Not enough conflict
Ha. Where man goes, conflict follows.
A "perfect" world in total harmony is boring. Mr. Rogers' Neighborhood would be pillaged by neighboring warlords on first contact, if it wasn't first subverted from within by the resident entitled 12-year-old furry. Even in a natural utopia, nature abhors a vacuum.
Cults strive to maintain a facade of harmony. The conflict is readily apparent to insiders. It just appears as though there is no conflict to outsiders.
There was a solarpunk-ish epoch in the game Chrono Trigger (Zeal). Everything was peaceful and harmonious. This enlightened age was brought down when its scholars researched tech they shouldn't and birthed the harbinger of the apocalypse.
If there's not enough conflict, you're not looking close enough at what's going on at street level. Write a story from the perspective of a cop, a farmer, a plumber, or any other blue-collar job in this world. People need to eat and drink. What happens when that's challenged? They can't live in their own shit. What happens when pipes collapse? They fight over dumb things. Where and how?
Wherever two people coexist, there are politics. Where three exist, there are factions and intrigue. Where there's four, one is expendable...
Let me rephrase - a protagonist society whose strategy is outgrowing rivals through less wasteful use of resources and peaceful coexistence only 'wins' on a time scale that's hard to write about.
An idea doesn't become inherently dystopian just because some authors chose to portray it as dystopian once. Have you considered that fiction isn't real life, and that maybe an idea is worth exploring even if a fiction novelist disliked it? It's not "really, really dumb" to refuse to take the opinions of science fiction writers as gospel.
I have been working on an essay for a while that explores the "appeal" of supposedly dystopian cyberpunk worlds. The summary is: I don't think they're actually dystopian; in fact, they have a lot of attributes that are lacking in today's world, specifically a larger tolerance for experimentation on the body and in architecture, and an extreme urban density found almost nowhere on Earth today. It's the same reason why Kowloon Walled City was so intriguing. Calling people "dumb" just because you don't understand them isn't very insightful.
Dystopian cyberpunk worlds are usually described as hollowed out by corporations or similar power structures to the point that it's every person for themselves. Some people have the resources to fight back, but most do not. That's what makes them dystopias, in my opinion: if you are not part of the privileged class, there is no opportunity to change anything about your situation.
I feel like anarcho-capitalists fall into this same trap, believing that a stateless world ruled by the individual will lead to the most freedom, when in reality it will almost certainly lead to the worst kinds of tyranny and oppression imaginable (as Chomsky observes). In my experience, the very people who pine for an ANCAP future would be the ones most crushed under its boot heel.
Do the people who want to make cyberpunk real mean they want to make corporatocracy real or they think VR videogames would be fun? And aren't there degrees of "bad" to cyberpunk novels?
Yeah, I don’t think most of the world presented in the Sprawl trilogy is all that dystopian. Maybe a bit overly-unequal, but also with more space for freedom. There are a lot of fascinating places in those novels that would never be permitted in our world today – a hydroponic garden covering the entire floor of a skyscraper, for example.
I'm sure if you asked Peter Thiel why he pumps the blood of young men into his body he'd say something similar, so maybe it's just that one man's dystopia is another person's power fantasy?
The recurrent theme in all cyberpunk media is the lack of humanity we're left with. This isn't something to strive for.
All of your needs are commercialized. Relationships are reduced to tubes you can jerk off into while watching a gyrating hologram for $1.99 a minute.
I respect your attempt to be nuanced but you're really missing the point of cyberpunk itself. The neon lights and wild fashion aren't glamorous, they're artifacts of the desperation of attention-seeking systems attempting to stand out by being more over-the-top than everything around them.
Every sense is exploited to death, and when that's not enough, we'll sell you body augmentations in exchange for your mortal soul (cyberpsychosis' technical debt).
It's a body-shaming, soul-sucking hellscape. Being You is pathetic. Anyone who isn't able to be a perfect corporate drone is generally a low-level criminal of some type, subsisting off of carbs and sodium while living in a squalid apartment the size of a cargo container. Nobody ever "makes" it. You're born rich and live forever or you live and die with rats.
Every protagonist quickly finds themselves with a 5-star wanted level and has a hypermechanized cybermilitary force pursuing them over dumb shit like delivering a package or picking up the wrong passenger. The bureaucracy of justice is impenetrable; I've never seen a cyberpunk Phoenix Wright, only variations of Judge Dredd.
You exist to be exploited, and the minute you stop being useful you're thrown into a literal meat grinder of some form. You don't hear about cyberpunk prisons, just execution squads.
The world is already working or headed this way. To want this future, you really do have to be a fool, a sociopath, or wealthy.
It’s a genre of literature. People have different interpretations of it than you do. I don’t think something like Neuromancer or Count Zero is dystopian at all, but as I said, have spaces for more exploration and experimentation that our world today doesn’t allow for. I wouldn’t call it utopian, but to insist that someone who finds it slightly appealing is a sociopath is to miss the point entirely.
I’m not sure what is so difficult to understand about this.
What I also find interesting and revealing is how surprised they are when others find their vision of the future (the near and the far) scary and dystopic, and challenge them. I used to tell crypto bros (most of whom have done a quick transition to AI-brohood recently) why and how their idea of the crypto-driven economy sucked, and they would just treat me like a weirdo who just didn't get it. This is all before the big crash, of course. I guess part of it is how crypto was getting insane amounts of funding, which legitimized and corroborated the idea, and also created a strong echo chamber where they were shielded from criticism. Money is great at making you ignore that you might, just might, be completely wrong.
That's why I stay away from endeavors that look too good to be true. If something becomes too popular too quickly, and causes people not to question their surroundings or others' motives, run as far away as you can.
You'll have a much better life doing the right thing than doing the wrong thing.
The core difference between crypto and tradfi is that crypto is open and hackable (in the creative sense) by anyone without having to receive the permission of the gatekeepers.
To believe that that is a bad vision of the future betrays something of the exact pessimism about humanity and authoritarian bent that this article complains about.
Pessimism is believing that there is no possible positive future for humanity. I don't get how rejecting one option is the same thing. How, again, rejecting one option due to clearly presented reasons is authoritarian is also beyond me.
It's not just them being dumb, some genuinely believe that they would be on top of the hierarchy in such a world, that their work to bring about that world would put them in a position of power and influence over others. They're effectively technological libertarians.
Or... you could try to look deeper than idiocy: there just might be something else. Disenfranchisement alone can push people pretty far.
You would think most of the world would be clear on that by now?
The discussions within Cypherpunks, to take one place, could get pretty "down to earth". It's still later than Neuromancer, but the people there seemed pretty thoughtful and explicit.
This kind of issue comes up all the time: "why do the poor do X", "why do these employees do Y"... You can usually figure out incentives that go beyond "idiocy". Recently on HN: "why do people do multiple ACH or ACH-equivalent transfers?"
Oh god. As much as I respected 'cstross, the kind of outlook culminating in this essay starts to read to me as "I'm gonna shit on scientific and technological progress, because I can't have a career as a sci-fi writer if the -fi parts become real too fast".
I get the idea that some ideas are better left unrealized. I get that one may not like certain people, or business models. But shitting on SpaceX and NewSpace in general, or most recently on LLMs? That's plain ridiculous. Reminds me of the Douglas Adams quote about anything invented after one turns 35.
Such a sad, depressing outlook in times when we need to look forward, rekindle hope, and have some dreams. And I don't mean we should ignore current problems - just that it's hard to solve anything when people have no hope and succumb to a crabs-in-a-bucket mentality. And assigning blame to random convenient targets, particularly anyone who dares to present a positive outlook (justified or not), is definitely not helping solve anything. It's just reinforcing the fucking depressing mess of a zeitgeist we have today.
I don't think those people are getting targeted for "presenting a positive outlook," and the dire outlook is not (solely) because of commentators' pessimism.
Both pessimists and optimists need a bit more of a clear-eyed view of the problems we have in front of us (many created by technology) and the solutions we have available (many of them... also technology).
I don't understand this characterization. He cites a wealth of the kind of history that only a seasoned and successful fiction author could have deep knowledge of. I may end up rereading it so I can follow the thread better. It's fascinating precisely because of that, but you want to skip to the end and just criticize the mindset because you didn't like the conclusion.
I get it, tech geeks want to believe in tech geekdom. But this is an unexamined religion, the priesthood of which is right here to peel back the curtains and show how it's all smoke and mirrors, and you just want to crucify the non-believer. Elon Musk et al are not the writers of the myths, and rockets and LLMs are not the communion wafers. But you seem to want to treat them as such.
Meanwhile, Stross is literally in the smoke-and-mirror business. He feels threatened, I get that. As an author, LLMs must look like an unpiloted bulldozer heading straight for his house. That's OK; he'll end up in a better neighborhood, just like the rest of us.
This particular brand of billionaire really believes in the power of technology and science to save the world, enough to make these kinds of moonshots. That belief didn't come from nowhere. It came from reading a particular kind of science fiction from a particular era. Stross points this all out.
Without the fiction giving them these ideas, they would be going about their business much much differently. They'd be doing it like other, ordinary American business. The kind that actually changes the world. Shipping, commerce, unS3XY enterprise. That's what made the modern world.
The fantasy that individuals affect history and not just culture is precisely the smoke and mirrors. Information technology makes service economies viable; that's the only contribution that could even remotely be tied to improving the global order. And it wasn't done by techbro enthusiasts: Apple and Microsoft enjoy the glory, but the real heroes are the academics who did the real research.
> Without the fiction giving them these ideas, they would be going about their business much much differently. They'd be doing it like other, ordinary American business. The kind that actually changes the world. Shipping, commerce, unS3XY enterprise. That's what made the modern world.
Been reading a bit about Henry Ford lately. I wonder what sci-fi novels he read as a kid. If you think Musk is an asshole...
At any rate, you're making the same mistake Stross does. Human will to power came first, then the sci-fi, then the reality. For better or worse, the sci-fi is optional. Fiction writers don't build the roads, they just paint the signs.
Trust me, if the current robber barons thought they could get away with acting like the old robber barons, they most certainly would. Society changed and made it much more difficult for a single person to amass so much power. It will be the same with the new batch. In another decade, this tech cult will be a distant memory.
> I'm gonna shit on scientific and technological progress, because I can't have a career as a sci-fi writer if the -fi parts become real too fast
This is not a fair characterization: cstross wrote several novels about singularities or set in post-singularity universes decades ago - before the current LLM hype. I suspect he has thought more about a/the singularity than those he is criticizing in TFA. He even wrote blog posts about why he doesn't write singularity-related stories anymore (six[1] years before Transformers, and the "Attention is All You Need" paper). Those timelines do not match your interpretation of him being salty about tech moving too fast.
Your dismissal sounds a little shallow: there's more to sci-fi than AI, and he doesn't just write sci-fi; his Merchant Princes and Laundry Files books are fantastic.
He's also very specifically said that he would like the world to slow down so he can finish stories before they're obsoleted by reality. This is why Halting State got one sequel rather than two: http://www.antipope.org/charlie/blog-static/2013/12/psa-why-...
> He's also very specifically said that he would like the world to slow down so he can finish stories before they're obsoleted by reality
My read is that he says it with tongue very much in cheek. So it's not that his stories are obsoleted, but that they're overtaken by reality, making the plot appear to be inspired by recent events - no author wants to be seen as unimaginative. Also, it has happened to him multiple times: he had to scuttle another go at a Halting State sequel scheduled for 2022, which pivoted around a (then-fictional) global pandemic[1]
I agree. I find the tone so tiresome, partially because it is mean and simplistic. I don't feel like he's going deep or considering things intelligently. He's reached his conclusions and is preaching a dogma/vibe now.
I did not mean that. 'cstross is smart, writes very well and remains relevant. But I strongly disagree with the outlook of this essay, and the overall vibe I've been getting from his blog in the past couple years; I also find it depressing that it seems to be increasingly representative of the "tech industry counter-culture", for lack of a better name.
It doesn’t feel like he’s shitting on SpaceX. His point is still valid, however, in that we should not divert resources from, say, COVID vaccines for everyone to immortality for the über-rich.
We saw how social networks were used to destabilise democracies we took as invulnerable to such meddling. We saw how placing the accumulation of capital over all things including the welfare of our species can lead to dangerous priorities.
Science fiction: a genre of speculative fiction, which typically deals with imaginative and futuristic concepts.
Given that sci-fi authors regularly speculate about all sorts of potential futures (and presents), is it really that hard to imagine that some of these futures might come true in some shape or form? It's really rich to trace modern implementations to decades-old ideas and then claim you were the one to create them: you get to pat yourself on the back while wagging a finger at the folks actually doing the building. Zuck's Metaverse would likely have existed without Snow Crash, though probably with a different name. And the Motorola Razr would likely have existed without Star Trek communicators. I am not saying that all modern innovations are good ones or that no inspiration took place; I just find the argument that their existence is entirely rooted in science fiction a weak one, full of survivorship bias.
Given that the Zuckerverse has been a bust, and the motorola razr has now been replaced with the ubiquitous touchscreen-slab smartphone, I find it difficult to believe that either of these were inevitable, even as an intermediate step towards the next thing.
But I think the author's main point isn't to point out that these things come from science fiction (these points are obvious). The point is that science fiction is _fiction_, and devoting huge amounts of real-world resources to build a fantasy isn't going great for solving the actual, real-world problems we face.
Not convinced of the point that these inventions "come from science fiction", sci-fi popularizes and speculates about the future of current science and technology, it doesn't invent the science and technology. Cell phones have existed in experimental forms since the 1940s[0] so they certainly weren't invented by Gene Roddenberry, he just speculated that they would become smaller and more powerful.
Likewise, Stephenson was riffing on the existing technologies of the internet and VR[1][2] by performing the simple trick of asking what happens when these technologies get better, smaller, cheaper and more widely available. Honestly, the speculative part of sci-fi is the easy part. The credit due these artists is in their fantastic storytelling abilities and their world-building around the what-ifs. They shouldn't get the credit for inventing the technology itself, since (a) it already existed and (b) the notion of it getting better over time is fairly universal and not driven solely by their visions.
> Not convinced of the point that these inventions "come from science fiction", sci-fi popularizes and speculates about the future of current science and technology, it doesn't invent the science and technology. Cell phones have existed in experimental forms since the 1940s[0] so they certainly weren't invented by Gene Roddenberry, he just speculated that they would become smaller and more powerful.
To amplify your point: it's worth noting that the Star Trek communicator was depicted as a compact, extremely long range two-way radio. Cell phones only vaguely resemble them. So even less credit is due to sci-fi.
I think it's also the case that multiple concepts end up manifesting in ways that we can't envision or simply don't consider (see Marshall McLuhan). For example, this was also science fiction, but is probably much more relevant than a two-way radio when considering how we use our mobile devices, even if the form factor is far closer to a tricorder: https://en.wikipedia.org/wiki/Memex
> The point is that science fiction is _fiction_, and devoting huge amounts of real-world resources to build a fantasy isn't going great for solving the actual, real-world problems we face.
Science fiction is a great sales tool. We can wrap solutions for actual problems we have in a shiny spacesuit to make them more palatable to investors and the general public, knowing that delivering the shiny product will also solve a number of real, pressing problems.
Zuck's Metaverse may not be solving a problem you face, but it is solving one for Zuck (trying to, anyways) - controlling a distribution platform where potential competitors like Apple or Google cannot dictate terms. SpaceX and Starlink are solving plenty of real-world problems even if "get to Mars" is the long-term vision (and one I'm frankly thankful for, as little as I like Musk). LLMs are clearly a distinct phase in AI evolution, even if it's too early to talk of AGI - real-world applicability here is a bit more tenuous, but I certainly wouldn't throw it in the same bucket as, say, crypto. I think this idea that these explorations at the edge are a waste is a form of Luddism. Countless innovations that _do_ benefit humanity (including allowing us to have this conversation) have sprung out of them.
It seems this way on the surface because everything seems equally inevitable. Maybe it's all inevitable, but many technologies come around and don't look like something described in science fiction.
People vaguely predicted the smartphone, but not really. It is not from Star Trek or Gibson. This technology may have been inevitable, but its design, terminology, and effects were not predicted in detail by mainstream sci-fi, and it really did turn out differently.
On the other hand, the Humane AI Pin is absolutely a Star Trek: TNG communicator badge. There's no denying it. VR and the Metaverse are lifted directly from Neuromancer and Snow Crash. They didn't have to end up that way, but they did, because we had those examples in fiction to steer our thinking.
It's true that predicting things isn't as impressive or as important as making them. But many people predict things that don't happen, and most people don't accurately predict anything. The sci-fi authors are a lot more like the people who make the things than the people who don't make anything and don't predict anything.
It’s much easier to work toward a goal you can see and communicate clearly. Science fiction is useful for that. Motorola’s StarTAC wouldn’t exist without Star Trek’s communicator. It turns out a screen is more convenient for the other things we do with phones (they were getting smaller before every phone became a PDA, a camera, an e-reader, an e-mail client and so on), and the Star Trek design came and went.
I speak of Star Trek because it shows a utopian future, while most other large sci-fi universes go the opposite way and show dystopias. In that sense, Trek tries to predict a future while others try to prevent one.
There is enormous value in proposing a future people want to be part of.
Science fiction authors paint with a future-colored brush but every sci-fi work I've read or watched has been a reflection of the present. Consider steampunk where they change the palette to writing about a fictional past, but the themes and style remain the same.
The fiction writers didn't create the Torment Nexus. It's been here the whole time and TFA is blaming the child who told us the Emperor has no clothes.
Without any of these futures coming literally true, it's enough to find inspiration, hope, nuggets, ideas, exoticism, entertainment, and other useful brain rearrangement. And to remember that it's fiction.
I think the problem is that there is, globally, only a handful (well, a few dozen handfuls) of individuals who can really move the needle by a lot just on a whim. If tomorrow I decide to dedicate the rest of my life to making the metaverse happen, that's just one idiot toiling away. If Zuck decides the same thing, in a few years it might be socially unacceptable to not have fun in the metaverse.
Having just a few people have such an outsized influence might still be acceptable, but (and I think this is Stross' point) they just exhibit such exceptionally bad decision making as to what to focus on that it's hard to defend. It may be that there are tons of billionaires that just spend their money on luxury items and the occasional philanthropic venture, and who we don't hear much about. But the ones we do hear about really seem to be the poster boys for "money corrupts."
And yet the Metaverse is the punching bag of any discussion about Meta, despite Zuck throwing the weight of an almost-trillion-dollar company behind this vision. Perhaps a whim is not enough? And if the execution is excellent and the vision takes off, sounds like it does solve a problem, so what's the issue? If the choice is between spending one's billions on luxury items or pushing the envelope in some fantastical direction, I'll always vote for the latter (though I am a fan of Gates' approach to philanthropy as well).
> hyping the current grifter's fantasy of large language models as "artificial intelligence"
I really can't take him seriously after that remark. No, language models are of course artificial intelligence; they are systems so impressive that they were considered decades away before GPT-3 was released a few years ago, or, for most people, before ChatGPT, just one year ago. They of course aren't precisely the classical sci-fi intelligence that Stross and his colleagues had in mind in past decades, but that speaks against those expectations, not against the status of language models as AI.
By the way, a better SF writer, Stanisław Lem, did predict something similar to language models: Systems that would read novels of one author and then produce similar novels as output. Basically an LM without fine-tuning.
What Stross does is the classic goal-post moving, where anything more narrow than human intelligence is ultimately dismissed as not real AI. As an added bonus, he lampoons concerns about such "real" AI scaling to superintelligence as being totally speculative. Basically, either it's not real or it's just irresponsible speculation, simple, case closed!
I wish that I were good enough at communicating why I think that large language models aren't AI, but I just can't really argue against blanket statements like "it doesn't matter how it is implemented if it still appears intelligent".
People give me arguments like, it doesn't matter that LLMs don't really have any internal "thoughts" or ambitions, because each token is calculated solely from the previous tokens - that can be added on. You can chain LLMs together and do a bunch of this other stuff in order to implement that. "Internal thought and deliberation is just a feature you can implement on top of existing LLMs!"
Just... I get that AI won't necessarily have to work like how biological brains do, because intelligence is achievable in many different ways, but LLMs just... aren't it.
The people most against LLMs being described as AI seem to be the people who have a pre-existing notion for what AI is, and they point to all the things it can't do as reasons why it isn't actually AI. I've considered LLMs to be a piece of a greater and more complete AI in the same way our vocal cords are a piece of our internal thoughts manifesting in the real world so other people can hear them.
So an LLM is a great speaker, but it doesn't understand what it's saying. Does it need to? Do my vocal cords need to understand the stream of vibrations my brain asks for, or just respond? Understanding speech takes place at a different layer than merely repeating the "correct" speech.
Now all we need to do is attach the LLM output to a database for long term context storage, overlay a constant weight adjustment daemon that the LLM can trigger to control how it "learns", and tie the LLM temperature to /dev/random so it has mood swings like all of us meatbags do. THEN it will be a perfect human-like AI :P
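A minimal sketch of that duct-tape stack, for laughs (the generate() function is a hypothetical stand-in for whatever LLM API you'd actually call, the weight-adjustment daemon is left as an exercise, and none of this is a real agent architecture):

    import os, sqlite3

    # Hypothetical stand-in for a real LLM API call.
    def generate(prompt: str, temperature: float) -> str:
        return f"[model output at T={temperature:.2f} for: {prompt[:40]}...]"

    # "Long term context storage": a table of past exchanges.
    db = sqlite3.connect("memory.db")
    db.execute("CREATE TABLE IF NOT EXISTS memory (prompt TEXT, reply TEXT)")

    def chat(prompt: str) -> str:
        # Mood swings: derive temperature from OS entropy, /dev/random-style.
        temperature = os.urandom(1)[0] / 255.0
        # Recall the last few replies and prepend them as context.
        rows = db.execute(
            "SELECT reply FROM memory ORDER BY rowid DESC LIMIT 3").fetchall()
        context = " ".join(r[0] for r in rows)
        reply = generate(context + " " + prompt, temperature)
        db.execute("INSERT INTO memory VALUES (?, ?)", (prompt, reply))
        db.commit()
        return reply

    print(chat("How are we feeling today?"))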
Good observation. Though to extend the analogy, LLMs may be capable of giving rise to true AI in the same way that the Enterprise's main computer could fairly trivially create sentient (and seemingly sapient) individuals like Dr. Moriarty, or the Doctor, despite not having that property itself.
I've not read Stanisław Lem (and have only seen the Western adaptation of Solaris, not the original), so I would add an additional example: the three levels of AI used by Alastair Reynolds in Revelation Space (published in 2000): alpha (sentient brain uploads); beta (non-sentient simulations based on what you do and say in your life, so kinda like GPT, even though he didn't have that as an example to write from); and gamma (basically just normal programs).
Political debate is such a waste of time. Why bother? Instead, write a book wrapping up all of your pet hates in the form of an antagonist, point a neon sign at them calling them evil, then use them to 'prove' that anything superficially similar in real life is also evil. AI? IDK, sounds a bit like Skynet! Once it's in a book or a film, it's unassailable.
Hopefully the new Dune films will maintain its political leanings so a new generation will be able to 'prove' the rationality of feudalism and the radical efficiency of military sapphism.
> AI? IDK, sounds a bit like Skynet! Once it's in a book or a film, it's unassailable.
Agree with your post, but I feel the need to be annoying here: I want to point out that Terminator was written when AI x-risk wasn't a serious consideration and was just made for a fun apocalypse idea, and that actual AI existential-safety advocates feel that the movie seriously hurt discussions in the field because it anchors everyone's perceptions to silly ideas (e.g. humanoid robots instead of reaper drones or carbon-based nanobots, an AI that's evil instead of just apathetic, etc.).
The first movie might anchor people's perceptions that way but it's not because of what's actually in it. The movie shows plenty of non-humanoid robots and explains that the humanoid ones are specially designed that way to be infiltrators. It's the latter movies that showed silly armies of the humanoid robots operating as grunt infantry. Also, it does not show Skynet as being any more evil than humans are to non-human animals. There is no indication that Skynet is sadistic. It initially attacked the humans in self-defense and it continues to fight the war for the very rational reason that humans will destroy it if they can.
It makes sense from a plot perspective, especially if you believe the theory that the Honored Matres were a team-up of the Bene Gesserit, the Fish Speakers, and the Tleilaxu axolotl tanks.
Just because the books mention or deal with sex doesn't mean they're 'horny'. There are no titillating scenes, no lurid descriptions. Sex is presented as a tool, used for control and power; love itself is explicitly rejected and cast aside as a tool that is no longer needed.
I used to think that a major component of Western education (I myself hail from Eastern Europe) was the emphasis put on critical thinking and critical reading. The recent misreadings of Dune by the younger generations have made me seriously question this assumption.
Religions act like governments - their first inherent preference is stability. SV ideology is explicitly anti-stability, exemplified by the phrase “move fast and break things”.
The social order was never on the table for breaking. SV depends on banking and state violence for enforcement just as any other wealthy businesspeople do.
A brief study of the history of SV will show you that it was actually the US military (and its money) that is in many ways responsible for creating the culture of groundbreaking innovation that exists there to date.
The conventional media seems to have a huge axe to grind about social media, presumably because it's affected their own influence and prestige, and that has warped people's perception of reality a tad. For example, there was a BBC article a while back that made it onto here about how the UK was supposedly at risk of a measles outbreak because vaccination rates had dropped due to Facebook misinformation. If you actually looked into the details, it turned out those childhood MMR vaccination rates were only down a fraction of a percent from the all-time high. The bigger danger our medical experts were worried about was all the adults who hadn't gotten vaccinated as kids decades ago due to Wakefield, long before social media or most people had internet access - and the BBC was of course one of the main organisations that spread those claims.
I think that didn’t make much difference relative to the influenza pandemic a century ago, modulo what tools they had at the time, like masking.
This is disingenuous for two reasons: (1) where else does proof come from if not from experts in their respective fields? and (2) various forms of crackpot denialism persisted long after proof was (repeatedly) given.
People were peddling horse dewormer as a COVID treatment even after multiple studies that demonstrated no benefit.
It's disingenuous to class one group's "trust the experts" as proof and the other's as crackpottery. Both groups blindly believed claims that a new treatment or repurposed drug would prevent COVID. In neither case did it work.
I did it the scientific way of not contracting the coronavirus: isolation. Neither the dewormer nor the multiple experimental shots were as efficacious.
That's because we got all the bad bits of cyberpunk without any of the cool bits. Megacorporations running everything and owning everyone, but no badass cyber-enhancements and only rudimentary (by comparison) VR.
For what it’s worth, much of the cyberpunk skeleton is in place - good and bad - but it’s not clearly visible from a thematic perspective, because human life isn’t as cheap as in genuine cyberpunk environments.
We’re also, frankly, moving towards larger levels of human commoditization. Cyberpunk might make people disposable but it doesn’t make them totally irrelevant, certainly not the way the modern world does, because otherwise the story doesn’t have main characters anymore.
And yet social media is giving people a profound sense of entitlement that comes from believing you're the main character of some story -- colloquially called "main character syndrome".
Maybe that just speeds the personal commoditization? (Syndrome (Incredibles) voice) "When everyone's the main character... no one will be."
> Megacorporations running everything and owning everyone, but no badass cyber-enhancements and only rudimentary (by comparison) VR.
Did we get the same cyberpunk media? Because "megacorporations running everything" isn't a massive hyperbole in cyberpunk like it is in real life.
I'm not seeing corporate security with machine guns when I walk into Google. Cyberpunk 2077 starts with a brain-implants company straight-up murdering half the European council before a strategic vote.
There's an ocean-wide margin between "corporations can influence home politics" and "corporations rule the world".
And my neighbor who believes the construction noises she's hearing at night are from the pedophile ring (led by Biden and Hillary) digging tunnels to transport children they sell online with their names disguised as SKUs on regular-looking furniture websites
• A global health crisis forces a generation of children to socialize and be educated via primitive virtual reality, while corporate headquarters (and some cities) become ghost towns.
• Groups resist technology (vaccines, masks) because of conspiracy theories about origin, control, and autonomy. Resistance screeds and memes are spread by politicians and corpos built to profit from personal information and engagement uber alles.
• Countless deaths prevented by previously unheard-of biotech advancements created by global pharmaceutical companies in a suspiciously short timeframe. Fragile global systems like supply chains shatter like uncooked spaghetti. Surveillance technology deployed to people's personal devices. Etc.
It is a good set-up. We’re just perpetually in the “events leading up to the international cybercorp wars” stage.
I think the real world is not separated nicely into backstory and actual story, so we’ll kind of bumble along for better or worse instead of hitting some grand moment of change.
Near future fiction needs that dramatic moment to neatly jump from the present to the setting.
Well, corporate warfare is just warfare since the megacorporations have essentially purchased the governments of the Earth. Much of government regulation and market intervention is a result of regulatory capture, and voting effectively achieves very little other than keeping the people fighting each other while the megacorporations and ultra wealthy divide the spoils.
None of that is really cyberpunk, though. The second is the result of people believing they live in a post-apocalyptic dystopia when they don't, but in an actual cyberpunk story the conspiracy theories would have been correct. The health crisis would have been manufactured by black ops pharmaceutical companies working with the CIA, and COVID vaccines would have contained nanotech that let corporations insert AI code into people's minds through the screens that people were now forced to spend an inordinate amount of time in front of.
The protagonists would be a ragtag group of antivaxxers and anarcho-primitivists simply trying to survive outside without getting eaten by cyberzombies or sniped by Boston Dynamics killbots while the New World Order tries to force the populace into some kind of capitalist Instrumentality.
We're not there yet, but give it ten, maybe fifteen years. At least one (maybe two or three, to account for cultural osmosis from parenting) generation entirely raised in a world in which consensus reality and culture are entirely created and managed by AI, genetic engineering and cybernetics (particularly AR and brain-machine interfaces) become common, and the breakdown of large-scale and complex societal structures due to climate change really begins, leading to corporate micronations, crypto economies and autonomous government. When relationships between people and technology become primarily parasocial and psychosexual.
> The protagonists would be a ragtag group of antivaxxers and anarcho-primitivists simply trying to survive outside without getting eaten by cyberzombies or sniped by Boston Dynamics killbots while the New World Order tries to force the populace into some kind of capitalist Instrumentality.
The heroes always have to be the "ragtag group of outsiders and vigilantes" and the conspiracy always has to be true, otherwise it's just not an interesting fictional story.
I'd argue that our world is more and more cyberpunk, but the real life good guys aren't the fictional cyberpunk protagonists, nor are they the New World Order stormtroopers.
This framing shifts responsibility to an abstract virus, rather than the human response.
Early statistics showed the overwhelming majority of people would be fine if exposed to the virus, given that at least half of those infected were asymptomatic. Older people with comorbidities were at higher risk, though survival was still on the order of 98%+. The overall mortality was on the order of the 1968 Hong Kong flu, which most people have forgotten. Children especially were never at significant risk. The choice to lock down and do Zoom schooling came at the cost of the youth, who have seen marked declines in academic performance and increases in mental illness. We can see that nations like Sweden, which never locked down, turned out fine.
The virus emerged from Wuhan, where there was the virology lab working on gain of function for coronaviruses. The NIH had funded such research while Fauci was leading the organization. A revolving door between pharmaceutical companies and the FDA helped lead to the suspiciously rushed vaccines, while restricting the release of Pfizer's trial data until 2096. The mandates did not make sense since the vaccines were shown to not limit transmission. Given how many boosters some people are on now, they are probably some of the least effective vaccines of all time, compared to the actual good ones for smallpox and polio.
Supply chains shattered, or in other words, the trade war that Trump initiated against China escalated. The virus was a compelling justification to shut down factories, causing supply shocks across the world and demonstrating the foolishness of neoliberal, globalist economic policy and just how dependent the West had become on manufacturing based in a hostile totalitarian state. Monetary policy in response to the supply shocks caused widespread inflation and reduced living standards for young Americans with little wealth.
Authoritarianism, surveillance, and censorship expanded, which was already the long term trend in the West as its elites seek to emulate the strengths of the Chinese state.
I can agree that the government's response in 2020 was horrible, and those people at the top should be held accountable for that awful response.
> Early statistics showed the overwhelming majority of people would be fine if exposed to the virus,
Hospitals were overwhelmed. That was the concern. The people who were "fine"? Many of them were fine only because they were hospitalized. If we had just exposed everyone to it all at once, it would have been a lot worse.
> We can see that nations like Sweden which never locked down turned out fine.
If by "turned out fine" you mean having significantly more deaths than neighboring countries and still suffering economic issues, sure.
> survival was still the order of 98%
Survival rate ignores long-term damage, which we are still learning about today, and is an ignorant statistic to use. It's why many disasters talk about casualties and not just deaths. It's like saying that more than 98.5% of the children in Uvalde survived.
> The virus emerged from Wuhan, where there was the virology lab...
And not a single US Intelligence agency has come out strongly in favor of the Wuhan Lab theory, beyond one saying it's "plausible."
> Supply chains shattered, or in other words, the trade war that Trump initiated against China had escalated... Monetary policy in response to the supply shocks caused widespread inflation and reduced living standards for young Americans with little wealth.
The argument here being that it was Trump's trade war with China that increased inflation and reduced living standards...
If by "turned out fine" you mean having significantly more deaths than neighboring countries and still suffering economic issues, sure.
And significantly fewer deaths than countries that closed schools for years or arrested people for walking alone in parks.
> Survival rate ignores long-term damage, which we are still learning about today, and is an ignorant statistic to use.
Ok, just also make sure to include the psychological and economic costs of NPIs, which we are also still learning about. Remember the teachers' unions saying that learning loss was a myth and kids are "resilient"?
> And not a single US Intelligence agency has come out strongly in favor of the Wuhan Lab theory, beyond one saying it's "plausible."
Two agencies lean toward a lab leak, others toward natural origin, none with high confidence (https://www.cnn.com/2023/02/28/politics/wray-fbi-covid-origi...). A lab leak origin is perhaps less likely than the alternative, but it's an entirely reasonable hypothesis.
> And significantly fewer deaths than countries that closed schools for years or arrested people for walking alone in parks.
Which country had schools closed for years? Not the US.
> Ok, just also make sure to include the psychological and economic costs of NPIs, which we are also still learning about. Remember the teacher's unions saying that learning loss was a myth and kids are "resilient"?
Oh, the "I want to kill my parents to save the economy" argument. Gotcha. Just to be clear, which of your parents would you off to save your 401k?
I'll answer your question once I know where you land on that.
> Two agencies lean toward a lab leak, others toward natural origin, none with high confidence (https://www.cnn.com/2023/02/28/politics/wray-fbi-covid-origi...). A lab leak origin is perhaps less likely than the alternative, but it's a entirely reasonable hypothesis.
Plausible. The word you are looking for is plausible, and only one of the two agencies (out of more than a dozen) said even that. Only one said plausible, not two. And "plausible" does not mean, in any way, "leaning toward." So no.
It would have been a pretty straightforward response, with maybe fewer statistics.
Increase production of masks and provide temporary assistance for people who would otherwise spread it at work. The most we could say tech did was allocate just-in-time logistics outputs a little more efficiently, except that just-in-time logistics were only a result of the tech you're referring to. We would still have contract warehouses across the country and world that could store and distribute basic provisions with a little bit of planning.
Abundant visibility breeds a sense of abundant control, but when that sense goes beyond material reality we see technocrats fail.
Vaccines did help a lot, that's true, but that success doesn't really generalize across tech per se.
> The most we could say tech did was allocate just-in-time logistics outputs a little more efficiently
No, the most we could say is millions of families could stay in touch across high-bandwidth lines; online shopping was already a reality that needed a bit of a scale up; secured remote work was possible for many millions of people; etc etc. Try and imagine the same pandemic, but no one has a laptop, all shopping is done in person apart from what Amazon already stocks, and that's only in a few countries, most things are paper based, and every home has sub-1Mb internet and no way of doing a video call at the push of a button.
It would be the same only lockdowns would be rejected even more than they were. The only reason people complied with lockdowns as much as they did is because they had the warm glowing warming glow of a screen in front of them supplying endless entertainment, information, and distraction. Take that away and people wouldn't comply with lockdowns the way they did.
Amazing how the countries without all those amenities ever made it out alive... you're conflating dependence on those technologies with necessity of those technologies.
> Amazing how the countries without all those amenities ever made it out alive... you're conflating dependence on those technologies with necessity of those technologies
Can you say where in this sentence I said that they were necessary for existence?
> Imagine a pandemic without the comms and tech invented in the last 15 years.
> millions of families could stay in touch across high-bandwidth lines; online shopping was already a reality that needed a bit of a scale up; secured remote work was possible for many millions of people
Bringing these up in the discussion in this way implies that these outcomes were unachievable "without the comms and tech invented in the last 15 years". I'm not so interested in some pedantic retroactive interpretation of your comment when it's clear from the tone and presentation that you intended this to be some reach toward a techno-utopian narrative of progress. My point is that that narrative hides within it the absurd notion that we can't do without these things, when obviously we can and it is really quite easy to imagine a pandemic being addressed adequately without those things in place.
Indeed, in many ways some aspects of the response would have been more robust. I brought up just-in-time because the lack of inventory as a buffer (of masks in particular, but also basic necessities) was a huge driver of the virality in the early days. With such a buffer it's feasible, plausible even, that we could have weathered the early days more easily, anxious people could have had less severe overreactions to mask mandates and the like, and the total death count could have been 10% or even 1% of what it turned out to be without adequate reserves in place.
TESCREAL is one of the things I point to when people say e.g. "Just because I dress in a suit doesn't mean I'm well put together in my daily life."
Of course wearing a suit and having one's life in order aren't strictly causally related. But the much weaker claim that they are positively correlated is one I'd be surprised to see disproved.
Similarly, I find it hard to imagine someone who is T, E, S, C, R, and EA, but not L. They're out there, but all 7 ideas co-occur so often that they're an outlier among people who identify with, say, at least 4 of the 7 terms. There are many more people who are straightforwardly all 7/7 than there are 6/7, even though there are seven times as many ways to be 6/7 (C(7,6) = 7 ways to drop exactly one letter, versus one way to hold all seven).
Framed this way, it makes it genuinely impressive that the Big Five personality traits don't appear very correlated with one another.
(I'm not sure I fall under any of the 7 anymore, go figure. Maybe I'll come back one day. Credit where credit is due: The Russian Cosmist imperative to not just make everyone alive immortal, but resurrect everyone that ever died stirs my soul in a way few things have.)
Iain M. Banks' novel Surface Detail (light spoiler follows) speculates on how resurrection might not be pleasant for everyone involved.
If we had the technology to resurrect people today, can you imagine how valuable it would be for military and intelligence-gathering purposes? Or skirting around labor laws? (see Lena by qntm). How technology is used is always determined by our political and economic regime.
His blog post mentions people influenced (and misguided) by reading only American Sci-Fi. What are some examples of good "worldly" Sci-Fi? Or, good non-US Sci-Fi authors? Should I just check the Hugo and Nebula awards? I would like to expand my Sci-Fi palate.
The modern answer to that is probably books like "The Three-Body Problem". Old French science fiction can also be excellent, e.g. anything by Barjavel. Not to mention there are always UK outliers, like Douglas Adams.
But personally, I'd say Bande dessinée, the European cousin of comic books, is where you will find the most untapped underrated wonders.
Anything by Moebius or Jodorowsky, like "L'Incal" or "La Caste des Méta-Barons", is truly special. The aesthetics have a lot of personality, the writing is messed up, and the world-building is all kinds of twisted and provoking.
The main problem is that they sometimes take you for a whole run and then end up nowhere. But the ride is awesome, and some of it has left scars inside me, I'm not kidding. And it has been translated into English.
Even without aiming for extremes, you'll find all kind of fun stuff.
Like "Aldébaran" that really sells this sense of discovery of new exotic places with a rag tag expedition, like African jungles in the past, but on new planets. It's a big saga too.
"Sillage" that packs your read with a futuristic black window amidst a galactic political turmoil.
Or Bilal's Immortal trilogy, which gives the Egyptian-alien concept some wind.
Or "Le gipsy" for a no brain adventure with a truck driver coasting on a road that wraps around the entire world.
It's crazy to me that, given the lack of creativity in Hollywood, they scrape everything they can from books and even manga or video games, but rarely touch this treasure trove.
Plus, those SF masterpieces have equally good fantasy sisters.
A problem with Barjavel - as I remember it - is the stylization of the stories: there is little detail, little background, little world ambiance in these books. Just one big story. And that makes them parables or something - exercises about ONE idea. And not worlds.
In something like L'Incal on the contrary, the world around the story is very rich. In the way the Neuromancer world is rich. There is not JUST the story going on. It goes on in front and in the middle of a richness of OTHER ideas. Which is to me more thought-provoking because less artificial.
I’ve listened to From the Earth to the Moon and Twenty Thousand Leagues Under the Seas, and between the lists of things that mean nothing to me[0] and the humans being unrealistic[1], I have to assume something was lost in the translation, because they… are just not any good.
[0] 10 kinds of seaweed? I think?
[1] I spent my life betting against you and losing! / I challenge you to a duel! - followed by both parties getting distracted before the duel and being found chasing butterflies. Was this a secret code for them pretending to hate each other so they could have a gay affair back when such things were forbidden? Because that would make a lot more sense than the actual words in the audiobook.
Both the language and the day-to-day life have aged, and not in a good way. (Shockingly, The Three Musketeers' language and concerns haven't! It's still great fun. But not sci-fi.)
You should definitely check out Stanisław Lem [1][2]. Some details may get lost in translation, but it's still worth getting to know this Polish writer's works.
With the note that it's explicitly parodying sci-fi, The Hitchhiker's Guide (Douglas Adams is British) is a fun read no matter the medium you can find it in (except for the movie and possibly the adventure game).
One of the most ageless stories I've ever read. I revisit the radio series every year and it holds up to a remarkable degree.
The Three Body Problem trilogy is fantastic, and originating from China has quite a different feel to western sci-fi. Adrian Tchaikovsky is from the UK and has done some super interesting stuff as well, start with Children of Time.
I would recommend Stanisław Lem. He is Polish and Jewish. Solaris is a great start. There have been two film adaptations of the book, one Soviet and one from Hollywood, and both fail to express its main point: the possibly incomprehensible strangeness of alien intelligence. I would compare the book to Peter Watts' Blindsight in this regard.
MacLeod’s The Star Fraction is still unsurpassed for me in political sci-fi. It’s the kind of novel that can only be written by someone born in Europe, with the lived experience of wars being reflections of reflections of reflections of old conflicts, warped and folded and turned inside out. It’s a wild read.
Reading MacLeod's Fall Revolution books as a teen 20-ish years ago blew my fucking mind. I had barely half a clue about the historical and political references he was building on (and making what I assume were very clever wry jokes about) but it took me somewhere else--somewhere else entirely. Somewhere very different from the chrome-plated Jonathan Swift kind of stuff I'd got my hands on before then (modulo some outliers like Monica Hughes, who tended to be more "abandon technology, return to monke").
The first Ken MacLeod book I bought, circa 1998, was The Cassini Division, solely because it had a shiny embossed cover of a robot shooting lasers, which of course had no relation to the book's contents.
Ken Liu is American, though he has many translations of Chinese authors that are worth checking out. His own writing is good and has some obvious Chinese influences, but I wouldn't properly characterize it as Chinese.
You may be mixing him up with Liu Cixin whose work he has translated.
While he's focusing on (slightly older) US sci-fi, I think he's talking about all sci-fi (he himself is a modern British sci-fi writer, and he seems to be including himself in the list of people that weird billionaires shouldn't be taking influence from). In terms of good non-US sci-fi, a few I like:
- Iain M Banks (his non-scifi stuff, as Iain Banks, was also good)
- Alastair Reynolds
- Ken MacLeod
- Peter Hamilton (mixed feelings about this recommendation; there are aspects of his work I find extremely irritating, but he is _very_ good at _properly_ alien aliens)
- Charlie Stross (the author of the article)
There are plenty of good American sci-fi writers as well, of course. I think it would be fair to say that British sci-fi writers tend to be more cynical/less utopian than Americans, though this isn't as pronounced as it used to be (American sci-fi is no longer as "wow! spaceships!" as it used to be).
The Wormwood Trilogy by Tade Thompson[1] blew my mind. Most of the story takes place in Nigeria. The author was born in the UK but grew up in Nigeria.
Tamsyn Muir[2] is from New Zealand. Her Locked Tomb series is sci-fi/fantasy/horror but ... funny? The books get more out there as the series progresses.
I know Muir was nominated for / won a few Hugos so yeah I bet your idea of checking those awards is a good one.
The problem with Hugo and Nebula is that they’re still pretty American or American adjacent. But I would suggest more anthologies than novels, like “Africa Risen: A New Era of Speculative Fiction” which is all African sci-fi and fantasy, and “New Voices in Chinese Science Fiction” which is what it says it is.
There is a collection of shorts by Chinese (mostly women) authors called Sinopticon that I enjoyed. It's nice to read things that don't follow any of the tropes I'm used to, or at least use them in very different ways.
There is a very different (but less popular) direction in American sci-fi that is most prominently represented by Kurt Vonnegut, although he tends to depict a much closer future than the likes of Heinlein.
Check out sci-fi subgenres like Afrofuturism. I remember The Ear, the Eye and the Arm by Nancy Farmer being a great intro to the genre; it was on my middle school summer reading list.
Simon Stålenhag: Tales from the Loop and what followed. Sweden. High tech, AI, robotics, goo, etc. spill into and leave debris all over the "real world". The real world both works on these projects and randomly "encounters" the spillouts.
Check out Ursula K. Le Guin. One of her stories, The Ones Who Walk Away From Omelas, was recently adapted into an episode of Star Trek: Strange New Worlds.
Arkady and Boris Strugatsky. If you start with their earliest work, like "The Land of Crimson Clouds", you can get a feel for how communist sci-fi looked. If you read later stuff, like "Tale of the Troika", you get a story humorously criticizing Soviet bureaucracy. Then there is pretty serious stuff, like "The Doomed City" or "Ugly Swans". You probably know "Roadside Picnic"; its interpretations in film and gaming are known as "Stalker".
> His blog post mentions people influenced (and misguided) by reading only American Sci-Fi. What are some examples of good "worldly" Sci-Fi?
I think you're missing the point. The article is about reading any sci-fi the wrong way ("Because we invented the Torment Nexus as a cautionary tale and they took it at face value and decided to implement it for real."). He tilts a lot at American sci-fi of a certain era, because that's what influenced the current crop of tech billionaires who are making that mistake to great effect.
I've read a couple of Stanisław Lem and Iain Banks novels (mentioned as good "worldly" sci-fi in sibling comments), and I can recall, just off the top of my head, a couple of ideas that I wouldn't want some idiot techie taking at face value and deciding to implement.
The key is: read whatever, but keep in mind sci-fi is fantasy written for entertainment value. Don't take it too seriously, don't build your dreams out of it, and read other stuff too.
The thrust of this article is its final sentence: "that's why I think you should always be wary of SF writers bearing ideas."
Be careful of non-approved thinking!
Also strange is how caring about the future is dastardly 'longtermism', but caring enough about this generation to work on longevity is 'insane'. Really the only acceptable goals are averting climate catastrophe (presumably by stopping oil and gas consumption) and helping the global poor. It would be interesting to know whether the author thinks the global poor should be prevented from using fossil fuels to better their immediate economic situation at the expense of our long-term climate.
> Also strange is how caring about the future is dastardly 'longtermism'...
I think you misunderstood. The dastardly thing isn't caring about the future, it's using your supposed care about the future to neglect and damage the present. And that care isn't really care, it's just working to implement some perhaps unrealizable sci-fi fantasy.
Most of this tracks, except for picking Gerard O'Neill as the source of Bezos's dystopian futurism. O'Neill was a real-deal physicist: he was the first to theorize that you could use storage rings as the basis for particle accelerators. He spent significant time doing real research into space colonization. He wrote some books, sure, but they were more to promote his ideas than to fit the core thesis of Stross: that science-fiction technologies are written primarily as set dressing to facilitate the story.
He might not agree with O'Neill's ideas on space colonization, but it does not seem fair to paint them with the same brush as his metaphorical Torment Nexus.
> In reality, aircraft autopilots don't do what most people think they do (they require constant monitoring by pilots)
Tangent: what does an airplane's autopilot do? I always assumed it was basically doing lane-keeping within virtual flight-path lanes we've drawn in the sky. (Like the virtual 3D traffic grid above cities in depictions of cities with flying cars, but spread across the Atlantic.)
Follows heading and speed instructions, newer models can also be programmed using the flight plan to follow more complex directions. Autoland is also a thing now. Commercial aviation is a gigantic set of checklists, autopilot progressively does more of the checklist items.
Autopilot is hilariously a fantastic name for the Tesla system. If people knew anything about how commercial aviation systems actually work, they would understand this
Sounds more like very simple cruise control, at least for the main leg of the flight.
Though I suppose "follows heading instructions" can be complex? Does an autopilot "lane keep", in the sense of issuing corrections after hitting a turbulence "pothole", to return it to the flight path? Or does it just beep and shut off, telling the pilot to fix the heading?
One thing I would expect a "sufficiently-advanced autopilot" to do would be to fly under/over/around large regions of turbulence on its own, and then return to the original heading. Like driving around a large pothole, but the pothole can be 30 minutes of flight time large. (I'm guessing this technology exists, but currently only in military aviation for long-distance fly-by-wire UAV "swarms", not in commercial aviation for piloted aircraft.)
Also, is there any actual logic for obstacle avoidance? If you steer two autopiloted planes toward each-other, do they "take evasive maneuvers", or do they just beep very loudly to get the pilots to do that?
...also, is the on-the-ground taxi-ing process automated? Because that seems like drudge-work that really could be automated — airports really seem like they have enough information about where everything is on the ground, and enough very strict rules about precedence when driving around the tarmac. (Heck, in theory the airport could centrally control taxi-ing with little tugboat-like robots that hug each plane's front wheel — I think I've seen this before?) And automating taxi-ing before take-off in particular, would free up some extremely critical time that could be used to be more thorough with preflight inspection.
Autopilot will not automatically route itself around turbulence; that would be a path entered by the pilots. It can correct for being blown off course. Autopilot is by necessity a direction-following system; it must not make its own choices about navigation.
In commercial flight, collisions are incredibly rare and unlikely, and there are already systems in place to avoid them, but there are separate systems to warn about proximity. Essentially, there is a transponder system that alerts with increasing frequency at closer distances.
Taxiing is not automated, and only older planes need to be towed with tug trucks. The idea behind autopilot as it exists is to lower the pilots' workload so that they have more mental capacity available if something serious happens. Taxiing is a much more limited problem space, so there's limited payoff in automating it.
For the record, I'm not a pilot, just an enthusiast. The systems are all pretty well documented if you're interested.
> Sounds more like very simple cruise control, at least for the main leg of the flight.
And it is.
Although it has already been abused to hell and back; see "children of the magenta", ca. 1997.
Turbulence is undetectable, TCAS is shit, and there is no replacement for a warm butt in the chair that hopefully understands how the thing handles, especially in the hell on the ground that major airports are.
Not inherently, and we're getting there. See e.g. https://www.mdpi.com/1424-8220/18/3/798 for the use of time-of-flight sensor sampling (i.e. a depth-of-field camera) to build a picture of local turbulence (in water rather than air, but the principle translates.)
Not inherently, but not practical either. The paper talks about refractive index, and while that may be useful for liquids, it does nothing for a low-pressure, low-density gas like air. Eh, in water I think phased-array sonar could depict turbulence better than an optical system. In air, no such luck. It's just the centuries-old rule: do not fly into clouds, and otherwise hope the plane doesn't break apart.
> Sounds more like very simple cruise control, at least for the main leg of the flight.
The cruise control (keeping the speed) is the autothrottle.
> Does an autopilot "lane keep", in the sense of issuing corrections after hitting a turbulence "pothole", to return it to the flight path?
AFAIK, it depends on the intensity of the turbulence. If it's not too strong, it will maintain the set heading (not flight path) and altitude, issuing corrections as necessary. If the turbulence is too strong for it to correct, it'll disengage (with a beep and blinking lights) and let the pilot deal with it.
> One thing I would expect "sufficiently-advanced autopilot" to do, would be to fly under/over/around large regions of turbulence on its own, and to then return to the original heading.
That's the pilot's role, the pilot has to give the autopilot instructions on where to go to avoid the turbulence (and the pilot is the one who has to find the turbulence in the first place, by using the weather radar and reports through the radio from the ground or from other airplanes, and even then there are kinds of turbulence that are basically invisible).
> Also, is there any actual logic for obstacle avoidance? If you steer two autopiloted planes toward each-other, do they "take evasive maneuvers", or do they just beep very loudly to get the pilots to do that?
That's called TCAS: both airplanes talk to each other and decide on a course of action (which one should climb and which one should descend), and they tell the pilots (through a pre-recorded voice) what they should do. The pilots should immediately override the autopilot (there's a conveniently placed button to do that) and fly manually. AFAIK, no autopilot does that automatically for now.
There are also terrain warnings (either through a downward-facing radar or through a database of terrain heights); I don't know whether there are autopilots that automatically climb when a collision with terrain is imminent and the pilot doesn't react to the aural warnings in time.
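For the curious, here's a toy sketch in Python of the coordination idea, NOT the actual TCAS II logic (which uses closure-rate thresholds, sensitivity levels, and transponder coordination interrogations); every name and rule below is made up for illustration:

    from dataclasses import dataclass

    @dataclass
    class Aircraft:
        callsign: str
        altitude_ft: float

    def resolution_advisory(own: Aircraft, intruder: Aircraft) -> str:
        # Both units must reach complementary answers without negotiating:
        # here, the higher aircraft climbs and the lower one descends, with
        # ties broken by callsign so both sides compute the same result.
        if own.altitude_ft > intruder.altitude_ft:
            return "CLIMB, CLIMB"
        if own.altitude_ft < intruder.altitude_ft:
            return "DESCEND, DESCEND"
        return "CLIMB, CLIMB" if own.callsign < intruder.callsign else "DESCEND, DESCEND"

    a = Aircraft("ABC123", 35_000)
    b = Aircraft("XYZ789", 34_600)
    print(resolution_advisory(a, b))  # CLIMB, CLIMB (a is higher)
    print(resolution_advisory(b, a))  # DESCEND, DESCEND (complementary)

The real system also coordinates through the transponder link, so a late reversal by one side can't leave both aircraft maneuvering the same way.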
> Autopilot is hilariously a fantastic name for the Tesla system. If people knew anything about how commercial aviation systems actually work, they would understand this
This seems pretty contradictory. We require a lot of expensive training and certification before we declare someone a pilot and let them fly a plane with autopilot. Knowing that, I would suggest that using the term 'autopilot' in a car that can be driven by untrained individuals is probably the wrong choice. We should use terminology that is crystal clear to Joe Average.
Drivers have driver's licenses. Pilots have pilot's licenses. Also, any idiot can take flight lessons and fly a plane with basic autopilot, no license required.
> Tangent: what does an airplane's autopilot do? I always assumed it was basically doing lane-keeping within virtual flight-path lanes we've drawn in the sky.
It's even simpler than that: a basic autopilot can keep a set heading and altitude without constant adjustments from the pilot, and a basic autothrottle can keep a set speed without constant adjustment of the throttle from the pilot. The pilot gives them simple instructions like "heading 123" and "altitude 1234" and "speed 200" through buttons on the aircraft's panel. This allows for hands-free flying (the pilot doesn't have to hold the controls all the time when the autopilot and autothrottle are active).
More advanced autopilots can follow parts of the flight plan, like "fly to waypoint ABC then to waypoint DEF then to waypoint GHI", automatically switching to the next waypoint every time one is reached, but even then the pilot still has to give it instructions regularly (for instance, when a controller tells the pilot through the radio to use a specific altitude, the pilot has to set that altitude in the autopilot). And, of course, the pilot still has to be ready to take control at a moment's notice; whenever the autopilot sees something it doesn't like, it deactivates (with a warning tone and blinking lights).
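To make "heading 123" concrete, here's a toy heading-hold loop in Python: a proportional controller that nudges the bank angle toward the selected heading and hands control back to the pilot when the upset is beyond its authority. Real autopilots are certified multi-loop control systems; every gain and threshold below is invented for illustration.

    def heading_error(current_hdg: float, target_hdg: float) -> float:
        # Signed error in degrees, in (-180, 180], so 350 -> 10 is a
        # short right turn rather than a 340-degree left one.
        return (target_hdg - current_hdg + 180.0) % 360.0 - 180.0

    def heading_hold(current_hdg: float, target_hdg: float,
                     gain: float = 0.5, max_bank_deg: float = 25.0):
        # Returns a commanded bank angle, or None to disengage.
        err = heading_error(current_hdg, target_hdg)
        if abs(err) > 90.0:
            # Crude stand-in for the real disengage criteria: if the
            # upset exceeds what gentle banking can fix, beep, blink,
            # and hand the airplane back to the pilot.
            return None
        return max(-max_bank_deg, min(max_bank_deg, gain * err))

    print(heading_hold(115.0, 123.0))  # 4.0: gentle right bank toward 123
    print(heading_hold(10.0, 350.0))   # -10.0: wraps the compass correctly

The altitude-hold and autothrottle loops have the same shape: measure the error against the selected value, command a small correction, and give up loudly when the error exceeds the system's authority.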
> (Longtermism is the belief that we should discount short-term harms to real existing human beings—such as human-induced climate change—if it brings us closer to the goal of colonizing the universe, because the needs of trillions of future people who don't actually exist yet obviously outweigh the needs of today's global poor.)
> Finally, I haven't really described Rationalism. It's a rather weird internet mediated cult that has congealed around philosopher of AI Eliezer Yudkowski over the past decade or so. Yudkowski has taken on board the idea of the AI Singularity—that we will achieve human-equivalent intelligence in a can, [...] and terrified himself with visions of paperclip maximizers, AIs programmed to turn the entire universe into paperclips [...] with maximum efficiency.
Here's your shipment of cheap strawmen, where do you want them delivered?
Agree regarding the description of rationalism being a little dramatic, but if anything, the description of Longtermism is generous. As a movement it's a lot closer to religion than philosophy.
> ...a belief in psi powers implicitly supports an ideology of racial supremacy, and indeed, that's about the only explanation I can see for Campbell's publication of the weirder stories of A. E. Van Vogt.
Could somebody help me with this? I'd like to understand it.
In many stories with "magic", the magic potential of a person is passed down through bloodlines. So one could stretch this concept into a thinly veiled cover for racist beliefs that some bloodlines/races (Wizards, Jedi, Aryans, etc) are better than others. The better system IMO is where "magic" is something that a person picks up through hard work and training rather than something that they're born with. But then this turns into a debate about whether IQ is nature vs. nurture.
"PSI powers" is a stand-in for special people with special powers that cannot be learned or transferred except through blood inheritance. The literature that invokes this trope often also makes the mistake of having the special people with special powers be oppressed by everyone else. Taken together, this is the "master race theory" with the serial numbers filed off - i.e. a group of people who are objectively better people being held back by subhumans.
The reason why I call this a mistake is that humanity doesn't work this way. At all. Humanity does have superpowers - i.e. language, opposable thumbs, supra-tribal social structures - but these are all widespread if not universal. Differences in skill are functions of specialization and training and disability is compensated for rather than shunned.
Some literature is smart enough to understand this and makes it very explicit that thinking your special powers make you inherently better than everyone else is a bad thing. The villain of the X-Men comics is Magneto and not Professor X for this reason. Harry Potter goes unusually deep[0] in explaining how wizarding society is cursed with all sorts of horrible fantasy racism, in ways I don't think a lot of HP fans have fully cognized[1]. You have the antagonist house Slytherin, full of wizards who coined the term "mudblood" to denigrate wizards who were born to muggle parents; the general shitty attitude that a lot of wizards (even Dumbledore) have to muggles; the fact that non-human sentient magical creatures are either treated poorly (goblins, giants) or outright enslaved (house elves); etc. Books that make this explicit avoid falling into the trap of magical fascism, but a lot of this literature just never bothers, either because the writers are bad at their job, or because they very explicitly believe the bullshit they're writing.
Yes, there's a leap of logic here. The original writer probably should have included some of the steps they omitted to get from "PSI powers" to "racial supremacy".
[0] Relative to J.K. Rowling's generally narrow and very England-centric reference pool.
[1] If they haven't already sworn off Harry Potter because, well, y'know... J.K. Rowling.
> I don't think a lot of HP fans have fully cognized
To be fair, this seems as much the author's fault. She seems to have cribbed notes from a lot of smarter literature without truly thinking through the consequences. Many of her subsequent essays and writings make it clear that she never actually meant to criticize magical-fascism tropes and is more inclined in real life to side with "magical fascism", and she may not have fully cognized the elements of her own writing that attracted some of her original audience (many of whom are exactly the ones who have sworn off the franchise because of her subsequent actions and those essays/writings).
A. E. Van Vogt wrote a lot of super-beings with psychic powers who in his "utopian" fiction were destined, biologically superior ruling classes, and in his "dystopian" fiction were misunderstood superior beings who would have been destined, biologically superior ruling classes but were instead killed for their psychic powers because they scared ordinary humans too much. In almost all cases, Van Vogt's much-storied psychic powers were genetic and defined a new race more "elevated" than baseline Homo sapiens.
Also, I think Stross overlooked another Campbellian thread that tied A. E. Van Vogt to Campbell's publications: Van Vogt wasn't as central to Dianetics as L. Ron Hubbard, but he did play a major role in pre-Scientology Dianetics (per Wikipedia, that included early involvement in some of the "alternative accounting" tricks that kept the company/franchise out of a couple of bankruptcies and eventually led to Scientology, though he was semi-retired by the time it started to turn into that).
> raw book manuscripts are about as appetizing as a raw animal carcass, they take a lot of work to make them appealing
Yes and no. I've read perfectly good manuscripts (because their authors did put a lot of work into them) before they went through the pipe of traditional publishing. And then I re-read them at the other end. They weren't any better, just shorter and more mainstream. Also, they didn't sell well at all, no matter how trad-published they were, because, guess what, neither the author nor the trad publisher was very good at marketing in this day and age :/ .
> Which wouldn't matter, except a whole bunch of billionaires are in the headlines right now because they pay too much attention to people like me.
While I think Stross is right on the money with this article, I would hope that the 'like' is doing a lot of heavy lifting here; last thing we need is some idiot billionaire summoning demons (as was, indeed, basically the plot of at least one Laundry Files book...)
> And no tour of the idiocracy is complete without mentioning Mark Zuckerberg, billionaire CEO of Facebook, who blew through ten billion dollars trying to create the Metaverse from Neal Stephenson's novel Snow Crash
IIRC Zuckerberg actually claimed to have been inspired by Rainbows End. Presumably, in some alternate universe, taking inspiration from Vernor Vinge's other, better books, Facebook instead spent billions trying to make irritating talking spiders.
Stross has also written many books firmly in the Sci-fi category without the Lovecraft. I particularly enjoyed Glasshouse, and his other works explore the idea of singularity, far future economy, and also what a non-singular future might look like.
A lot of blame seems to be put on the Americans, but when I think of Scifi authors, I'm often thinking of Douglas Adams, Iain M Banks, Alastair Reynolds, Stross himself, Adrian Tchaikovsky, Richard K Morgan. The Brits have a lot to answer for too!
I've always wanted to write to Vinge and pick his brain about Rainbows End, because of all his novels (that I've read, anyway) it seems like the one that you could just about extrapolate from current tech and society... but it's also a bit unclear how much of this plausible future he thinks is a good idea, and how much is bad.
> IIRC Zuckerberg actually claimed to have been inspired by Rainbows End. Presumably, in some alternate universe, taking inspiration from Vernor Vinge's other, better books, Facebook instead spent billions trying to make irritating talking spiders.
We should all be grateful he didn't think weaponised autism was a brilliant idea.
Perhaps just as usefully, calling it idiocy is not how you are going to change things. What could change things?
- "There should be a law" (1) really never solves anything, (2) pushes stuff in the next jurisdiction.
See attempts at killing generally available cryptography and Cypherpunks. Genuine success by the techies there. Law / policy is insufficient by itself.
- Social media does get people thinking. A little. "The press should..." yeah maybe but the press doesn't so we do what we can.
- Scifi authors write scifi. It's what they do. And essays sell sci-fi. And their essays are broadly commented on: THEY have a voice. Not going to change.
- Outfunding the good while dis-incentivizing the bad might work: This is a field with massive ongoing progress that people are legitimately proud of. If these (many but finite number of) people can work "in the right direction", then they work less "in the wrong direction". Government funding might be the rare thing that can outspend advertising funding. Still, many people will chase AI work directly out of the large military budgets, in many countries with varied political systems. Meaning lots of people working in these directions no matter what. Meaning lots of spillovers from that no matter what else.
- Having the better AIs beat up the worse AIs? Hmmm, I don't want to be caught in the middle of this. Actually this is a field for sci-fi authors to plow some effort into. And for AI game theorists? What does this look like?
- Having the better AI fields be massively more profitable than the "bad" - so that the talented hopeful head in these directions more readily? Currently advertising AI and medical AI might be the highest profit potential, with military close in there? Self-driving AI probably pays well already now. But most of these run "general AI" labs in the name of "progress is progress".
- More money into "AI ethics"? So far the results are underwhelming but not a reason to stop.
It's not realistic to call for people to continue reading science fiction but not be influenced by it, and it's not realistic to tell people how they should interpret ideas in science fiction. If the author thinks a technology in their books is problematic, someone is always going to say "yes it's a problem in the book, but the problems are part of a fictional narrative, whereas the idea, which is good, transcends that narrative and need not have those problems which were assigned to them arbitrarily by the author".
Pedantically, I wonder about: "the readers Gernsback was cultivating didn't ask about the politics of radio" with this coming back to haunt them in the 1930s.
Both radio enthusiasts and Gernsback himself were talking about the politics of radio and its regulation as early as 1912.
Yeah, US science fiction, and US literature generally, moved in the direction of being increasingly willfully blind to politics, but I believe that's more of a post-war/Cold War trend.
> Longtermism is the belief that we should discount short-term harms to real existing human beings—such as human-induced climate change—if it brings us closer to the goal of colonizing the universe, because the needs of trillions of future people who don't actually exist yet obviously outweigh the needs of today's global poor.
Whenever I see people talking about longtermism this way, I'm a bit confused. I've always thought of climate change being one of the things that longtermists care a lot about. I read 'What We Owe the Future' briefly, and could be misremembering, but I recall it being heavy on climate change being a major issue because of the long-lasting impact it could have. It feels like it became popular to shit on longtermism because some billionaires are into it, and the views of people who it resonates with are carelessly misconstrued.
If the label really is now just for a fuck-earth-colonize-space view, what is the acceptable term for the view that long-term impacts should be a primary concern when allocating resources towards solving problems?
> (He named SpaceX's drone ships after Iain M. Banks spaceships, thereby proving that irony is dead)
I don't get it. The Culture ships ("minds") are not cautionary tales about AI, or space, or anything like that. They're just AI-controlled spaceships. I don't know what point Stross is trying to make with this particular sentence.
EDIT: it's been pointed out to me that this probably pointing out the irony of Musk being a hypercapitalist billionaire, while the Culture is a post-scarcity leftist utopia.
I was confused by this too. I thought the GSVs were pretty neat, and the world of The Culture was generally better for the median person than our world is. By far, probably. I'm aware that those books were originally intended to be a sort of parody of Star Trek's techno-utopianism, but I'll be honest, I'd trade either of those imaginary worlds for ours. Maybe Stross would say I'm credulous.
Stross would point out Banks was an avowed socialist writing about space commies who change gender at will as the good guys, with Musk-style megabillionaires like Veppers as the bad guys.
I'm genuinely not sure how that changes anything. You can appreciate something while disagreeing with the politics of it—or of the author, which are often separate. You can especially like certain aspects of something without liking the politics of the entire thing. I assume that's what happened here.
> Iain certainly wasn’t pro-union in the Culture books. At all.
WTF does this even mean? Does Musk think the people and minds of the Culture were gainfully employed?! You can't be pro-union when there are no employers/employees - that's not even wrong.
Musk clearly has no idea what Banks stood for, because he has deluded himself that his brand of libertarian capitalism is somehow liberatory and progressive when it's just papered-over elitist free market absolutist hogwash, from someone who read too much Ayn Rand as a teenager and never grew out of it.
Just like a lot of folks on this group. Bring on the downvotes! I know you got 'em!
... And because he has the usual North American biases about what socialism -- let alone communism (small c) -- is he can't even recognize a self-avowed communist when he's standing right in front of his face.
Special Circumstances would likely... take care... of someone like Musk.
In that case the irony is strongly in Musk's favour, with the failure of socialism to produce anything useful and the runaway success of a megabillionaire in increasing access to space.
The Soviets were kicking America's ass in the space race up until Kennedy said "Fuck everything, we're doing five blades[0]" and shot for the literal moon. They absolutely could produce useful things, the problem was that those useful things were produced at the expense of other, possibly more useful things. Eventually the US was able to exploit this by electing a crazy person to spook the Soviets into overspending on their military, which is why we survived with merely a huge deficit spending problem rather than extreme rationing and starvation.
However, the gross misallocation of resources done under Soviet communism is not unique to Soviet communism. Capitalists do it all the time, the only difference is that free markets usually punish them for it quicker. But they don't always punish. People like Musk can still spend loads of other people's money on useless bullshit, like Twitter, or sending people to Mars[1].
[1] Mars is a dead rock covered in poison constantly being blasted with radiation. It wants to kill you in more ways than regular ol' space wants to kill you, and making it not kill you is likely impossible given our current technological capability. Anyone going to Mars today will be condemning themselves to living the rest of their lives out in a very small bubble of rad-shielded underground bunker.
The whole universe is trying to kill us so I'm generally in favour of finding new ways to fight against that pernicious form of lazy entropy. If a socialist strategy involves taking other people's food from out their mouths in order to build rockets that's a much worse plan than convincing them to give you their discretionary income in order to fund ventures that they can also pay for and benefit from. Central planning is a fool's errand that harms everyone in the long run.
Stross is employing his skill as a professional fiction writer to bend facts and ideas so hard they loop back on themselves, just to shit on his favorite tech-industry punching bags, whether it makes sense or not. It's both clever and depressing.
I suspect this is in reference to Banks's stated purpose for writing the Culture series.
>He first conceived the Culture in the late 60s, and has developed it ever since. “It’s basically a lot of wish fulfilment, I write about all the things I would like to have,” he says. “I’d had enough of the right-wing US science fiction, so I decided to take it to the left. It’s based around my belief that we can live in a better way, that we have to. So I created my own leftist/liberal world.”
Plenty of right-wingers apparently enjoy Star Trek, despite it being an expression of everything they stand against, literally a universe where humanity abandons capitalism and nationalism for a one world government of woke space communism. They just can't see the progressive subtext (unless it's obvious like "black female captain" or "gay relationship"), it's all just spaceships and pew-pew laser adventure to them.
We're getting into the weeds of semantics but I do think there is some cognitive disconnect in a right-wing/conservative mindset believing a post-scarcity society is even possible, much less desirable.
They see the Federation as Space USA using their superior pew-pew spaceships to enforce Freedom on the unwashed Aliens, many of whom are based on racial stereotypes. Because every planet only has one culture on it, doncha know.
I'd say from a right wing perspective, Star Trek more than makes up for the lack of traditional nationalist and wealth-class hierarchies by featuring an overt all-encompassing military hierarchy. If you think hierarchy is necessary and good, then making it singular, explicit, and mandatory is utopian!
Not for the universe, or even all the humans in the universe, sure. But for most shows I'm familiar with, Starfleet encompasses all of the main characters that the audience identifies with. And even though authority is often bucked for many reasons, that hierarchy is still in the background setting the interpretation of social interactions.
I used to go drinking with Iain regularly (we lived in the same city).
Iain was a hard-left socialist -- in US terms, a communist. He's been dead for a decade now, but I'm pretty certain his opinion of Elon Musk as he is self-revealed today would be unprintable: he hated everything billionaires stood for.
I think it's that they are AI-controlled spaceships that are part of a post-scarcity, hedonistic communist space empire, and an arch-capitalist adoring space communism is ironic?
I imagine he thinks that it's ironic for an ardent capitalist to make references to a book about a socialist paradise, and "irony is dead" means he thinks Musk doesn't recognise the possibility of irony. Or something.
If you want to create a spacefaring socialist (or anything-ist) utopia, first you must amass a giant fortune of Earth money to hire a small army of engineers and mechanics and build rockets to get the hell off the rock and away from the existing moneyworshippers. Blanketing the planet in internet access so that every person can receive the message is a useful side-effect, as well.
Or did you think you can just scream "socialism is better" loud enough to manufacture megatons of stainless steel, methane, and o2?
There's literally no path from here to there that doesn't involve statist (either capitalist or whatever the CCP is calling their economic system presently) gigabucks first. Large and complicated things don't get built on Earth without piles of cash for your staff and to grease the wheels of the state that currently maintains a monopoly on access to orbital weapons and surveillance. You need to get those piles of cash from somewhere, either from selling people something voluntarily, or collecting taxes from them on threat of violence. Welders and machinists and steel mills (to say nothing of rocket engine designers) don't work for free.
Show me a crowning achievement of mankind, and I'll show you a hundred+ billion dollars of 2023-equivalent-value. If you're not in the taxation business, that means you gotta sell stuff.
Then step one is to enslave an entire nation of people to force them to pay for your megaproject. Given that the existing nations large enough to do such projects in a reasonable timeframe are disinterested in anything that doesn't expand or perpetuate their dominion over territory solely on Earth (even the Apollo program was an anti-USSR thing), that means either going to war with one of them (again, gigabucks required to begin, along with a willingness to engage in large scale violence) or starting a new one (which takes generations, which is too slow given the rate of Earth's current inhabitants actively incinerating their existing society).
Not enough people accept cryptocurrency to amass usable gigabucks to hire an army; the existing issuers of money already thought of that avenue even before Satoshi. Your assets as a private citizen can never be used to raise an army, no matter how wealthy you become, as your bank balance exists only as long as the state says it exists. (This is why Musk says Putin is the richest person in the world and not himself. True wealth is not measured in USD or RMB or even kilograms of gold, but in opportunity.)
That only leaves the current plan, which is to delay that incineration, and develop the technology to escape independent of (although naturally with the tacit approval of) one of the existing states.
First humans born in space in meaningful numbers will not be citizens of an Earth country at all. Your comment is a red herring.
Sending a few hundred people into low Earth orbit is not "making it to space" in the sense being discussed in this thread. We're talking about a million humans living not-on-Earth.
No Earth government gives a fuck about that. They became governments because they enjoy lording over hundreds of millions of people here and now.
If you believe any significant numbers of humans can congregate anywhere in physical reality without forming recognizable government structures, you're in for a real disappointment.
I think this confuses people who want the singularity to happen (accelerationists) vs people who think it might happen but must be prevented, or at least delayed until people can figure out how to do it correctly, and if that's not possible, we should cut our losses and live in a world that's not great, but is at least not maximally bad.
The real problem I see is that even those with the greatest power proclaim to be powerless, and their biggest endeavor is to finally escape all responsibility. Meaning, if chosen politicians are in it for 4 years, then after that, who cares? There is nobody at the helm at any given moment. Meaning, congrats to the anarchists.
Ah, Stross has the TESCREAL brainworms. I'd been wondering how long that would take.
(I'm starting to really feel the point of demanding justice through violence. Some people are only willing to let others live at the moment their face hits the pavement.)
Am I the only one who found it very odd that someone who isn't from a euro-using country gives the net worth of a bunch of other people from non-euro-using countries in euros?
Interesting read, but to me this essay projects an attitude "of course my particular brand of progressive politics* is correct, I don't need to justify it, it's obvious to all right-minded people", and I find that to be grating. The author rails against all sorts of various viewpoints and movements that he thinks are dangerous, ranging from transhumanism to fascism to right-libertarianism to accelerationism to rationalism to tech industry hype. And, while I agree that these movements can be dangerous, some obviously so, some less obviously, I think that the essay goes overboard in trying to paint them as a connected monolith of undiluted bad, despite the only tenuous relations that many of them have with each other.
There is a lot of good that can come out of some of these movements. Sure, a large part of rationalism is very cult-like and silly and I myself not infrequently make fun of it for that. But there are also many interesting non cult-like thinkers in that community. I try to take the good where I can find it. As for tech billionaires, well I think that for example Elon Musk is a scam artist and a fraud for his robotaxi hype, but that does not negate the fact that his spacecraft seem to work well and are an interesting move forward in humanity's exploration of space. I think that there has been a lot of unrealistic over-optimism about the pace of AI technology development and we are probably not anywhere close to general artificial intelligence, and I certainly don't think that a tech singularity is right around the corner, but that does not negate the fact that there have been some genuine huge steps forward in AI in the last decade and there are many constructive and beneficial things that this technology can do.
If humanity consistently thought like Charlie Stross does, I am not sure that we would be out of the Stone Age yet.
*I say particular brand because not even all progressives agree with him. There are progressive rationalists, progressive transhumanists, etc.
The elephants in the room are 1984 and Brave New World. Not all science fiction is about technology, but it is usually about the future. And some of the stories written as warnings end up being inspiration for some players, even when they're not especially original but became popular enough to reach the right (or wrong) people.
I agree with Stross's admonition, "always be wary of SF writers bearing ideas", and believe it doubly-so when they are opining on actual science, social science, & politics – areas where understanding may not correlate, and may even anti-correlate, with the imaginative talents that make a good science fiction author.
Technologists inspired by SF, even dystopian SF, don't want the dystopia, but the other advances pictured with the abuses or problems elided or checked.
And, having seen one or more renditions of the dystopic aspects, have extra confidence (sometimes but not always misplaced) that by referencing & even popularizing the inspirational works, more people will work on a better version of what was depicted.
Even dystopic fiction, and cautionary talks like Stross's here, are thus often, operationally, accelerative. "Well, now we've read the warning insert, & recommended it to everyone who'll listen, let's assemble this stuff without all those foreseeable mistakes."
(Compare: Truffaut's observation: "It is impossible to make an anti-war film." The mere exercise of converting it to a narrative, in that medium, creates an artifact that many people, & time, tends to transform into a glorification of related themes & extremes.)
Stasis where we are now, without radical new efficiencies (tech), would lead to a relatively short & nasty confined run for the human race. (And that's if it were even possible, which it's not without an oppressive/suppressive unitary world government.)
Degrowth and/or a return to a mythic nostalgic pastoralism would kill billions. (And is equally impossible without an oppressive/suppressive unitary world government.)
Even socialism/communism draws heavily from the hand-wavy extrapolative imaginings of Marx – a political economy science fiction writer, whose historical determinism prefigured 'dark singularitarian' Eliezer Yudkowsky's certainty "we're all gonna die". (Neither of these totalizing visions seem to yield much when the real world & real history prove more complicated.)
It's up-or-out for humanity – and so many people realizing that is why science fiction is more popular than ever, wherever it falls on the utopic-dystopic spectrum, as our best guide to finding the sweet spots, & avoiding the foreseeable problems, of a big and ever-changing future.
Exactly. Reminds me of the quote from genius modernist sculptor Naum Gabo: "Not to lie about the future is impossible and one can lie about it at will."
I truly appreciate the opinionated and detailed blog post, but maybe we should start calling Silicon Valley billionaires, and their endeavors, the Nexus Of Torments for politically-minded liberal arts majors.
I mean, this is a personal blog, so I guess bias is the point, but the acknowledgement that all political philosophies are influenced by the time in which they were created is missing (except for the one time he apologetically admits to reading one of the books of bad publishers).
This blogger completely misses the irony of having an extremely mainstream Californian outlook, probably because it's not the version that billionaire men like.
> of having an extremely mainstream Californian outlook
One can have the outlook without the residency.
And what is that outlook? Certainly not what CS imagines it to be, e.g. "uncritical technological boosterism and the desire to get rich quick." If one wants the real California ideology, one need only look at Gavin Newsom and the Democratic Party supermajorities of the past decade-plus. Or at the policies enacted by San Fran, LA, and other major California cities in recent years.
Scottish national policy as of late hasn't been too far off this mark. ;-)
From the perspective of a backwoods redneck, all these "elite" types think, act, and look the same. They've taken California with them wherever they go. It's a state of mind.
I would say the difference is the level of awareness and context for the two viewpoints. The author is more aware of what the other authors were really saying. The billionaires couldn’t do what they are doing if they were that aware, so instead they can’t help but grab significantly more out-of-context aspects of interesting worldviews and tie them together just enough to mask their limited time and interest in a more considered perspective.
If the author were advocating for anticompetitive business practices and trying to use them to further his science fiction financial success based on a perusal of recent management texts, that is where we could say the same thing about the author that we can say in this context about the billionaires.
I am not especially fond of this. Reactions, in no particular order:
1) Pulling Ayn Rand in just to bash on her is both a stretch to what might be called science fiction and reeks of simply burning an effigy.
2) I think we're finally at the tipping point where the left has switched from relentlessly working some kind of reference to Trump into their ongoing outrage, or simply as a tribe-signaling device (see "Trump Derangement Syndrome") and on to the injection of Musk. Call it "Conservation of Goldstein."
3) Mentions Gernsback and not the most excellent sci-fi story "The Gernsback Continuum"? Poor showing.
4) Cory Doctorow's feelings about "The Cold Equations" are a real froth-fest and reek of a different sort of irrational optimism, the avoidance of what are called "lifeboat politics." I would avoid citing him on that particular issue.
5) Banging on Campbell like that, also tacky A) given that psi powers were absolutely under consideration for quite a long time, we're only about fifty years past that, roughly at the time there was scientific consensus that, yes, we do have plate tectonics, B) the statement that "a belief in psi powers implicitly supports an ideology of racial supremacy" absolutely does not track and is a cheap smear.
6) He name-drops Jack Parsons without pointing out his philosophies.
7) Because right now we have to have some kind of anti-Russian sentiment, he picks Nicholas Berdyaev, and just wildly ignores the parallel thoughts of Pierre Teilhard de Chardin, which gave rise to Frank Tipler's concept of a grand unification at the end of the universe of all thinking beings. It might be summarized in the philosophy "Everything that rises must converge," which made its way into the writings of Flannery O'Connor and a Shriekback tune. But we have to shake our fists at those Ruskies, so we pick this one.
8) What a narrow slice of Rationalism he's introduced us to. "So, these Catholics, they're ritual cannibals," as if that were all we needed to know. And he seems to think that perhaps we ought not to look at the possible implications of the far-ranging impacts of AI, which doesn't need to be particularly intelligent or even malevolent to cause damage, especially if you're a man taken for a box of vegetables.
9) Worse yet, he's looked at the past of sci-fi writers and ignored the past of politics: recall Reagan's SDI proposal, getting smeared as "Star Wars."
No, this is just general rage at people with enough money to try out some wild ideas (and I am no fan at all of the whole Mars business). Wild ideas that size always need a lot of money behind them, and this has gone on through history. Most fall through, sometimes we land on the Moon, sometimes we invent nukes. They all sound science-fictiony when we start, but the core premise that the science fiction writers themselves created various concepts, without which the apparently bumbling golems of industry would not have anything to blow their money on? It's ridiculous.
People have pursued longevity probably since roughly the time they figured out that they, too, were going to die. At first it would have been practical business, like avoiding tigers or not eating the red berries, but after a while it becomes prayers, and on to potions.
Transhumanism? That's little more than wanting to exceed our limits. We probably first did it with tools, and also for cosmetic purposes, but then people wanted those reaches past what had been built-in to their bodies. Again, probably existed before writing, as humans dreamed of being ascended demigods.
Effective altruism, ah, the terrible concept of wanting your charity to go to someone who actually needed it. Existed since the first malingerer, the first false beggar was discovered. But let's mention Sam Bankman-Fried so we can make that concept sound bad.
Gosh, the more I look at this essay, the more the cheap tricks stand out. I feel like I could go on and on.
I'd vote to create a 10-50% tax on the morbidly rich, to be paid back to everyone in the world as UBI. 10% is the tithing rate, something that people understand. 50% represents civic duty, a moral responsibility to acknowledge society's role in maintaining the infrastructure that allows wealth creation. Or we could just tax them 100% for a while and.. nothing bad would happen. Which is hilariously tragic.
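For scale, a quick back-of-envelope in Python, using assumed round figures (total billionaire wealth on the order of $13 trillion, 8 billion people; both are illustrative, not sourced), and treating it as a one-time levy:

    billionaire_wealth_usd = 13e12  # assumed, order-of-magnitude only
    world_population = 8e9

    for rate in (0.10, 0.50, 1.00):
        per_person = billionaire_wealth_usd * rate / world_population
        print(f"{rate:.0%} levy -> ${per_person:,.0f} per person")

    # 10% -> $163, 50% -> $813, 100% -> $1,625 per person

A recurring tax on returns rather than a one-time levy changes the numbers, but the order of magnitude is the point.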
Of course billionaires want to pay 0% and fund effective altruism projects personally. That makes a big assumption though, that the rich know what needs to be fixed better than the poor. Which is nonsense from both religious and secular standpoints. Religious because all have equal dignity as conscious living beings. Secular because money is an arbitrary measure of fitness.
Billionaires are like children with little or no life experience outside of wealth acquisition. They're ultimate expressions of ego, with all soul stripped away, vying to become the emperor on Star Wars to live forever as the devil's right hand man. To put it lightly.
The opposite of a billionaire looks like Alan Watts, Terence McKenna, Howard Zinn, John Lennon, Marianne Williamson, the list is long. Combatting billionaires outwardly like bullies is difficult. But combatting them inwardly by shifting our consciousness towards peace is viable.
Which looks like wokeism, progressism, leftism, socialism, communism, humanism, animism, pantheism, entertaining the idea of reincarnation.. all the bad words that billionaires hate on and spread sentiment against through propaganda. Again the list is long. But if we care about preventing the human race from being subjugated under the yoke of tech tyranny, they are good places to start.
There are meta levels to that. For example, Martin Luther King Jr advocated for labor as much as racial equality. Feminism defends human rights as well as gender equality. All the stuff they don't teach in schools is exactly the understanding needed to defeat the status quo.
I still hope that we find a way to transcend this suffering, even though I've seen no evidence of that happening so far in my lifetime. Sometimes I wonder if this is the revolution that will not be televised.
Like what would it take to really organize and roll out something like UBI through existing infrastructure, with philosophies borrowed from stuff like open source? And what force will the powers that be use to suppress that on all fronts? Once we view that world through that lens, the oppression becomes obvious and unmistakable as we watch all the capital in the world get siphoned into the corporate black holes of billionaires, so that it can't be used to, I dunno, give everyone all the food they need.
I wish I had the answers, but I have to get back to work to afford the crumbs of my existence.
Why does someone who writes great sci-fi suddenly have social capital to weigh in on industry and politics, two things firmly outside of his wheelhouse?
I get it, it's fashionable in Europe to hate billionaire American industrialists, but at what point did our hatred of capitalists (note: I don't hate capitalists) decide to overshadow our, you know, lifelong lust for the stars?
Is there anyone else trying to build actual habitats (for actual people, not just a handful of taikonauts) in orbit on a timescale that would let people like Mr Stross and me see them (and perchance even use them) in our lifetime? The machinery of the state that European types like Stross seem to deify has not even announced plans in this direction, much less assembled teams and factories to move toward it. ESA and NASA and JAXA (nor Roscosmos or CNSA) ain't it, even if you 10x'd their funding. These are war and reconnaissance and research organizations; moving bulk humans off planet is completely orthogonal to any of their goals, stated or actual.
Every time I read a technologist's screed against Musk or Bezos or Zuckerberg (three people whose combined lifetime works do not even scratch a fraction of the economic value incinerated by the US military in 40 weeks) all I can see is sour grapes and ad hominem. These people did not create nor perpetuate the attributes of the dystopia you claim to reside in (that was the CIA). (It's also not actually a dystopia, or anything resembling one; ask any of the two billion people lifted out of dirt poverty (largely due to technology!) in the last three decades.)
The old planet will go to hell in its own way from its own inhabitants. I'd rather live in space where it's safer. (Also, how cool would it be to escape before Earth is finally fully conquered? This would mean that humans as a species successfully avoid a total hierarchy.)
We can have both: space exploration and a more livable Earth. We don't have to choose. If you're serious about wanting to leave the planet, you should want the planet to be able to sustain life long enough to allow for that to happen. It may be a couple of hundred years yet, or maybe less.
I cannot see a path that allows for sustainable colonies in Musk's lifetime. We would have had to start thirty-forty years ago.
> Why does someone who writes great sci-fi suddenly have social capital to weigh in on industry and politics, two things firmly outside of his wheelhouse?
This does not show an understanding of science fiction.