To paraphrase: yes, corporations function as agents, but their maximum performance is limited by the capabilities of their employees. A corporate AI may have a much broader set of skills than any single person, and may be able to tackle many concurrent tasks, but as an "intelligent agent" its decision making capabilities probably don't scale exponentially (or even linearly) with its headcount. In the sense that a corporation's maximum intelligence is likely to be in the same ballpark as the smartest humans, it can't be seen as a true superintelligence.
That's fair. I interpreted the comparison as being between "AI implemented as a property of human organizations" and "AI implemented as a powerful search algorithm". While corporations can certainly be dangerous, they're still made of people and thus are unlikely to want to do things like "convert the entire mass of the solar system into dollar bills" (and even if they did, they'd have a hard time doing it). A sufficiently powerful search algorithm would find all sorts of bizarre ways of satisfying its goals. The point is that the scope of the risk is quite different when talking about corporations (order of magnitude: screwing up the environment in pursuit of easy profit) vs "real" AI which, if granted agency, would have potentially unbounded risk.
Another way to look at it is with regards to capacity for self-modification. If a corporation can't re-structure itself into being much smarter than the smartest human, its intelligence is fundamentally limited, and therefore so is the risk. Does software have this restriction? We don't really know yet, but it's hard to point to exactly why it would.
How is "converting the entire mass of the solar system into dollar bills" conceptually different from "converting the habitability of Earth into dollar bills" or "converting the health problems of human beings into a maximal amount of dollar bills", which is precisely what many corporations _actually do_?
The "corporation as AI" metaphor isn't about some abstract future possibility, it's an explanatory mechanism for how the world is so thoroughly messed up _right now_.
Yeah, I agree. My original post was bringing up what I thought was an interesting comparison between hypothetical software AI and the corporations-as-AI metaphor presented in the talk.
Corporations _are_ acting as misaligned optimizers. Solving that problem is hugely important. However, the "AI" comparison breaks down somewhat when you start thinking about how we might actually fix the problem. With corporations, we (i.e., states) have tools that we can use to regulate bad actors. Software AI, however hypothetical at the moment, seems likely to be a different game altogether.
I don't know that we won't have tools, albeit different ones, to regulate a bad AI. AIs need more than just intelligence and agency. They also need effective ways to interact with and affect their environment. That boundary is where we are likely to develop tools to limit and regulate them.
If they are truly general AIs then it's likely that their reaction to that limitation and regulation will be not dissimilar to a person's, but I see no reason to assume that limiting them will be impossible.
Sure! I don't see any reason why it would be impossible either, but the (hypothetical) problems are very interesting. Starting with the most basic problem of all: how do we even specify what we want the AI to do? The whole field of AI safety is trying to figure out a way to write rules that an agent wouldn't instantly try to circumvent, and to find some way to provide basic guarantees about the behavior of a system that is incentivized to do bad things (just like corporations are incentivized to find loopholes in the law, hide their misdeeds, and maximize profits at the expense of the common good).
That assumes an ideal corporation in the physics sense - real-world ones have "corruption" in the form of employees pursuing their own agendas. Take sexual harassment, passing over talent out of bigotry, and office politics. They actively harm profitability and yet they exist at all levels.
The incentives are fundamentally what shape the systems including the corporations. Blaming corporations alone is a simplification - the same incentives converge to the same outcomes akin to how power vacuums are filled by warlords.
> They actively harm profitability and yet they exist at all levels.
> The incentives are fundamentally what shape the systems including the corporations.
These are key observations, and sadly I suspect there's a Prisoner's Dilemma style thing going on which makes corruption, sexual harassment, bigotry, and office politics somehow "rational" and individually maximising behaviours (for corporations, as well as the individuals they're composed of) whenever any of the competing corporations are known or suspected to be behaving corruptly.
Combine that with the combative nature of thinking/reporting about corporate results - where, for example, FAANG stock price performances are compared to each other, assigning winners and losers, without any relevant incentives or accolades for the entire tech industry having grown the value of the whole sector. A corporation with a 15% YoY increase is deemed a "loser" if one of its competitors manages 20%.
And we've spent well over half a decade demonising anyone who criticises Capitalism - thus deeply entrenching incentives that are poor for society as a whole, but which have "less poor" outcomes for the corporations prepared to be most ethically barren.
Being generous and ethical is also a form of human corruption harming the corporate entity. From the corporation's hypothetical point of view, any time a person doesn't do exactly what the corporation needs, isn't the perfect mindless drone, (or isn't creative in just the right inoffensive way), they're a cancer cell in the corpus. But I think this goes for red tribe, as well as blue tribe thinking, as you posit.
After all, you'd be hard-pressed to argue that corporations, especially silicon valley corps, give more lip service to red tribesmen than blue. Maybe that's just because the blue tribe is the more powerful, and the corps are rightfully saying the magic words that allow them to keep their profits.
Or perhaps your particular perspective is blinkered to too short a timeframe?
There's a very good argument to be made, I think, that "capitalism" is the biggest and most dangerous Ponzi scheme ever invented. Most people will happily participate thinking "this is fine .gif" while untapped suckers/resources keep delivering enough "return" to early "investors", but when the house of cards collapses there will be no underlying foundation for the vast majority, and only the people at the very top of the pyramid scheme will have actually benefited at all.
Ever stopped to wonder why Musk and Bezos are so interested in going to Mars???
>its intelligence is fundamentally limited, and therefore so is the risk
Collective intelligences such as nation states are capable of producing tools, such as nuclear arms or biological weapons, whose impact on society may be civilisation-ending.
It doesn't follow at all that a limit on intelligence implies a limit on risk. Collective intelligence is already high enough to produce threats that could wipe us off the planet without a problem. This is because most risks, once let loose on the world by someone intelligent enough, don't need to be intelligent in themselves to destroy humans. Viruses aren't intelligent at all.
Or, for a more mundane example, we're already in the process of slow-cooking the planet for some stock-market gains.
> If a corporation can't re-structure itself into being much smarter than the smartest human, its intelligence is fundamentally limited, and therefore so is the risk.
The assumption here is that risk correlates to intelligence. That doesn't seem to be borne out in history. Risk (the likelihood of bad outcomes) can be emergent, and arise from well-meaning, reasonably (not super) intelligent people operating within a simple framework.
> While corporations can certainly be dangerous, they're still made of people and thus are unlikely to want to do things like "convert the entire mass of the solar system into dollar bills"
I'm assuming this is satire, since that's apparently exactly the goal of corporations.
Just print a bill of a higher denomination and make your shareholders happy without destroying the solar system!
It's not the goal, it's the consequences of the rules of the game used to reach the goal.
If our economy were based on writing poetry, and corporations competed to write the most soul-enticing, profound poetry or art in general, the side effects wouldn't entail the depletion of resources and the ruining of the ecosystem. Surely we'd find other ways to make our fellow humans' lives miserable; perhaps all this art would make many suffer untold pains as the meaningfulness of our existence is unraveled, or whatever.
The rules of the game are the framework. The incentives are the driving force that pushes things down the gradient. The consequences are hard to predict. That holds for a wide range of rules and incentives.
In fact sub-par artificial intelligence is probably at least as dangerous as sub-par human intelligence, when we let it loose with significant power/authority in the real world.
(for supporting evidence to this claim, see "high frequency trading", "self driving cars", and "the leader of the free world" <smirk>)
It doesn’t need to be super to do that — even bacteria are a type of von Neumann machine that can turn us into more of themselves.
An ASI could convince you that turning yourself into a paperclip is a fun and exciting new opportunity to liberate you from $PERSONAL_PAIN and finally allow you freedom to engage in $PERSONAL_FANTASY.
Bacteria are self-limiting. Their offspring compete and they don't reshape their environment to be more friendly to their existence. They don't plan for space colonization either. They're a bit of a toy model for a maximizer, not the real threat of an intelligent, self-modifying maximizer.
The only argument that seems to be brought forward every time is "if a superintelligent being wants to do stuff, we can't do anything about it because the axiom is that superintelligence means omnipotence". This is not so different from any other religious argument since it's impossible to falsify and therefore meaningless in any scientific sense.
Superintelligence doesn't mean omnipotence. The thing is bound to physics. It's just smarter than you, to some greater or lesser (usually imagined as greater) degree.
Just NO. Artificial Stupidity (AS) will never be able to perform at the level of Actually Seen Stupidity (ASS). No matter how much you try to improve AS, God will always create a better ASS. You will always be limited by priors to create an AS, but ASS will evolve naturally, plus if God ever decides to use science and apply a Directed Acyclic Tangent (DAT) well, DAT ASS would be unbeatable.
Intelligence (artificial or human) as a plague is an idea that's already spun out interesting storylines.
On a geological timescale, it's very likely that's what the historians of the hyper-intelligent shade of the colour blue will write about us:
"There was once a virulent disease emerging on a single watery planet 1/3rd of the way along a spiral arm in some uninteresting galaxy. Fortunately the Universe's immune system never even needed to take action. It was greedy and short lived and starved itself of resources and died out before it had even metastasised much beyond it's own gravity well, it barely made it to it's next closest planet, and sent nothing out of that solar system's gravity well but a couple of primitive machines and all that daytime TV that was fashionable for a brief period a few Type-O main sequences ago..."
> In the sense that a corporation's maximum intelligence is likely to be in the same ballpark as the smartest humans, it can't be seen as a true superintelligence.
In what sense? Certainly the ability of a large group of people to research and create new technology exceeds the ability of a single individual. The R&D capabilities of a large corporation are going to vastly exceed those of a lone hermit in the woods.
That also neglects that a corporation can spend millions of person-hours a year on problems.
No individual line of code produced by a FAAMG is superhuman, but producing codebases of billions of lines of code is definitely outside the ability of any lone human. If you find yourself in court facing a corporation, you'll learn they can spend multiple human lifetimes learning relevant precedents, processing discovered evidence, and constructing their arguments.
It may not be runaway ever-improving superhuman AI, but pretending it is only as intelligent as the humans that make it up is missing the forest for the trees.
He breaks down efficiencies of large corporations (or really any organization) into two types of gains:
(1) gains from parallelism, and
(2) gains from synergy (e.g. people having better ideas by working in groups).
For (1) his argument is that while you get a much larger amount of work done, the maximum quality is just the max over the parallel units. E.g. if your R&D department consists of N scientists working independently, you'll get N times the work done, but the quality of each unit will be at best the quality of your best scientists.
For (2) his argument is essentially that the quality of ideas generated by human organizations is gated by the ability of people within the organization to recognize that an idea is good. He then does a simple simulation where he argues that even if you have everyone in your company brainstorming great ideas, after a certain number of people, say 1000, you hit diminishing returns on how good an idea you can generate and recognize as good. His claim, then, is that an AI, in contrast, is not constrained by this limitation.
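To make the shape of that argument concrete, here's a toy version of the kind of simulation he seems to describe - my own sketch, with made-up distributions and numbers, not his actual model. Assume each of N people proposes one idea whose quality is an independent standard-normal draw, and the organization keeps only the best proposal it recognizes:

```python
import random

def best_recognized_idea(n_people, trials=2000):
    """Toy model: each person proposes one idea whose quality is an
    independent standard-normal draw; the organization keeps only the
    best proposal. Returns the average best quality over many trials."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, 1.0) for _ in range(n_people))
    return total / trials

for n in (1, 10, 100, 1000, 10000):
    print(f"{n:>6} people -> best idea quality ~ {best_recognized_idea(n):.2f}")

# The expected maximum of n normal draws grows only like sqrt(2 * ln(n)),
# so going from 1,000 to 10,000 people adds far less than going from 1 to 10:
# throughput scales with headcount, but peak idea quality quickly plateaus.
```

Under that (admittedly crude) model, total output scales with headcount while the best recognized idea plateaus, which is the contrast he's drawing with an AI that isn't gated the same way.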
He doesn't seem to account for "standing on the shoulders of giants" effects where institutions build up knowledge over decades. That seems to me to imply a model similar to compound interest, at least over some time period before the corporation becomes stagnant and dies off.
Well, there is also a "quantity vs quality" aspect of tasks, akin to the difference between a group of house painters and a skilled artist. The single skilled artist would lack the throughput.
It is possible, to some degree, to "team stack" to boost quality beyond that of any individual member, but it is inefficient and tends to give worse results than a smaller, more skilled group.
Of course, in the real world many things effectively require a mix of both - too much labor for one person or a small group of experts, but too complex for an amateur horde to handle.
So I think the question is: when a group of people work together on something, can they generate ideas which are better than the best idea of any member of the group? If so, how much better? How does the effect scale as the size of the group increases?
An R&D lab certainly can do more work than any single person could do in their lifetime, but are the ideas fundamentally better? Can you point to an idea and say: "no person could possibly have thought of that"?
Q1: Yes and no. From what I've seen (anecdata), a group of people working together tends to generate better ideas than one person could. The "no" comes from the fact that oftentimes one person came up with an idea that they wouldn't have found without the other people.
Q2: Much better. Normally in two ways: first, defending an idea makes you improve it so that it gets accepted; second, if the idea is collaborated on, the multiple rounds of feedback while forming it improve it.
Q3: My first knee-jerk reaction to this question was that you need to limit the number of people involved because communication is key: say 6-9 people. But on further thought, if your communication channels are set up well (forums, Slack, IRC, etc.), then you can tap the wisdom of the crowds, so maybe scale would work.
Q4: If the R&D lab is set up right, yes, you could get fundamentally better ideas, for various reasons: facilities, collaboration, standing on the shoulders of giants (by which I mean building on what others have already built), etc.
I'd say yes. Do you think the ideas that came from the Manhattan Project didn't exceed that of the best individual involved?
Or for a more extreme example - consider the totality of human civilization as an example of groups of humans working together. People like Einstein and Hawking wouldn't have been able to have the ideas they did if they were dropped in 20,000 BC. The knowledge we have now isn't simply the work of individual animals with extraordinary thoughts, it's the result of large groups of these animals working together.
Now certainly the benefits in corporate R&D labs don't come anywhere close, but I find it hard to believe that they simply drop to zero.
In the video linked above, Robert Miles models organizational idea-generation as something like "everyone thinks of their best idea, and then we just pick the best from that set". That's definitely a simplification. I do think there's some notion of "synergy", where people working together can hammer an idea into shape more effectively than either could alone.
However, I suspect that there's also a counter-force, a "diminishing returns" effect: as more people get involved in an organization, coordination becomes more difficult, communication becomes more expensive. If this converges to some limit, then that's the smartest a human organization can be.
Maybe better methods of organizing knowledge could help raise that limit, or more effective modes of communication, but my suspicion is that the upper bound for "organizational decision making capability" is within one or two orders of magnitude of a single person, not vastly more. No idea how we'd actually quantify that, of course!
R&D labs aren't necessarily restricted to a single research group with a unique, well-defined goal. Depending on their organization, they can easily integrate more or less isolated units of self- or loosely-directed research driven by single individuals or small teams. Properly managed, I can see this kind of heterogeneity of agents performing at a level no single human ever could. There's an analogy with boosting to be made somewhere, but I can't be bothered to flesh it out right now.
Personally I saw it as a semi-sarcastic equivalence noting the lack of differences and a note about the Chinese Room. It is like describing human capabilities as equivalent to a black box algorithm - we might not know how our brain parses 2 + 2 = 4 but we know that it calculates and must follow some sort of procedure.
By that logic, wouldn't any organization, team, or group of people doing things together be an AI?
Corporations aren't the only kind of grouping of people together for one or more purposes as a unit. This definition also seems to ignore the 'artificial' part of artificial intelligence.
If we're going to stretch the definition of AI that thin, wouldn't a school of fish or a flock of birds be considered 'AI'? Or at least an ant colony or bee hive?
What about forests? They form vast networks of interdependent nodes through mycorrhizae that connect the roots of all the trees and shrubs in the forest. It's used to pass nutrients and information in the form of electrical and chemical signals throughout the network.
If I were to paraphrase the comparison, it might be something like:
"An AI is some constructed entity which, starting with a goal, uses its intelligence to make changes to the world in order to bring that goal to fruition. One of the things about AI that is worrying is that its system-of-values might have no relation whatsoever to those possessed by human beings, and the best way for it to accomplish its goals might be to do things that most people would consider quite nasty. AI systems that are trying to maximize something are in constant tension with the laws that we put in place to try to keep them from steamrolling over everything in pursuit of their goals. Corporations fit the bill, under the assumption that the goal is 'maximize shareholder value'."
While a nest of ants or a swarm of bees certainly exhibit interesting emergent behavior, perhaps even collective intelligence, they (1) aren't constructed and (2) don't have clear goals.
You could argue that smaller groups of people fit the bill too, I guess. Maybe the difference is that corporations tend to have distinct goals which outlive their members, and that they have special recognition under the law as independent entities. Not sure.
Bees and ants have a clear goal: to survive and thrive. Which is very similar to a company's actual goals. How they do it is ever-changing. Corporations don't survive by making the same product for hundreds of years, so whatever subgoal they have keeps changing as well.
This is why standardized testing works against improving aggregate intelligence. We probably have not seen anything close to what human minds are capable of if nurtured to express their differences.
If we break AI apart and look at it:
Artificial - human-made (as opposed to something emerging from nature).
Intelligence - the ability to acquire and apply knowledge and skills (oriented agency modulated by prior experience or observations).
Slightly tongue in cheek: in most countries it seems corporations gained the right to vote before many minorities (or even majorities) [citation needed?], if we assert that voting is the measure by which one affects political power.
Corporations have multiple decision-making nodes, each of which can have the intelligence level of a smart human, so more smart-human-level decisions can be undertaken by a corporation than by a smart human alone. At the same time, corporations can process much more data, store more data, and do more interesting things with data than the smartest human alone could.
Comparing corporations to sci-fi super AIs is missing the point, I think. I don't think the author literally believes corporations are AGIs; he starts with the point that corporations are "old, slow AIs". The article starts from a single argument - we have to be careful with AIs that are smarter than us, because if we don't instruct them carefully, we will end up in a paperclip-maximizer situation. From there I take away two main points:
1. We do a shit job today of controlling "old, slow" paperclip maximizers, so there's no confidence we will ever be any better controlling an AI, despite any good intentions.
2. Our wild exotic ideas about paperclip maximizers probably won't come to fruition and instead we will end up in a boring dystopia where AIs will maximize time spent playing Farmville on Facebook.
There is a difference between AI and a meme I think. AI is understood to run on computers, memes run on human brains. Corporations, nations, 4chan are memes, not AI.
> There is a difference between AI and a meme I think.
Yes, I agree.
> AI is understood to run on computers, memes run on human brains.
This, I disagree with. I think the difference is something more like, "virus vs bacteria/multicellular organism", however both run on human brains.
Memes are more virus-like. Small data-payloads, only goal is to replicate and spread as fast as possible, lots of mutation.
This is not what corporations/nations are. They are far larger and more slow-moving, much more complex, with various immune systems, defense mechanisms, and goals.
Both only exist in/run on the "human brain" computational substrate, but the "corporation" is a distributed AI while memes are just viruses spreading from machine to machine. If all of humanity died tomorrow, Amazon and memes would both cease to exist.
It's a poor term for it. I prefer to look at our organisations as separate life forms that, for now, are in symbiosis with us, but don't have to be indefinitely. I consider them separate since people follow their job descriptions, and the choices made or emergent behavior doesn't have to be acceptable to anyone in or outside the org.
I find idle games odd. My rational mind knows they are traps, but I still want to see what is next. They tap into some kind of need for exploration, progression, and learning. This one at least explores a few ideas and has a set endpoint so I enjoyed going through it.
I greatly enjoyed this game because of its relationship to the thought experiment. Last time I tried playing it, it had a terrible memory leak on Firefox though, eventually consuming over 15 GB before I decided I had to kill it.
Reductio ad deus is a recurring pattern, or at least it was, when we look hopefully into a radical future.
Radical political movements develop messianic themes, whether or not they reject pre-existing ideas of god. Modern futurists are distinctly messianic. Remember also that (radical) 19th century politics were, in a sense, futurist movements. "Singularity" is an on-the-nose example.
^Radical meaning "want or expect major societal change."
I think we're better at predicting the future than we give ourselves credit for. We're just bad at distinguishing the profound from the banal in those futures. 100 years ago, economists and intellectuals (famously Keynes) used their projections of productivity, technology & such to predict a leisure society. 15-hour workweeks, etc.
They were right about almost everything, except the conclusions. Productivity, global trade, technology, even peace... eventually. Even with the benefit of hindsight, very few modern economists reach profound conclusions about the mistakes of their forebears.
The way we usually get the future wrong is "you were right, and yet..."
The cultural element is the wild card. In 1990 you could have predicted 2020's radically changed landscape of media, social media, communication technologies & such. You probably couldn't have predicted the memetic influence on the economy, education, social life, etc. At least, people usually don't predict these well.
I found a couple errors early on that dissuaded me from continuing to read.
One was the idea that the author could rule out the singularity because he wasn't aware of progress toward self-motivated AI that would be something like our own intelligence. This seems like a limited view to me because we wouldn't need self-motivation at all in AI to hit the singularity. Humans can supply the self motivation.
Suppose I'm using GPT-4 or GPT-44 trained on the corpus from sci-hub and it recommends experiments to me, or explains physics to me, etc. I could be the self-motivating part and the AI could be the intelligence part, and it seems we'd still hit the technological singularity.
Another problem I had was when the author characterized Elon Musk's "obsession" with the paperclip maximizer and described Tesla as a battery maximizer. It seems like the author kind of misses the point of the paperclip thought experiment, which is, broadly, that an AI's interests might not be aligned with our own and that misalignment may cause serious problems.
Tesla is clearly not a battery maximizer and it is clearly not a different class of intelligence from the humans and corporations existing today (though it may be towards the top of that class). Neither of those things would necessarily be true of an AI.
Given the position of my scrollbar it seems I was only starting to read this piece, but already finding what I think are significant problems as the author sets up the argument, I'm hesitant to spend more time reading.
> It seems like the author kind of misses the point of the paperclip thought experiment, which is, broadly, that an AI's interests might not be aligned with our own and that misalignment may cause serious problems.
Perhaps you should have finished reading. What you suggest the author missed is essentially the core thesis of the piece.
I found an error in the first sentence of your post that dissuaded me from giving the rest of it much credence. Saying you didn't read it, but decided to comment anyways.
Even taking your comment at face value, that's not what the first line of that comment says. It says "continuing to read", indicating a decent chunk of it was read.
The remainder of the comment indicates at least 1/3.
No, I don't think so. I think my response went right over yours.
The comment is trying to mirror my criticism - "I read the first part, found an error, and stopped". My response is trying to highlight that, in fact, their response does not mirror mine because I had actual errors that I pointed out that motivated me to stop reading, whereas that comment did not (apparently).
In other words, if I had actually made substantive errors in my first sentence or so, it might make sense to stop reading. I'd have already demonstrated that my thinking wasn't very clear. If that was the case though, then it would be an invalid criticism of my reasoning (read a bit, saw an error, stopped) because that comment author would be following the same paradigm. On the other hand, if I didn't actually make any substantive errors in the first sentence or so my post, then the criticism is still invalid, because, while I actually pointed out substantive errors in the OP, this comment doesn't point out substantive errors in my comment.
It was a joke mostly - but specifically, I find it funny that a portion was read, and instead of just moving on, felt the need to poke at the article without at least finishing it. That is the "error" - who knows, maybe your criticisms were addressed later on? We'll never know :P
(like I said, it was mostly in jest, so don't take it too seriously, please)
> Tesla [...] is clearly not a different class of intelligence from [...] corporations
I believe he just used Tesla as an example of corporation not as something new or special.
> the point of the paperclip thought experiment is, broadly, that an AI's interests might not be aligned with our own and that misalignment may cause serious problems.
But (some) corporations clearly do share this aspect to some extent. They maximize profit while environmental concerns are not prioritized.
And he pointed out later that corporations are slightly more complex than just maximizing 'one' thing; they maximize a range of things, most importantly profit.
Most, if not all, corporations are misaligned to humanity's well-being. That is, they want money, and that's not exactly the same thing as what would be best for humanity.
This is okay though because corporations are the same class and kind of intelligence as the rest of humanity and so they can be (somewhat) predicted and constrained by our laws and norms.
Because corporations are the same class of intelligence, they can't outsmart everyone. We can catch them when they do wrong and punish them and use laws and courts to control them.
Because corporations are the same kind of intelligence they (usually) aren't going to do things we might consider insane or sociopathic. For example, if corporation X realized they might make a profit by taking out life insurance on workers and then getting them to do very risky things, X Corp probably wouldn't do it and if they did people would leak it and laws would be made to stop it.
The paperclip thought experiment is an illustration of a different kind of intelligence that wouldn't be constrained by human morality or norms and would surpass human intelligence and not be constrained by laws or force either.
I think one of the big problems with trying to predict and constrain corporations is that they all basically have OCD. They are obsessed with whatever it is they do and they task all their agents with figuring out how to optimize and achieve that one thing (or multiple related things if they're organized into divisions).
While the corporation might not possess superintelligence, trying to thwart it when its actions are harmful to society is like having a crazy neighbor. You just want to relax when you're home from work; you can't dedicate all your free time and attention to constantly monitoring and pushing back against someone who is on a crusade to achieve a petty, limited goal and seemingly has no other interests to occupy their time.
I think the extent to which corporations are or could be controlled is debatable. What I don't think is debatable, though, is that an AI with superhuman intelligence wouldn't necessarily be as controllable as corporations are.
Another point: while not all corporations are profit maximizers in a paperclip-maximizer, obsessive way, those corporations that grow largest and consume others tend to be those that obsess most about profit and growth, simply because those are the ones that tend to become big.
@ALittleLight: “.. One was the idea that the author could rule out the singularity ..”
Your consciousness uploaded and then downloaded into a synthetic human like body isn't ever going to be you. As the man said “transhumanism is a warmed-over Christian heresy”.
I find the notion of a coherent self / consciousness to be hogwash anyway. The "self-aware", "conscious" brain is a holistic entity, with what you ate yesterday causing moods that alter your train of thought etc., and with multiple competing interests vying to control the larger organism to their own ends.
The Ship of Theseus is the warmed-over Christian heresy. A brain is an organism with a symbiotic tight coupling to a meat bag, and the brain itself a similar cooperation by accident.
An organism can develop that builds new tight "synthetic" couplings; it isn't sci-fi. It's partially here today. It's just that we are so normalized to the interfaces of smartphones, keyboards, monitors, steering wheels, and so on that we didn't notice its arrival.
More exotic, more tightly coupled interfaces also exist today (there are direct-to-brain interfaces that provide a new sensory feedback loop that your brain can adapt to); they're just not currently competitive with thousands of years of neural optimization of hands/eyes/nose/touch.
Why would you want to do this? Who knows. Why do we want to do anything? Maybe a person wants to be part machine as a lizard-brain driven reaction to a low-oxygen office environment that sees no viable way out.
If I replace a single one of your neurons with a mechanical replacement, are you still you? What if I replace half? All?
If you walk across a room, are you still you? What if I break you down, and reassemble you at the other side of the room, using the same atoms? What if I only use half the original atoms? What if I freeze you, move you across the room, and then dethaw you?
As far as I (crudely) understand it, we don't know what exactly the cause is, other than it's tied extremely tightly to the brain and seems to be an accident of evolution/an illusion.
My guess is that as long as the neurons keep firing continuously and the illusion is unbroken, the "you" that is you right now will remain you. So you may well be able to go full ship-of-theseus and piece-by-piece completely replace the physical layer without breaking the consciousness, but if you stop it completely with an upload/download* , continuity is broken and the new "you" will be an exact duplicate while the "you" right now will cease to be. Because it's an exact duplicate there would be no way to confirm this, though - the new "you" would think it had continuity due to how complete their memories are.
* I'm thinking of switching to a robotic brain all at once here, instead of piece-by-piece. I'm not going to touch a transporter as suggested in your comment, as that probably depends on technical details of the transporter for what exactly is going on.
What about piece-by-piece? What if we 'upload' you piece-by-piece, by replacing neurons in your head with remote connections to virtual neurons on a computer one-by-one?
I think that conventional notions of identity are probably incoherent, and mean you die from second to second anyway.
If we're firmly in the realm of sci-fi (which we are) I find the idea of a neuron-by-neuron replacement of the human brain by nano-sized computational units that function exactly the same way as those neurons, to be a more interesting proposition.
Can it be done in a way that doesn't imply the death of the subject?
> .. Can it be done in a way that doesn't imply the death of the subject?
The key word is replacement. Can this replacement vote or inherit property? What happens if there is a clerical error and they make two replacements? Which one is the real you?
Oh absolutely, I think that is the key thing. Personally I'd never get into a transporter such as they are portrayed in Star Trek - what comes out the other end may well be a perfect reconstruction, and think it is me, and behave to the outside world as if it is me. But would there be a continuity of conscious experience? Or am I dead and a facsimile is now in my place?
Incidentally, this is why I can't take Roko's Basilisk at all seriously - a future simulation of me is not me. Well, it's one reason anyway.
A point I've made on here before is the big near-term risk from AI is not general-purpose artificial intelligence. It's machine learning systems that do a better job of making corporate decisions than humans. Corporations are shareholder value maximizers. There's a powerful school of thought, the Chicago School, which claims that's all they should be, and have no other responsibilities.
Machine learning systems are really good at maximizing some defined criterion. It's quite possible that they might get good at making corporate decisions. They already do that for some investment funds.
Once machine learning systems are better at corporate decision making than humans, market forces will demand they be put in charge. The companies with inferior human-based technology will start to lose out. That's implicit in the forces behind capitalism.
Be afraid, CEOs. Be very afraid. The machines are coming to take your job.
The logic goes that organizations, in general, should focus on what they are good at, while the government should pass regulation to incentivize socially good behaviors.
What we need to work on is having a stronger government that's less influenced (monetarily) by corporations, to set boundaries, set incentives, and police corporations to good behavior.
To be honest, we often highlight all the places this goes wrong, but all the industries that don't get a lot of press are good examples of this working well.
Be afraid, humanity. All that stands between humanity and the short-term exploitation in unregulated niches where there are long-term negative externalities is human oversight with its pesky feelings.
> Nobody in 2007 was expecting a Nazi revival in 2017, right?
Is that actually true? I seem to remember people predicting that the USA had been edging towards fascism for a while now, especially in the aftermath of 9/11.
They expected that American fascism would be based upon Evangelical Christianity and neoconservative foreign interventionism, not racially-focused and xenophobic nationalism and paleoconservative isolationism.
It seems like the institutional power of Evangelicals has gone down significantly since the Bush era, and today's right wing is led by more secular characters, even if they appeal to traditional Christianity.
Nobody among the intellectual class anticipated Trump's election in 2016. Much like physicists hypothesized dark matter to account for the apparent greater mass necessary to stabilize galaxies and such, the political intelligentsia hypothesized an enormous, hidden body of "dark Nazis" to account for their chosen candidate not winning the election. It is unfathomable to them that enough people in the center, center-right, and right could be discontent enough with being told whom to vote for and why someone running on a platform of tighter immigration controls and more jobs for Americans is evil incarnate, that they were willing to "hold their nose and vote for Trump", to swing the election in his favor.
Political intellectuals tend to lean left, and whenever a rightist scores a major political victory they start predicting stormtroopers goosestepping through American streets Real Soon Now, going back to at least Reagan. So the Nazi Revival is generally accepted as real, and if you contest the idea's truth you may be considered one of them.
WBUR, Boston's NPR news station, published in their blog "Cognoscenti" an article titled "Why Donald Trump will Win in November" in May of 2016 (https://www.wbur.org/cognoscenti/2016/05/06/election2016-tru...). I can't think of anything more representative of what people think of as the left leaning intellectual class than a Boston NPR blog with a Latin title.
This idea that him winning in 2016 was inconceivable to media/intellectual circles is a bit of revisionist history.
Eh. Most of the mainstream media ran headlines predicting 99% chance of a Hillary win and a few of them sniped at Nate Silver for giving Trump a 30% chance (stupid tech bros).
A single contrarian take doesn't undermine that trend.
Note that no one denies Hillary was considered the favorite; I also mention the HuffPost model, which was the most prominent one to reflect near certainty. Most news outlets, even conservative ones, reported on this notable predictive modeling effort.
What there was, was a diversity of opinion and vigorous debate (sometimes devolving into sniping, sure), the thing that is supposed to be there. There wasn't a 50/50 split, but I don't remember a single publication I read at the time (all the classic left leaning intellectual rags) that didn't run pieces predicting a Trump victory.
The 99% number so often paraded as a sign of failure wasn't a 99% chance of being president. It was a 99% chance of winning the popular vote.
That prediction was correct, and by a decently large margin. But popular-vote losers often become president because of the electoral college (it's happened 5 times already), and they will continue to benefit from it for years to come.
It was a conditional probability of Trump winning a bunch of states he was behind in, naively projected as if they were independent. It turned out they all swung the same way, and those probabilities were not independent.
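To make that independence mistake concrete with made-up numbers (a hypothetical illustration, not the actual 2016 model's figures): suppose a candidate trails in three must-win states, with a 30% chance in each.

```python
# Hypothetical numbers for illustration only, not the real 2016 figures.
p_single_state = 0.30

# Treating the three states as independent multiplies the probabilities:
p_independent = p_single_state ** 3   # 0.027 -> the sweep looks ~97% safe to rule out

# If the polling error is shared (perfectly correlated), the states swing together,
# so the chance of sweeping all three is just the single-state chance:
p_correlated = p_single_state         # 0.30

print(f"independent: {p_independent:.1%}, fully correlated: {p_correlated:.1%}")
```

The real situation sits somewhere between the two extremes, but which end you assume makes the difference between "near-certain" and "roughly a coin flip weighted against him".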
> Nobody among the intellectual class anticipated Trump's election in 2016.
A number of people in the intellectual world did, or at least wrote articles purporting to explain why it was more than remotely possible.
The relatively new set of people in the major media trying to play the poll-based mathematical prediction game that Nate Silver started with 538, who weren't Nate Silver, undersold the chances of it happening because they made the naive mistake of assuming that deviations from polling averages between states were independent of each other rather than, as historical analysis of the type Silver does would show, tightly correlated. But "media outlets want what Silver does but can't afford to pay to have Silver, or anyone with anything like his level of skill at what he does" doesn't represent the "intellectual class".
I think it's also absurd to treat the less likely of the two plausible possible winners of the US Presidential election winning as some sort of black swan event as opposed to "well within the realm of possibility".
I'm not sure even Jill Stein or Gary Johnson would really be a black swan event, given that I can list it as a possibility, but it's definitely a lot more gray than "either the Democrat or Republican wins".
He won the election by winning the electoral college. You know, I know, everybody knows that.
There is no need to explain where an "enormous hidden body" of votes came from because it didn't. There is no need to explain why the polls were wrong, because they were as accurate as they normally are. There is no need to explain why elites were wrong about what the people thought because they weren't. And there's no reason to argue about whether the election was rigged, because Trump said it was.
That some small neo-Nazi groups exist is normal and, I would say, healthy. They are not Nazis; true Nazis disappeared after WW2. They are just groups of people who are dissatisfied with current politics and believe that some ideas from Nazi Germany would solve problems. Especially since it is so demonized by the current system that it must seem good: the enemy of your enemy is your friend.
They are the result of a free society; we all, at some point in life, wanted to change society. You can be a hippy or a Nazi, but it usually doesn't last. Most people realize that neither LSD nor the Third Reich is going to solve the world's problems and then become more reasonable.
> And once you start probing the nether regions of transhumanist thought and run into concepts like Roko's Basilisk—by the way, any of you who didn't know about the Basilisk before are now doomed to an eternity in AI hell—you realize they've mangled it to match some of the nastiest ideas in Presbyterian Protestantism.
I wish every discussion of transhumanism didn't have to involve Roko's Basilisk. It's not something anyone takes seriously (and very few ever did), but it has enough quirky weirdness that everyone seems to want to talk about it.
Here's a quote from "Lesswrongwiki":
> Roko's argument was broadly rejected on Less Wrong, with commenters objecting that an agent like the one Roko was describing would have no real reason to follow through on its threat
> [...]
> Less Wrong's founder, Eliezer Yudkowsky, banned discussion of Roko's basilisk on the blog for several years as part of a general site policy against spreading potential information hazards. This had the opposite of its intended effect: a number of outside websites began sharing information about Roko's basilisk, as the ban attracted attention to this taboo topic. Websites like RationalWiki spread the assumption that Roko's basilisk had been banned because Less Wrong users accepted the argument; thus many criticisms of Less Wrong cite Roko's basilisk as evidence that the site's users have unconventional and wrong-headed beliefs.
Your comment (and your links) miss the point of mocking references to "Roko's Basilisk". Whether it's actually a widely-held belief of the Less Wrong/greater rationalist community is irrelevant, it's emblematic of the eye-rolling nonsense that they engage in.
It's the exact secular equivalent of "How many angels can dance on the head of a pin?"[1], which was also not an actual accepted topic of medieval religious scholarly debate but an illustration of the usual absurdity of such debate (e.g., there was an actual centuries-long scholarly discussion of whether angels were sexless or had sexes).
For me this illustrates nicely why most discussions about AI and singularity are like counting how many angels can dance on the head of a pin.
It's all like talking about nuclear weapons in the 17th century. Sure you can, but even if you reach some conclusions, they will most likely be invalidated by the first technical detail of the implementation when the thing actually gets created.
The comparison between corporations optimising for profit to algorithms optimising for engagement, and the associated mispriced externalities in both cases being similar, is interesting. But otherwise this reads as a cookie-cutter rant about evil corporations and *ist AI without much deeper insight.
Corporations are nothing but a group of actors working together to achieve some outcome, part of which may be profit, and it's absurd to think that they are a 'Modern Era', or even Western, concept, as every single land-owning farmer, artisan, banker, and merchant in history was de facto a form of 'corporation'. Moreover, many institutions with origins in antiquity (schools, governments, NGOs) are close enough to the organising principles of corporations that they could be regarded in the same broad social categorisation.
His 18th century definition of 'Corporation' fits perfectly with the notion of 'Petit Bourgeois' of history - minor land-holding families, technical guilds, larger artisanal groups. Do we think Athenian 'Trireme' warships were built by a guy and his assistants? No, these were built by well organised groups with specialisation, paid for their work, aka 'Corporations'.
The most interesting thing about these talks is actually social: what kinds of ideas and memes will rise to the fore among people who are aspirationally antagonistic, like social anarchists (I don't mean that in a negative sense; I mean that would be the closest technical 'ideal' that defines a group like the 'Chaos Computer Club'). Mohawks, anti-establishment statements such as 'ardent atheist', and hints of softly anti-corporate ideals.
"Life-span of corporation is shorter, largely due to predation (totally unsubstantiated), corporation's are cannibals, they eat one another (that's an interesting way to describe mergers)."
"For the first century and a 1/2 they depended entirely on human employees" - no, actually, the Industrial Revolution was literally about harnessing the power of fossil fuels via machines to automate that which would have been done by people (or animals) before. Neither humans nor horses moved those trains.
And then of course the dehumanising of governments and corporations as AI: "What do our AI Overlords want?".
I don't believe there is really any relationship between 'corporations' and 'AI'; it's a neat idea. What we have here is an intelligent, creative guy with an antagonist's worldview, working in a field wherein he's free to make up loose associations and hint at them as if they were facts, and he's subsumed an interesting ideal from CS into his own world view.
I mean, it's great to try ideas that bend our minds a little bit, but I think it's clear he's a writer of fiction.
I should have added 'with a charter' - and then, corporations are literally just that - formalised groups working together with some ostensibly stated purpose.
The legal enfranchisement of such organisations is something contextual to the establishment of broader legal frameworks, which happened later in civilisation (i.e. Modern Era), but even then, one could apply the same ideal to Noble Houses, which derived their 'charters' from various agreements with the Sovereign. Like for example in the Magna Carta.
These 'more fine grained legal articulations' of groups did not change the nature of what such groups were, they just put some boundaries around them.
Merchants were merchants before they were codified as 'Corporations'.
If anything, the modern notion of 'Corporation' actually does not derive very well from such organisations at all! Corporations with 'shares' are a neat legal construct to allow several businessmen to invest in something like a voyage from Venice to somewhere in Asia, whereby the surpluses would be divided upon return, and the corporation would cease to exist. Which is where we derive the term 'Executive Officers' -> of ships!
The 'economic entities' that produced large warships in Antiquity for example, are in every practical sense - corporations - and certainly for Charlie's purposes of comparison to AI, it's exactly the same thing.
There was no aspect to Charlie's 'formalised, legal' definition of a corporation which had anything to do with his AI comparison. He focused entirely on the organisational aspects, which predate the legal definition of corporations.
Aren’t “public benefit” corporations a thing? Where they have both the goal of profit and also some other goal(s) set out in their charter or whatever?
They literally do. Plenty of legal and regulatory constructs apply only to corporations. Limited liability is a thing that is available by forming a corporation, as is carrying out an SEC filing and selling stock. A corporation can also survive, as a legal construct, the deaths of all founding members.
Those constructs are part of the modern, legal features of corporations, but they are not the essential nature of what corporations are.
Remove those features, and things would change a lot, but you'd still have 'corporations' of a kind.
Corporations are also not primarily profit-driven - the owners may be - but corporations themselves 'do things' which result in a lot of externalities and surpluses generated elsewhere; only some of the profits may come back to the shareholders.
Shareholders may very well be the smallest beneficiaries of an endeavour. They have certain rights, but other groups have rights as well: lenders have first rights to the assets, and so do other creditors such as suppliers. Employees have legal rights including collective bargaining.
Buyers may have incredible power over companies such that they suck out all of the profits (see: selling to Apple).
Debtors have all of the power during restructuring.
Many companies exist at the whim of the employees - like big Auto, who pay super high wages and benefits relative to the job. Possibly government employees as well.
Some Execs, by virtue of a weak or allied Board, have all the power and suck out vast profits that would otherwise go to investors.
> Limited liability allows companies to take risks that no sane individual or group of individuals would take on their own.
No, limited liability allows people to use corporations as a tool, which includes misusing them to take risks that should not be taken.
> Limited liability is precisely the factor that lets corporations act as agents instead of extensions of individual or group human goals.
No, limited liability is the factor that lets people use corporations to further goals that benefit them, including goals that are harmful to society as a whole.
It's still people doing things, not corporations as magical agents independent of people. Thinking of a corporation as doing nefarious things as a separate agent from the people running it just distracts attention from the actual problem, which is people who want to do nefarious things misusing whatever tool they can.
It makes an implicit prediction about the then booming housing market and the sky above it.
On that date, the S&P 500 closed at 1,206.58.
Two years later, in 2007, the S&P 500 closed at 1,522.97.
To the extent that stock markets tell us where we stand as a nation or a society, I find this example instructive.
There's such a thing as the long view. It's hard to know how long it should be. Harder to know whether it will be proved out in the end until it is proved or disproved. Even then sometimes we miss the forest for the trees.
Remember that Brexit hasn’t, for practical purposes, happened yet. Most people didn’t anticipate that the UK would request, and be granted, so many extensions, a few years ago.
However, time has now run out; the last realistic opportunity for a further extension passed a couple of weeks ago, and Brexit will now happen in some real sense at the end of the year. It likely won’t be pretty.
And if you follow Charlie's reasoning, that simpler parts can't lead to a highly advanced intelligence, you come to the conclusion that intelligence is derived from a divine soul.
Intelligence isn't as uncommon as we think, but it takes a long time to emerge.
> if you follow Charlie's reasoning, that simpler parts can't lead to a highly advanced intelligence, you come to the conclusion that intelligence is derived from a divine soul.
That's an absolutely ridiculous interpretation. Asserting that our current "simple parts" are still very far from AGI is nowhere even close to asserting that intelligence is immaterial.
It's very, very obvious that there are myriad possible leaps we might need to get closer to intelligence. You can't build an attention network from a perceptron. It's totally reasonable to say that we need a few more fundamental discoveries first. Obvious, even, given the insane amount of processing required to teach a network a language. Brains are clearly wired in a way that is simply more efficient than how we currently know how to build stuff.
> That's an absolutely ridiculous interpretation. Asserting that our current "simple parts" are still very far from AGI is nowhere even close to asserting that intelligence is immaterial.
Not really what it says. Look at the following passage:
> AI singularity as a narrative, and identify the numerous places in the story where the phrase "... and then a miracle happens" occurs, it becomes apparent pretty quickly that they've reinvented Christianity.
The phrase "miracle happens" , to me is suspect. Intelligence rose many times in various creatures. There is nothing miracle about something that rose multiple number of times.
Are Singularists wrong? Yes. They confuse saturation for exponential curves; current neural networks are far cry from actual neuron networks, and their time scales reflect more their fear of death than any sensible timeline.
If Charlie wants to criticize Singularists, there are plenty of valid reasons. Them being cult like is the least important one.
When I say that he's not saying that "intelligence is immaterial" I mean literally that he is not saying intelligence does not arise from material processes.
Equivalently, when he says a miracle happens, it is not the same thing as saying "miraculous intervention is required to endow programs with intelligence". He is saying that current and near-term machine learning techniques are not capable of scaling exponentially self-accelerating, godlike, incomprehensible superintelligence, not that no program ever can reach intelligence.
> There is nothing miraculous about something that arose multiple times.
It is ridiculous to imply he believes that based on his statements, or even just that passage in isolation. That would ignore, and take for granted as true, the many assumptions required for a singularity-like event besides the ability to create artificial intelligence. These include:
1. That our techniques are anywhere near reaching the general intelligence of a human
2. That that intelligence can be capably run on existing hardware, which presumes cognition does not rely significantly on processes happening within neurons, only between them
3. That the intelligence of a human, given the understanding of itself and the ability to modify itself, would be capable of making improvements that would compound to a significant degree; if you presume intelligence takes off exponentially then unless you're already smart it's hard to add much more
4. That intelligence isn't effectively limited by single threaded performance
5. That the ability to think can unlock all the wonders of the universe, and simply being sufficiently smart will allow you to infer the tremendous amounts of hidden state and randomness that dictate life.
If any of those extremely reasonable things fail, the singularity takes millions of years or is simply impossible. Expecting the singularity within the next millennium isn't just faith in all the above; it's faith that all the above is so true that the process happens in less than a decade. It is fanatical.
One of the biggest problems with thought experiments designed as an introduction to an entire class of problems, is that they get misinterpreted as a complete statement of the problem.
Charlie Stross is quite dismissive of paperclip maximisers: Elon Musk "has an obsessive fear" of them, and "isn't paying enough attention" because Tesla does the same thing. He refers to a "pure paperclip maximiser" and discusses the "naive vision of a paperclip maximiser".
This is quite insulting, because the paperclip maximiser is a thought experiment designed to introduce the consequences of intelligences which have fundamentally different values to human beings, in an accessible way. What he's doing is like reading a kids' book on counting and then writing an article contrasting naive apple-based addition with his brilliant new idea of generic fruit-based counting.
>"Science fiction is written by people embedded within a society with expectations and political assumptions that bias us towards looking at the shiny surface of new technologies rather than asking how human beings will use them, and to taking narratives of progress at face value rather than asking what hidden agenda they serve."
And be prepared for the most agenda ridden text you have read in a long time.
Lem was disgusted at contemporary Western science fiction for the exact reason of it being techno-fetishistic. He was in opposition to sci-fi mainstream, not in it.
Not having any agenda is a deeply unhealthy mental state, as it implies complete apathy to all outcomes. It is just that most "normal" agendas are effectively invisible, like a Huffman-encoded default that takes zero bits. Not wanting to be tortured to death is technically an agenda, but it contributes so little information that it only makes sense to state the opposite.