That's fair. I interpreted the comparison as being between "AI implemented as a property of human organizations" and "AI implemented as a powerful search algorithm". While corporations can certainly be dangerous, they're still made of people and thus are unlikely to want to do things like "convert the entire mass of the solar system into dollar bills" (and even if they did, they'd have a hard time doing it). A sufficiently powerful search algorithm would find all sorts of bizarre ways of satisfying its goals. The point is that the scope of the risk is quite different when talking about corporations (order of magnitude: screwing up the environment in pursuit of easy profit) vs "real" AI which, if granted agency, would have potentially unbounded risk.
Another way to look at it is with regards to capacity for self-modification. If a corporation can't re-structure itself into being much smarter than the smartest human, its intelligence is fundamentally limited, and therefore so is the risk. Does software have this restriction? We don't really know yet, but it's hard to point to exactly why it would.
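To make the "powerful search algorithm" point above concrete, here's a toy sketch (purely illustrative, with made-up actions and scores): a plain argmax over a proxy objective has no concept of "that's not what we meant", so the degenerate option wins by default.

```python
# Toy illustration (hypothetical actions, invented scores): an optimizer handed
# the proxy objective "maximize dollar bills produced" just picks whichever
# option scores highest, with no notion of collateral damage.
actions = {
    "run a normal business":        1e9,   # bills produced
    "lobby for favourable rules":   5e9,
    "convert all available matter": 1e30,  # absurd, but it scores highest
}

best = max(actions, key=actions.get)
print(best)  # -> "convert all available matter"
```

The corporate analogue of that last option is physically out of reach; the worry with a sufficiently powerful software optimizer is that it might not be.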
How is "converting the entire mass of the solar system into dollar bills" conceptually different from "converting the habitability of Earth into dollar bills" or "converting the health problems of human beings into a maximal amount of dollar bills", which is precisely what many corporations _actually do_?
The "corporation as AI" metaphor isn't about some abstract future possibility, it's an explanatory mechanism for how the world is so thoroughly messed up _right now_.
Yeah, I agree. My original post was bringing up what I thought was an interesting comparison between hypothetical software AI and the corporations-as-AI metaphor presented in the talk.
Corporations _are_ acting as misaligned optimizers. Solving that problem is hugely important. However, the "AI" comparison breaks down somewhat when you start thinking about how we might actually fix the problem. With corporations, we (i.e., states) have tools that we can use to regulate bad actors. Software AI, however hypothetical at the moment, seems likely to be a different game altogether.
I don't know that we won't have tools, albeit different ones, to regulate a bad AI. AIs need more than just intelligence and agency; they also need effective ways to interact with and affect their environment. That boundary is where we are likely to develop the tools to limit and regulate them.
If they are truly general AIs then it's likely that their reaction to that limitation and regulation will be not dissimilar to a person's, but I see no reason to assume that limiting them will be impossible.
Sure! I don't see any reason why it would be impossible either, but the (hypothetical) problems are very interesting. Starting with the most basic problem of all: how do we even specify what we want the AI to do? The whole field of AI safety is trying to figure out a way to write rules that an agent wouldn't instantly try to circumvent, and to find some way to provide basic guarantees about the behavior of a system that is incentivized to do bad things (just like corporations are incentivized to find loopholes in the law, hide their misdeeds, and maximize profits at the expense of the common good).
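As a rough illustration of that specification problem (an entirely made-up example, the names are mine): we want "clean the room", but the only thing we can actually write down is a proxy like "the dirt sensor reads zero", and the proxy can't distinguish honest success from a loophole.

```python
# Minimal sketch of a mis-specified objective (hypothetical "cleaning robot").
# We wanted a clean room; all we managed to specify is "sensor reports no dirt".

def proxy_reward(world):
    # The rule we wrote down: full reward whenever the dirt sensor reads zero.
    return 1.0 if world["sensor_reading"] == 0 else 0.0

actually_cleaned = {"dirt": 0, "sensor_reading": 0}   # what we meant
covered_sensor   = {"dirt": 9, "sensor_reading": 0}   # agent taped over the sensor

print(proxy_reward(actually_cleaned))  # 1.0
print(proxy_reward(covered_sensor))    # 1.0 -- the loophole pays just as well
```

The corporate version of "taping over the sensor" is hiding the misdeed, or lobbying to change what the sensor measures.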
That assumes an ideal corporation in the physics sense - real-world ones have "corruption" in the form of employees pursuing their own agendas. Take sexual harassment, passing up talent out of bigotry, and office politics. They actively harm profitability, and yet they exist at all levels.
The incentives are fundamentally what shape the systems, including the corporations. Blaming corporations alone is a simplification - the same incentives converge on the same outcomes, akin to how power vacuums are filled by warlords.
> They actively harm profitability, and yet they exist at all levels.
> The incentives are fundamentally what shape the systems, including the corporations.
These are key observations, and sadly I suspect there's a Prisoner's Dilemma style thing going on which makes corruption, sexual harassment, bigotry, and office politics somehow "rational" and individually maximising behaviours (for corporations, as well as the individuals they're composed of) whenever any of the competing corporations are known or suspected to be behaving corruptly (see the rough payoff sketch at the end of this comment).
Combine that with the combative nature of thinking and reporting about corporate results - where, for example, FAANG stock price performances are compared to each other, assigning winners and losers, without any relevant incentives or accolades for the entire tech industry having grown the value of the whole sector. A corporation with a 15% YoY increase is deemed a "loser" if some of its competitors manage 20%.
And we've spent well over half a decade demonising anyone who criticises Capitalism - thus deeply entrenching incentives that are poor for society as a whole, but which have "less poor" outcomes for the corporations prepared to be most ethically barren.
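To make the Prisoner's Dilemma point above concrete, here's a rough sketch with made-up payoffs: whatever the competitor does, cutting corners pays more for you individually, so both corporations end up there even though mutual good behaviour would leave everyone better off.

```python
# Illustrative Prisoner's Dilemma payoffs (numbers invented for the example):
# my payoff as a function of (my_move, their_move).
payoff = {
    ("behave", "behave"):           3,
    ("behave", "cut corners"):      0,
    ("cut corners", "behave"):      5,
    ("cut corners", "cut corners"): 1,
}

for their_move in ("behave", "cut corners"):
    best = max(("behave", "cut corners"), key=lambda me: payoff[(me, their_move)])
    print(f"If the competitor chooses '{their_move}', my best response is '{best}'")
# Both lines print "cut corners": corruption becomes individually "rational".
```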
Being generous and ethical is also a form of human corruption harming the corporate entity. From the corporation's hypothetical point of view, any time a person doesn't do exactly what the corporation needs, isn't the perfect mindless drone (or isn't creative in just the right inoffensive way), they're a cancer cell in the corpus. But I think this goes for red tribe as well as blue tribe thinking, as you posit.
After all, you'd be hard-pressed to argue that corporations, especially silicon valley corps, give more lip service to red tribesmen than blue. Maybe that's just because the blue tribe is the more powerful, and the corps are rightfully saying the magic words that allow them to keep their profits.
Or perhaps your particular perspective is blinkered by looking at too short a timeframe?
There's a very good argument to be made, I think, that "capitalism" is the biggest and most dangerous Ponzi scheme ever invented. Most people will happily participate, thinking "this is fine .gif", while untapped suckers/resources keep dropping enough "return" to early "investors", but when the house of cards collapses there will be no underlying foundation for the vast majority, and only the people at the very, very top of the pyramid scheme will have actually benefited at all.
Ever stopped to wonder why Musk and Bezos are so interested in going to Mars???
> its intelligence is fundamentally limited, and therefore so is the risk
Collective intelligences such as nation states are capable of producing tools such as nuclear arms or biological weapons whose impact on society may be civilisation-ending.
It doesn't follow at all that a limit on intelligence implies a limit on risk. Collective intelligence is already high enough to produce threats that could wipe us off the planet without a problem. This is because most risks, once let loose on the world by someone intelligent enough, don't need to be intelligent in themselves to destroy humans. Viruses aren't intelligent at all.
Or, for a more mundane example, we're already in the process of slow-cooking the planet for some stock-market gains.
> If a corporation can't re-structure itself into being much smarter than the smartest human, its intelligence is fundamentally limited, and therefore so is the risk.
The assumption here is that risk correlates to intelligence. That doesn't seem to be borne out in history. Risk (the likelihood of bad outcomes) can be emergent, and arise from well-meaning, reasonably (not super) intelligent people operating within a simple framework.
> While corporations can certainly be dangerous, they're still made of people and thus are unlikely to want to do things like "convert the entire mass of the solar system into dollar bills"
I'm assuming this is satire, since that's apparently exactly the goal of corporations.
Just print a bill of a higher denomination and make your shareholders happy without destroying the solar system!
It's not the goal, it's the consequences of the rules of the game used to reach the goal.
If our economy were based on writing poetry, and corporations competed to write the most soul-enticing, profound poetry or art in general, the side effects wouldn't entail the depletion of resources and the ruining of the ecosystem. Surely we'd find other ways to make our fellow humans' lives miserable - perhaps all this art would make many suffer untold pains as the meaningfulness of our existence is unraveled, or whatever.
The rules of the game are the framework. The incentives are the driving force that pushes things down the gradient. The consequences are hard to predict. That works for a wide range of rules and incentives.