> If a corporation can't re-structure itself into being much smarter than the smartest human, its intelligence is fundamentally limited, and therefore so is the risk.
The assumption here is that risk correlates with intelligence. That doesn't seem to be borne out by history. Risk (the likelihood of bad outcomes) can be emergent, arising from well-meaning, reasonably (but not super-) intelligent people operating within a simple framework. The 2008 financial crisis is one example: no individual actor was superintelligent, yet catastrophic risk emerged from the system as a whole.