> Superintelligence will also be bound by fundamental information-theoretic limits
…is mainly why this has not been worrying me much. All issues with the modern incarnation of generative ML aside, AGI doomism really does strike me as a profitable, deity-worshipping death cult.
The biggest threat in the equation will always be humans who deploy tech, not the tech itself.
Saying that humans can be a threat and AI can't seems to me to be a claim that humans are exempt from information-theoretic limits.
I see general-AI (given that robots already exist) as having the potential to do all that humans can do, including being harmful. Like most, I also model superhuman-AI in a trivial way, as being as capable as the best human in every field — Einstein's grasp of physics, Sun Tzu's of military strategy, Churchill's of rhetoric, etc.
> The biggest threat in the equation will always be humans who deploy tech, not the tech itself.
Naturally; but does that include humans who unwittingly set the AI's utility function with a bug in it and press "run", in a way that ends with us saying "the situation has developed not necessarily to [Humanity's] advantage"?
Any tech can be harmful, cf. cryptocurrency or nuclear bombs. My comment mostly referred to taking it to such an extreme with a particular tech. Some people also tend, while demonizing (the yet to be confirmed to be possible even in theory) AGI and its capabilities, to completely sideline the humans deploying it, as if they would be guilt-free.
> the yet to be confirmed to be possible even in theory
It's trivially obviously possible in theory: take the world's top human experts in the majority of domains, put them in a room together, and simulate their combined brain activity. It's all ultimately physics, so it's possible.
The hard part isn't the theory, but actually doing that — and not just because we don't know how to scan living brains to measure synaptic weights. There's a lot we don't know about how minds work in general or how our brains work in particular.
But purely in theory, that method ultimately boils down to known physics.
You can assign guilt and blame as you see fit. In this kind of scenario, the human who makes the fatal error, whom many might call responsible, would likely be going "but it works on my machine!" as they die from some error that may or may not be obvious even in hindsight.
I'd prefer that people treat software in general, of which AI is a subset, with more regard for the ways it can kill people — even back 20 years ago when I did my degree, we had case studies from Therac-25 and the London Ambulance Service: https://en.wikipedia.org/wiki/LASCAD
Even back then our profession knew we needed to be held to higher standards, but didn't want to do what that entailed.
> It's trivially obviously possible in theory: take the world's top human experts in the majority of domains, put them in a room together
That would be a far cry from what AGI paperclip doomists imagine.
> and simulate their combined brain activity.
If simulating their brain activity is the same as what would happen if they were actually in the room, what we would get is probably 80% arguing over what to do, posturing, etc.
It’s not clear to me how we would cause some intelligence to arise that would be immune to that while being superior in other ways, yet still be sufficiently human to register as intelligence and not, say, as some calculation being performed, the way computers have done for decades.
> I'd prefer that people treat software in general, of which AI is a subset, with more regard for the ways it can kill people
I absolutely agree that we programmers should generally be more aware of how the “neutral” tech we develop is used not only to benefit but also to hurt people—be it cryptocurrency, E2EE communications, or indeed very much ML—and work towards alleviating that in practical and ethical ways. I just think that in the case of ML the threat is more prosaic. (I could be wrong, though I’d hate to be proven so, of course.)
> That would be a far cry from what AGI paperclip doomists imagine.
It is in excess of what is required for a paperclip scenario, despite the origin. Paperclipping is a specific example in the category of "be careful what you wish for, you may get it": it stems from our ignorance about how to specify what we even want to some agent that is more capable than we are (capability can be on any axis; even within intelligence there are many) when that agent does not inherently share our values.
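Purely as a toy illustration (invented names and numbers, not a claim about any real system), this is roughly what a "bug in the utility function" looks like once an optimizer takes the shipped objective literally: the deployed reward omits a term the designers cared about, and even a dumb greedy agent then optimizes it away.

```python
# Toy sketch of objective mis-specification: the deployed reward
# forgets the human_welfare term, so the agent converts everything.
from dataclasses import dataclass

@dataclass
class World:
    feedstock: float = 100.0      # raw material humans also rely on
    paperclips: float = 0.0
    human_welfare: float = 100.0  # drops as feedstock is consumed

def intended_objective(w: World) -> float:
    # What the designers *meant*: paperclips, but not at any cost.
    return w.paperclips + 10.0 * w.human_welfare

def deployed_objective(w: World) -> float:
    # What actually shipped: the welfare term was left out (the "bug").
    return w.paperclips

def step(w: World, convert: float) -> World:
    convert = min(convert, w.feedstock)
    return World(
        feedstock=w.feedstock - convert,
        paperclips=w.paperclips + convert,
        human_welfare=max(0.0, w.human_welfare - convert),  # side effect
    )

def greedy_agent(w: World, objective, steps: int = 10) -> World:
    for _ in range(steps):
        # Pick whichever action (convert 0 or 10 units) scores higher.
        candidates = [step(w, a) for a in (0.0, 10.0)]
        w = max(candidates, key=objective)
    return w

if __name__ == "__main__":
    final = greedy_agent(World(), deployed_objective)
    print(final)                                        # welfare driven to 0
    print("deployed score:", deployed_objective(final))
    print("intended score:", intended_objective(final)) # far below doing nothing
```

The agent isn't clever or malicious; the missing welfare term simply doesn't exist anywhere in its loop, so nothing pushes back against destroying it.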
In addition to the examples of smoking and fossil fuels, where it has been demonstrated that the corporations preferred short-term profits over public health, there are also plenty of simply mediocre corporations and governments who manage to endanger public safety (and/or, in the latter case, cause new entries to be added to Wikipedia's list of genocides) without being chock full of domain experts. And although we can sometimes (but not always, e.g. Bhopal/Union Carbide) arrest mere corporate leadership, the West is doing the absolute minimum to prevent Ukraine from falling, out of a fear that Russia might escalate.
> If simulating their brain activity is the same as what would happen if they were actually in the room, what we would get is probably 80% arguing over what to do, posturing, etc.
So the actual real humans that some AI would compete against are also easy to defeat, right?
That said, we do also have examples of people working together for common causes even when they would otherwise want to do all those things — that's an existence proof that it's possible for a simulation of the same to also get over this issue.
> No one in this equation is some sort of undefeatable god. Humans have the benefit of quantity and diversity.
So they're defeatable, right?
When it comes to diversity, I'd agree this is a real human advantage: "Once you've met one Borg, you've met them all"; a memetic monoculture is as bad as any genetic monoculture: https://benwheatley.github.io/blog/2019/12/30-18.46.50.html
Quantity, not so much: even today there are only single-digit millions each of the hardware units able to run the larger models and of the robots needed to embody them, and the growth trend for both is likely to continue for a while yet before AGI arrives on all but the shortest hyper-exponential timelines.
> Yes, and still I believe there would be a lot of posturing and arguing if you get multiple top people in the room.
And despite that, the Hoover Dam and the ISS were actually built, the pandemic lockdowns happened, and the vaccine was distributed. The mere existence of arguments and posturing doesn't mean they prohibit cooperation, even if they create resistance to it.