> It's trivially obviously possible in theory: take the world expert humans in each of the majority of domains, put them in a room together
That would be a far cry from what AGI paperclip doomists imagine.
> and simulate their combined brain activity.
If simulating their brain activity is the same as what would happen if they were actually in the room, what we would get is probably 80% arguing over what to do, posturing, etc.
It’s not clear to me how we would cause an intelligence to arise that is immune to that while being superior in other ways, yet still sufficiently human to register as intelligence and not, say, some calculation being performed, as computers have done for decades.
> I'd prefer that people treat software in general, of which AI is a subset, with more regard for the ways it can kill people
I absolutely agree that we programmers should generally be more aware of how the “neutral” tech we develop is used not only to benefit but also to hurt people—be it cryptocurrency, E2EE communications, or indeed very much ML—and work towards alleviating that in practical and ethical ways. I just think that in the case of ML the threat is more prosaic. (I could be wrong, though I’d hate to be proven so, of course.)
> That would be a far cry from what AGI paperclip doomists imagine.
It is in excess of what is required for a paperclip scenario, despite the origin of the name; paperclipping is a specific example in the category of "be careful what you wish for, you may get it". It relates to our ignorance about how to specify what we even want to an agent that is more capable than we are (capability can be on any axis; even within intelligence there are many) when that agent does not inherently share our values.
In addition to the examples of smoking and fossil fuels, where it has been demonstrated that the corporations preferred short-term profits over public health, there are also plenty of simply mediocre corporations and governments who manage to endanger public safety (and/or, in the latter case, cause new entries to be added to Wikipedia's list of genocides) without being chock full of domain experts — and although we can sometimes (but not always, e.g. Bhopal/Union Carbide) arrest mere corporate leadership, the west is doing the absolute minimum to prevent Ukraine from falling, out of fear that Russia might escalate.
> If simulating their brain activity is the same as what would happen if they were actually in the room, what we would get is probably 80% arguing over what to do, posturing, etc.
So the actual real humans that some AI would compete against are also easy to defeat, right?
That said, we do also have examples of people working together for common causes even when they would otherwise want to do all those things — that's an existence proof that it's possible for a simulation of the same to also get over this issue.
> No one in this equation is some sort of undefeatable god. Humans have the benefit of quantity and diversity.
So they're defeatable, right?
When it comes to diversity, I'd agree this is a real human advantage: "Once you've met one Borg, you've met them all", memetic monoculture is as bad as any genetic monoculture: https://benwheatley.github.io/blog/2019/12/30-18.46.50.html
Quantity, not so much; even today there are single-digit millions each of the hardware units capable of running the larger models and of robots if you need to embody them, and the growth trend for both is likely to continue for a while yet on all but the shortest hyper-exponential timelines for AGI.
> Yes, and still I believe there would be a lot of posturing and arguing if you get multiple top people in the room.
And despite that, the Hoover dam and the ISS were actually built, the pandemic lockdowns happened, and the vaccine was distributed. The mere existence of arguments and posturing doesn't mean they prohibit cooperation, even if they act as friction against it.