The fact that a technology can be dangerous doesn’t mean it shouldn’t be built; I believe a respected authority once made that argument about nuclear technology. So what if, in the near future, someone manages to build an AI intelligent enough to pose a real threat to society?
I’m not talking about Skynet yet, but a more realistic scenario in which a company or individual becomes able to use machine learning to hack into most computer, financial, or power-grid systems.
I’m aware that influential companies and individuals have invested in the ethics of machine learning, but how would that help mitigate this kind of unethical use of the technology?
My understanding is that unethical use of nuclear technology is discouraged between countries by deterrence: the threat of retaliation ensures that no side ends up winning. As far as I can tell, though, that logic doesn’t apply to machine learning, does it?