The idea that people will consume infinite quantities of zero-information nonsense and become simultaneously disconnected from base reality and unable to act, yet also be converted into puppets with no will of their own who act only for those who feed them drivel, makes no sense.
AI can't simultaneously drive people into these opposite states. If it could, I could just train an AI to output only "good" information and harvest "good" minds to manipulate for "good" social purposes.
I would make an AI that cultivates communities and never gets tired of the boring parts of organizing.
It's no different from misinformation in the natural environment: is that a tiger (predator) or a bird (prey)? Is that food or poison? Whining about scale is unimaginative.
Your comment rests on a false dilemma: it assumes AI can have only one of two effects on people, either disconnecting them from reality or manipulating them into puppets. In fact, AI can have both effects simultaneously, or to different degrees depending on the context and the individual. For example, someone who consumes a lot of AI-generated misinformation may become less aware of the facts and evidence that contradict their beliefs, yet more susceptible to influence from those who share or reinforce those beliefs. This creates echo chambers and polarization that undermine rational discourse and social cohesion.
Many people are becoming detached from the facts and evidence that shape reality while turning into obedient tools for others. This situation is commonly known as "the Republican Party" by people who don't live in the US. NewsMax and OANN have inflicted far more damage than Fox News ever did by spreading lies and propaganda that undermine democracy and public health.
Furthermore, your idea of creating an AI that outputs only "good" information and cultivates communities for "good" social purposes is naive and dangerous. First, who decides what counts as "good" information and "good" social purposes? Different people hold values and preferences that are incompatible or even contradictory. How would you ensure that your AI respects the diversity and autonomy of human beings without imposing your own agenda or bias? Second, how would you prevent your AI from being corrupted or hacked by malicious actors who want to use it for their own ends? How would you ensure that it does not produce unintended consequences or side effects that harm people or the environment? How would you monitor and regulate its actions and outcomes without compromising its efficiency or effectiveness?
Finally, your comparison of AI misinformation with environmental misinformation is flawed and misleading. Environmental misinformation is based on natural phenomena that have objective reality and can be verified by observation and experimentation. AI misinformation is based on artificial constructs that have no inherent truth value and can be manipulated by design or accident. Whining about scale is not unimaginative; it is realistic and prudent. The scale of AI misinformation can have far-reaching impacts on human cognition, behaviour, emotions, relationships, health, security, democracy, justice, morality and more. Ignoring or downplaying these impacts is irresponsible and reckless.