I had this idea the other day concerning the 'AI obfuscation' of knowledge. The discussion was about how AI image generators are designed to empower everyone to contribute to the design process. But I argued that you can only reasonably contribute to the process if you can actually articulate the reasoning behind your contributions. If an AI made it for you, you probably can't, because the reasoning is simply "this is the amalgamation of training data that the AI spat out." And there's a realistic future in which this becomes the norm and we increasingly rely on AI to solve problems we don't understand ourselves.
And, perhaps more worrying, the more widely adopted AI becomes, the harder it becomes to correct its mistakes. Right now millions of people are being fed information they don't understand, much of it incorrect or inaccurate. What is the long-term damage from that?
With LLMs and other AI systems we've obfuscated the source data, and essentially the entire process of learning, and the path this leads down seems pretty clearly a net negative for society (outside of short-term profit for the stakeholders).
I've said it before and I'll warn of it again here: my biggest concern for AI, especially at this stage, is that we abandon understanding in favor of letting the AI generate. Then the AI generates that which we do not understand but must maintain. Then we don't know why we are doing what we are doing; we only know that it makes things work the way we want.
Suddenly, instead of our technology being defined by reason and understanding, it is shrouded in mysticism and ritual. Pretty soon the whole thing devolves into tech people running around in red robes, performing increasingly obtuse rituals to appease "the machine spirit" and praying to the Omnissiah.
If we ever choose to abandon our need for understanding, we will at that point have abandoned our ability to progress.