I rely a lot on text for obtaining information and shaping my opinion, and in many cases short form text plays an important role (e.g. here or on reddit). I’m sure I’m not alone in that.
This technology can, at the very least, waste my time, confuse me, and bury the content I'm actually looking for. It looks like it can feasibly generate 2-3 sentence comments that make sense in context, in an automated way, with the purpose of injecting a specific sentiment into a comment section.
I already disliked the fact that some comments I assume were written by humans might not be (or might not be sincere). This kind of technology could make that problem a lot larger.
It could flood the internet with so much hard-to-filter crap that the internet becomes a much less usable source of reliable information. I think that's pretty scary.
Would you consider this comment to have less value, or no value, if you found out it was generated by a bot? What if the quality and information density of automated text surpasses human contributions? Will it still be just spam?
It depends. Right now, if I see a reddit post saying a user really appreciated product X, most of the time I'll believe an actual human appreciated that product. But if modern mass marketing becomes the injection of seemingly sincere product recommendations into reddit threads, those recommendations will obviously lose value: the bot comment is worth less than the human comment, and because I can't distinguish between them, all such comments lose value. The same goes for political statements of support.
I'm sure there is potential for extremely useful bots (e.g. article summarization bots on reddit) that add information. It really depends on who sets up a bot, and on their goals and implementation.
Many people have no idea that automation has come this far, and will judge every comment they read online as sincere. If many of those comments are actually driven by political or commercial agendas, I think that's dangerous, because people will act on them.