The problem is that these bots are extremely good at generating valid-sounding bullshit.
Human-generated bullshit, and the bullshit churned out by earlier generations of spam blogs, used to be relatively easy to spot. These models will confidently give you a perfectly plausible-sounding answer even when it is completely wrong.
I think the biggest lesson to learn from all this is that just because something sounds convincing doesn't mean it's accurate. We should probably bring the same skepticism we apply when talking to machines to talking with people (but that doesn't mean we should abandon good faith).