>we have a lot more mentally unstable people running around than we like to think we do.
So what do you believe should be the case? That AI in any flexible communicative form be limited to a select number of people who can prove they're of sound enough mind to use it unfiltered?
You see how similar this is to the historical nonsense of restricting the loan or sale of books on certain subjects to people of a certain supposed caliber or authority? Or of banning the production and distribution of movies claimed to be capable of corrupting minds into harmful and immoral acts? How stupid do those historical restrictions look today in any modern society? That's exactly how stupid this harping about the dangers of AI chatbots will look down the road.
Limiting AI because it may or may not cause some people to do irrational things not only smacks of the persistent AI woo on this site, which drastically overstates the power of these stochastic parrot systems; it also forgets that we live in a world full of informational triggers that can push someone toward stupid choices. These include books, movies, and all kinds of other content produced far more effectively, and with greater emotional impact, by entirely human authors.
By claiming a need to regulate the supposed information and discourse dangers of AI chat systems, you're not only serving the cynical fear-mongering of the major AI companies, who would love such a regulatory moat around their overvalued pet projects; you're also tacitly claiming that literature, speech, and other forms of written, spoken, or digitally produced expression should be restricted unless they stick to the banally harmless, by some very vague definition of what harmful content even is.
In sum, fuck that, and fuck the entire chain of implicit, long-practiced censorship, moralizing nannyism, potential speech restriction, and legal overreach that it so bloody obviously entails.