They reduce the marginal cost of producing plausible content to effectively zero. Combined with other societal and technological shifts, that makes them dangerous to a lot of things: healthy public discourse, a sense of shared reality, people’s jobs, and so on.
But I agree that it’s not at all clear how we get from ChatGPT to the fabled paperclip demon.
The text alone doesn’t do it, but add a generated, nearly perfect “spokesperson” that is uniquely crafted to a person’s own ideals and values, and then have it send you a video message with that marketing.
There are plenty of tools which are dangerous while still requiring a human to decide to use them in harmful ways. Remember, it’s not just bad people who do bad things.
That being said, I think we actually agree that AGI doomsday fears seem massively overblown. I just think the current stuff we have is dangerous already.