The current large language model neural networks have successfully exploited a zero-day on human cognition. Their output closely resembles high-quality writing, and so our psyche treats it as high quality too. (See Kahneman's Thinking, Fast and Slow for more on this.) This is extremely dangerous because there is little to no actual information in it -- something that can sometimes be revealed by negating the prompt and getting the same answer. I can't find the article investigating this right now, but there was one. This shouldn't be surprising, since all the system is capable of is producing something that sounds like the answer. Yes, sometimes that happens to be the answer, but unless you already know how to evaluate it, there's no way to tell.
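Roughly the test I mean, as a minimal sketch (ask_llm is a hypothetical placeholder for whatever model call you happen to use, and the claim is just an example):

    # Sketch of the "negate the prompt" check described above.
    # ask_llm is a hypothetical stand-in for your LLM provider's API,
    # not a real library function.

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("wire this up to the model you want to test")

    def negation_check(claim: str) -> tuple[str, str]:
        """Ask the model to argue a claim and then its negation.

        If both answers come back equally fluent and confident, the
        fluency tells you nothing about which one is actually true.
        """
        pro = ask_llm(f"Explain why the following is true: {claim}")
        con = ask_llm(f"Explain why the following is false: {claim}")
        return pro, con

    if __name__ == "__main__":
        pro, con = negation_check("Vitamin C prevents the common cold")
        print("PRO:\n" + pro + "\n\nCON:\n" + con)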
The biggest danger lies in the future. According to https://www.npr.org/sections/health-shots/2022/05/13/1098071... some 300,000 COVID deaths could have been avoided if all adults had gotten vaccinated, and anti-vax propaganda undoubtedly played a role in the low vaccination rate. We know that just twelve people were behind most of that propaganda (https://www.npr.org/2021/05/13/996570855/disinformation-doze...). Now imagine the next pandemic, when this misinformation will be produced on an industrial scale using these LLMs. Literally millions will die because of it.
Because then we will have even more economic growth based on providing people with things they don't need, and even more advanced AI that will make human interaction even more unnatural.