Look around alignmentforum.org or lesswrong.com, and you'll see plenty of people who are worried, to varying degrees, about what could happen if we suddenly create an AI that's smarter than us.
Some do. Personally, I think LLMs will eventually hit a ceiling well before AGI. Just like self-driving: the last 20% is orders of magnitude more difficult than the first 80%.
I don't think we're close to a situation where they send us into the Matrix. But I can see a scenario where they are connected to more and more operational systems of varying importance to human populations, such as electrical grids, water systems, and factories. If they're essentially given executive powers within these systems, I do see huge potential for catastrophic outcomes. And this is well before any actual AGI: a simple "black box" AI doesn't need to know what it's doing to cause real-world consequences.
> Do people really think AI will go haywire like in the Hollywood movies?
No, it will just make every inequality even harder to fight. Because a computer algorithm supposedly can't be biased, every decision it makes will be treated as objective. And because it's really hard to know why an AI made a given decision, it will be impossible to accuse it of racism, bigotry, or xenophobia. Meanwhile, the rich and powerful will be the ones deciding (through the hands of "developers") what data gets used to train the AIs.
I for one don't, but I still think there could be legitimate safety concerns. LLMs are unpredictable, and the potential for misinformation when pitching them as search aggregators is pretty large. Disinformation can have, and has previously had, genuinely dangerous effects.