
Do people really think AI will go haywire like in the Hollywood movies?



Not like in Hollywood movies, but yes:

https://www.youtube.com/watch?v=gA1sNLL6yg4

https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ

Look around alignmentforum.org or lesswrong.com, and you'll see loads of people who are concerned, to varying degrees, about what could happen if we suddenly create an AI that's smarter than us.

I've got my own summary in this comment:

https://news.ycombinator.com/item?id=35281504

But this discussion has actually been going on for nearly two decades, and there is a lot to read up on.

ETA: Or, for a fun version:

https://www.decisionproblem.com/paperclips/index2.html


Some do. Personally, I think LLMs will eventually hit a ceiling, well before AGI. Just like self-driving: the last 20% is orders of magnitude more difficult than the first 80%.


I don't think we're close to a situation where they send us into a Matrix. But I can see a scenario where they are connected to more and more running systems of varying degrees of importance to human populations, such as electrical grids, water systems, factories, etc. If they're essentially given executive powers within these systems, I do see huge potential for catastrophic outcomes. And this is way before any actual AGI. A simple "black box" AI does not need to know what it's doing to cause real-world consequences.


I don't think it's about AI going haywire; it's more about how the technology will be used by people for nefarious purposes.


> Do people really think AI will go haywire like in the Hollywood movies?

No, it will just make every inequality even harder to fight. Because a computer algorithm "can't be biased," every decision it makes will be treated as objective. And because it's really hard to know why an AI made a decision, it will be impossible to accuse it of racism, bigotry, or xenophobia. Meanwhile, the rich and powerful will be the ones deciding (through the hands of "developers") what data is used to train AIs.


It won't, because we have the movies and these safety teams. But we shouldn't just hope it turns out right. It's a little like parenting.


Yudkowsky, to begin with.


I don't, for one, but I still think there could be legitimate safety concerns. LLMs are unpredictable, and the potential for misinformation in pitching them as search aggregators is pretty large. Disinformation can have, and previously has had, genuinely dangerous effects.



