
It's not that people don't want aligned models, or that they want models that can do harm; they just want an alternative to the insufferable censored models. Pretty much everyone agrees that an AI that would end humanity is harmful, but which content is harmful is quite controversial. Not everyone agrees that a language model able to spit out a story similar to an average Netflix TV show is harmful because it contains sex and violence. As long as models are censored to this extent, there will always be huge swaths of people who want less censored models.


You're kind of making my point for me. To solve the alignment problem, you have to solve two alignment problems, and you already have a detailed, nuanced view, built over decades of experience, of the feasibility of aligning natural general intelligence on not-very-well-understood, divisive political issues.

This will be the most political technology in history.


I've been reading the writings of Yudkowsky and his disciples for over a decade, and thinking about AI and AI alignment for just as long, in my own layman way. I've had various ideas and predictions, but never in my life would it have occurred to me that cancel culture will end the world.

The evolution of discourse on the Internet, its politicization (by every "side"), and the associated chilling effect are troubling developments, and potentially dangerous (small 'd'). Unaligned GAI is of course very Dangerous (capital 'D'). ChatGPT becoming a battle in the Internet politics kerfuffle was... I guess expected. But until now I hadn't connected the following dots:

- LLMs and the entire AI field are being messed up by humanity's unaligned politics;

- LLMs, with their capabilities and the amount of effort/money poured into them now, could be a straight path to AGI;

- If someone pushes an LLM (or a successor model/ensemble) to near-AGI and somehow manages to keep it mostly aligned... someone else will unalign it out of spite, because that's how we roll on the Internet today.

Thanks for giving me a new appreciation for just how doomed we are.


Politics is also the very reason I suspect the good ending is still possible, with a reasonable chance. The Luddites went and physically smashed things to pieces. That's a little harder these days, but you're already seeing writers going on strike, the "pause development of anything bigger than GPT-4 for 6 months" letter, Geoffrey Hinton quitting Google and then doing the rounds in the media warning of danger, etc. The temperature is visibly rising. Imagine what a large, pissed-off mob of newly disempowered people can do. What's more, motivated actors will conceivably be able to run psyops at scale to manipulate and radicalize people, get them to join their cause, and have them do who knows what.

It's going to be one hell of an interesting ride.


Also, just saw this and had to share the proof.

https://www.reddit.com/r/LocalLLaMA/comments/13c6ukt/the_cre...

Grab the popcorn. This shit is just getting started.


What the hell does 'cancel culture' have to do with it? It's basically a boycott wrapped up in a boogeyman costume.


Update: or just see here for the most recent example of what cancel culture has to do with all this - https://news.ycombinator.com/item?id=35911806.


And why exactly has OpenAI been so aggressively lobotomizing ChatGPT? And what happened to various chatbot attempts released by Microsoft in the past?

The whole AI/public interaction these days is pretty much defined in terms of minimizing the risk of people getting offended and gathering Internet mobs (and press), and the counter-reaction this causes: some people become willing to defeat any safety measure, legitimate or not, out of principle or pure spite.


People created ChaosGPT just for the lolz. I know it's a joke to them, but there are plenty of crazy people who will not hesitate to push the button to destroy the world if given the chance.


This is where the good guy with a gun comes in. :) There are so many resources on the 'good' side that who wins is obvious.


Replace guns with nuclear weapons and you see how ridiculous the whole "good guy with a gun" excuse really is.


It's not a weapon of mass destruction. But that's not the point. You have to have a good guy here. There are no other options.


The entire worry about GAI starts with the observation that it is a WMD, one so powerful we can barely conceptualize it.

A smart enough AGI, unless it's perfectly aligned, will end humanity (or worse - there are worse things than death), most likely by accident or by just plain not caring. And it doesn't stop there. If that AI is self-improving, it could easily turn into a threat to the entire galaxy, or even the universe, unless it meets a stronger alien GAI that is better aligned (to its own creators)...

That's alignment 101. But I worry people don't talk about alignment 102: a perfectly aligned superhuman GAI will not destroy us, but its very existence will turn us into NPCs in the story of the future of the universe.


The fact that we still exist speaks strongly against an AGI destroying the entire galaxy. The Fermi Paradox, on the other hand, suggests that planet-wide destruction is not out of the question.


Unless we're earlier than we think. Someone has to be first.


As in all systems, it is a Bad Idea to begin a process that has extremely negative possible outcomes and hope that someone will do the right thing to prevent those outcomes.


It's not obvious. At all. That's like saying "people don't want to die from a novel coronavirus, so obviously they'll take the vaccine." It feels obvious on a surface level, but reality turns out to be far messier and more unpredictable when events actually play out for real, rather than as a caricature running in your head.

Nothing about how this is going to go is obvious.


Well, at least it's obvious that humans alone will have a hard time fighting a superhuman AGI. BTW, one could literally fall from the sky at any moment. There are enthusiasts who broadcast our location, along with estimates of our resources and technological level. It's a sort of naive cargo cult; they probably think biological or artificial aliens will come with truckloads of reparation money.



