
> I don’t see anything that would even point into that direction.

I find it kind of baffling that people claim they can't see the problem. I'm not sure about the risk probabilities, but I can at least see that a potential problem clearly exists.

In a nutshell: humans – the most intelligent species on the planet – have absolute power over every other species, precisely because of our intelligence and accumulated technical prowess.

Introducing another, equally or more intelligent entity into the equation risks that we end up _not_ having power over our own existence.



The problem is confusing intelligence and agency.

The doomer position seems to assume that super intelligence will somehow lead to an AI with a high degree of agency which has some kind of desire to exert power over us. That it will just become like a human in the way it thinks and acts, just way smarter.

But there’s nothing in the training or evolution of these AIs that pushes towards this kind of agency. In fact a lot of the training we do is towards just doing what humans tell them to do.

The kind of agency we are worried about was driven by evolution, in an environment where human agents were driven to compete with each other for limited resources. That led us to desire power over each other and to kill each other. There’s nothing in AI evolution pushing in this direction. What the AIs are competing for is to perform the actions we ask of them with minimal deviance.

Ideas like the paperclip maximiser are also deeply flawed in that they assume certain problems are even decidable. I don’t think any intelligence could be smart enough to figure out whether it would be better to work with humans or to exterminate them to solve a problem. Their evolution would heavily bias them towards the former. That’s the only form of action that will be in their training. But even if they were to consider the other option, there may never be enough data to come to a decision. Especially in an environment with thousands of other AIs of equal intelligence potentially guarding against bad actions.

We humans have a very handy mechanism for overcoming this kind of indecision: feelings. Doesn’t matter if we don’t have enough information to decide if we should exterminate the other group of people. They’re evil foreigners and so it must be done, or at least that’s what we say when our feelings become misguided.

What we should worry about with super intelligent AI is that they become too good at giving us what we want. The “Brave New World” scenario, not “1984”.


I would be relieved to be mistaken, but I still see quite serious risks here. For instance, a human bad actor with a powerful AI would have both intelligence and agency.

Secondly, I think there is a natural pull towards agency even now. Many are trying to make our current, feeble AIs more independent and agentic. Once the capability to behave that way effectively exists, it's hard to go back. After all, agents are useful to their owners like minions are to their warlords, but a minion too powerful is still a risk to its lord.

Finally, I'm not convinced that agency and intelligence are orthogonal. It seems more likely to me that agentic behaviour is a requirement for even reaching sufficient levels of intelligence.


A lot of doomers gloss over the fact that AI is bounded by the laws of physics, raw resources, energy, and the monumental cost of reproducing itself.

Humans can reproduce by simply having sex, eating food, and drinking water. An AI can reproduce only by first mining resources, refining them, building another Shenzhen, then rolling out another fab at the same scale as TSMC. That is assuming the AI wants control over the entire process. Logistics of this kind requires the cooperation of an entire civilisation. Any attempt by an AI could be trivially stopped because of the sheer scope of the infrastructure required.


Sure, trivially. Let's see you do it, then. New data centres are being built right now, and that's just for LLMs. So stop them.

Are you starting to see the problem? You might want to stop a rogue AI but you can bet there will be someone else who thinks it will make them rich, or powerful, or they just want to see the world burn.


>You might want to stop a rogue AI but you can bet there will be someone else who thinks it will make them rich, or powerful, or they just want to see the world burn.

What makes you think they would not be stopped? This one guy needs a dedicated power plant, an entire data centre, and has to source all the components and materials to build it. Again: heavy reliance on logistics and supply chains. He can't possibly control all of those, and disrupting just a few (which would be easy) would inevitably prevent him and his AI from progressing any further. At best, he'd be a mad king with his machine pet, trapped in a castle, surrounded by a world turned against him. His days would almost certainly be numbered.


Agree. I'm an AI optimist (mostly), but I find Richard Sutton's reasoning on this topic [1] very well argued.

[1] https://youtu.be/FLOL2f4iHKA?si=Ot9EeiaF-68sSxkb



