
I'm not sure this will ever be solved. It requires both a technical solution and social consensus. I don't see consensus on "alignment" happening any time soon. I think it'll boil down to "aligned with the goals of the nation-state", and lots of nation-states have incompatible goals.



I agree unfortunately. I might be a bit of an extremist on this issue. I genuinely think that building agentic ASI is suicidally stupid and we just shouldn’t do it. All the utopian visions we hear from the optimists describe unstable outcomes. A world populated by super-intelligent agents will be incredibly dangerous even if it appears initially to have gone well. We’ll have built a paradise in which we can never relax.


What's the difference between your "agentic AIs" and, say, "script kiddies" or "expert anarchist/black-hat hackers"?

It's been obvious for a while that the narrow-waist APIs between things matter, and it's apparent that agentic AI is leaning into adaptive API consumption. But I don't see how that gives the agentic client some super-power we don't already need to defend against: long before AGI, we've had HGI (human general intelligence) motivated to "do bad things" to/through those APIs, both self-interested and nation-state sponsored.

We're seeing more corporate investment in this interplay, trending us towards Snow Crash. But "all you have to do" is have some "I" in API be "dual key, human in the loop", so that even when AGI/HGI "presses the red button" in the Oval Office, nuclear war still doesn't happen, WarGames or Crimson Tide style.

I'm not saying dual key is the answer to everything. I'm saying that defenses against adversaries already matter, and will continue to. We have developed concepts like air gaps and modality changes, and we need more, but thinking in terms of interfaces (APIs) in the general sense rather than the literal one gives a rich territory for guardrails and safeguards. A sketch of the dual-key idea follows.
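
To make the "dual key" idea concrete: here is a minimal Python sketch of a gate that refuses a sensitive action until two distinct humans have approved it. The DualKeyGate class, the operator IDs, and the approval flow are all hypothetical illustration, not any real system's API:

    # Hypothetical sketch of a "dual key" API gate: a sensitive action
    # only executes after two distinct human approvers sign off.
    from dataclasses import dataclass, field

    @dataclass
    class DualKeyGate:
        required_approvals: int = 2
        approvers: set = field(default_factory=set)

        def approve(self, human_id: str) -> None:
            # Each approval must come from a distinct human operator;
            # a set makes duplicate approvals from one person a no-op.
            self.approvers.add(human_id)

        def execute(self, action) -> bool:
            # Refuse to act, no matter who (or what) requested it,
            # until enough distinct humans have approved.
            if len(self.approvers) < self.required_approvals:
                return False
            action()
            self.approvers.clear()  # approvals are single-use
            return True

    gate = DualKeyGate()
    gate.approve("operator_alice")
    gate.approve("operator_bob")
    gate.execute(lambda: print("action authorized"))

The design point is just the two-person rule: no single requester, human or AI, can trigger the action alone, and approvals are consumed on use so they can't be replayed.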


> What's the difference between your "agentic AIs" and, say, "script kiddies" or "expert anarchist/black-hat hackers"?

Intelligence. I'm talking about super-intelligence. If you want to know what it feels like to be intellectually outclassed by a machine, download the latest Go engine and have fun losing again and again while not understanding why. Now imagine an ASI that isn't confined to the Go board, but operating out in the world. It's doing things you don't like at speeds you can scarcely comprehend and there's not a thing you can do about it.


But the world is not a game where you "win" by intelligence; very far from it. Just look at who is currently in the White House.


> Now imagine an ASI that isn't confined to the Go board, but operating out in the world.

I don't think it's reasonable at all to look at a system's capability in games with perfect and easily-ingested information and extrapolate about its future capabilities interacting with the real world. What makes you confident that these problem domains are compatible?


That’s not what I was saying at all. I was using Go as an example of what the experience of being helplessly outclassed by a superior intelligence is like: you are losing and you don’t know why and there’s nothing you can do.


I completely agree with you. Chess/Go/Poker have shown that these systems can become so advanced that it becomes impossible for a human to understand why the AI chose a move.

Talk to the best chess players in the world and they'll tell you flat out they can't begin to understand some of the engine's moves.

It won't be any different with ASI. It will do things for reasons we are incapable of understanding. Some of those things will certainly be harmful to humans.


> What's the difference between your "agentic AIs" and, say, "script kiddies" or "expert anarchist/black-hat hackers"?

The difference is that a highly intelligent human adversary is still limited by human constraints. The smartest and most dangerous human adversary is still one we can understand and keep up with. AI is a different ball game. It's more similar to the difference in intelligence between a human and a dog.


> we just shouldn’t do it.

I think what Accelerationism gets right is that capitalism is just doing it - autonomizing itself - and that our agency is very limited, especially given the arms race dynamics and the rise of decentralized blockchain infrastructure.

As Nick Land puts it, in his characteristically detached style, in A Quick-and-Dirty Introduction to Accelerationism:

"As blockchains, drone logistics, nanotechnology, quantum computing, computational genomics, and virtual reality flood in, drenched in ever-higher densities of artificial intelligence, accelerationism won't be going anywhere, unless ever deeper into itself. To be rushed by the phenomenon, to the point of terminal institutional paralysis, is the phenomenon. Naturally — which is to say completely inevitably — the human species will define this ultimate terrestrial event as a problem. To see it is already to say: We have to do something. To which accelerationism can only respond: You're finally saying that now? Perhaps we ought to get started? In its colder variants, which are those that win out, it tends to laugh." [0]

[0] https://retrochronic.com/#a-quick-and-dirty-introduction-to-...





