
> Technically, it is already easy without AI, in relative terms.

This just feels like an awkward attempt to draw SCADA into the conversation. You're countering your own point: if it's relatively easy today, then that's proof that it's not the difficulty of attacking that's saving us.

> nation states will have orders of magnitude more intelligence power than others

I'm not talking about nation states. That's what you wanted to allude to, right? If they wanted to attack the power grid today, they could. The doomsaying in this comment section is about how "we're giving people robotic weapons in their garages."

> It is easy to imagine an end where everything is already balanced

What I am describing does not rely on balance at all. It relies on an inherent asymmetry: attackers have to solve multiple intermediate problems just to get an LLM pointed at the system, none of which move the needle on their actual goal, while defenders can bring the LLM closer to the system, with a better understanding of it, in order to build mitigations.

In fact, if anything, balance would help the attackers: in a balanced end-game, anyone could get their hands on an unaligned model, or build one with the capabilities we're likely to have at an individual level. But we're hurtling towards the opposite, where unaligned models lag generations behind aligned ones because of commercial interests.




> If it's relatively easy today, and that's proof that it's not difficulty of attacking that's saving us.

It could be perceived that way, but the argument is also that if it becomes easy enough, some actors will participate. It's the same argument used for restricting current LLM capabilities.

> "we're giving people robotic weapons in their garages"

On a long enough timeline this would inevitably be true, if AI plays out as its proponents envision. The question becomes whether AI is an effective counter to all such power advancements. There will likely still be disparity among the AIs individuals have for personal use, unless society becomes more centrally managed by a global AI.

> inherent asymmetry that attackers need to solve multiple problems

Isn't there also an asymmetry in the other direction: attackers only need to find a single exploit, while defenders need to have found every exploit beforehand?

> But we're hurtling towards the opposite, where unaligned models lag generations behind aligned because of commercial interests.

We don't have any "aligned" models. Alignment is an unsolved problem, and models have turned out to be relatively easy to replicate at significantly lower cost than the major commercial investments.



