
What’s surprising to me is that top-level executives think that self-destructing the current leader in LLMs is the way to ensure safety.

Aren’t you simply making space for smaller, more aggressive, less safety-minded competitors to grab a seat on the money train and do whatever they want?

Pandora’s box is already open. You have to guard it, and use your power and influence to make your competitors guard their boxes too.

Self-destructing is the worst way to ensure AI safety.

Isn’t this just basic logic? Even ChatGPT might have been able to point out how stupid this is.

My only explanation is that something deeper happened that we’re not aware of. An us-or-them board fight might explain it. Great, Altman is out. Now what? Did nobody predict this would happen?




> Pandora’s box is already open. You have to guard it, and use your power and influence to make your competitors guard their boxes too.

> Self-destructing is the worst way to ensure AI safety.

If they don't believe you're prepared to destroy the company if that's what it takes, then you have zero power or influence. If they try to "call your bluff", you have to go through with it; otherwise they'll never respect you again.


So the outcome is that you either destroy the company or you're out? Either outcome is the same from your perspective: you chose to no longer have any say in what happens next.


Sure, but you have a say in what happens right then and there.



