> Most of the AI safety arguments revolve around “once it’s powerful enough to cause damage it’ll be too late to come up with a strategy”

I don't see any reason to accept this argument. The AI safety people should also prove their assertions, not expect us to take them at face value.

How do you prove a counterfactual?