
Excession is literally the next book on my reading list, so I won't click on that yet :)

> With this metaphor you seem to be saying we should, if possible, learn how to control AI? Preferably before anyone endangers their lives due to it?

Yes, but that's a big if. Also, it's something you could never be sure of: you could spend decades thinking alignment is a solved problem, only to be outsmarted by something smarter than you in the end. If we end up conjuring a greater intelligence, there will be a constant risk of a catastrophic event, just like the risk of nuclear armageddon that exists today.

Enjoy! No spoilers from me :)

I agree it's a big "if". For me, simply reducing the risk below that of the status quo is enough to count as a win.

I don't know the current chance of us wiping ourselves out in any given year, but I wouldn't be surprised if it's 1% with current technology; on the basis of that entirely arbitrary round number, an AI takeover that has a 63% chance of killing us all in any given century would be no worse than the status quo.
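
For anyone checking the arithmetic: an assumed 1% annual risk, compounded over 100 independent years, gives 1 - 0.99^100 ≈ 63%. A quick sketch in Python (the 1% figure is the arbitrary assumption above, not anything measured):

    # Compound an assumed 1% annual extinction risk over a century,
    # treating each year as independent.
    p_annual = 0.01
    p_century = 1 - (1 - p_annual) ** 100
    print(f"{p_century:.0%}")  # ~63%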
