You only get one chance to align a newly created superintelligence before Pandora's box is opened. You can't put it back in the box, and there may be no chance to learn from mistakes. With a technology this powerful, it's never too early to research and prepare for the potential existential risk. You may scoff at the "paperclips" meme, but it illustrates a legitimate issue.
Now, a reasonable counterargument might be that this risk justifies only a limited amount of attention and concern relative to the other problems and risks we face. That said, the problem and the risk are real, and there may be no takebacks. Preparing for tail risks is what humans are worst at. I submit that all caution is warranted, for both economic uncertainty and "paperclips."