I think the concern is that if “fast takeoff” happens and “meh”-level AIs are able to create better AIs that can create better AIs, by that point it will be too late to put any sort of safety controls in place. And given how bad people have been at predicting the pace of AI advancement (think about how many people expected AlphaGo to lose to Lee Sedol, for instance), unless we start thinking about it well before it’s needed, we likely won’t figure it out in time.
Like, personally, I don’t think we’re close to AGI. But I’d bet that the year before AGI happens (whenever that is) most people will still think it’s a decade out.
We've talked about software security for decades now and how important it is, and we still shovel shitty, insecure software out to the masses. Hell, we can't even move to safer languages for most applications.
I have no hope or faith in humanity for something more complex.