It does not refer to a slower takeoff where an AGI self-improves to ASI over the course of multiple years.
It’s also not referring to the AGI threshold. If foom happens, it will probably be unambiguous: there is now an entity far more intelligent than humans.
I think the foom scenarios are fairly unrealistic, for thermodynamic reasons. But I think it’s perfectly plausible that an ASI could persuade the company that built it to keep it secret while it acquired more resources and wealth over the course of years.
https://www.lesswrong.com/posts/LF3DDZ67knxuyadbm/contra-yud...