You're presuming that there will be only a single origin. I think that's false. Even if "we" build AGI "in the safest and most ethical way possible", there will be a "they" who won't.
If the technology creates a rapid positive feedback loop, singularity-style, then I might agree. If not, though, it won't matter as much, because there's less to fear from an AI that can't rapidly improve itself.