In the end game, a "non-safe" superintelligence seems easier to create, so like any other technology, some people will create it (even if only because they can't make it safe). And in a world with multiple superintelligent agents, how can the safe ones "win"? It seems like a safe AI is at an inherent disadvantage for survival.
The current intelligences of the world (us) have organized their civilization in such a way that conforming members of society are the norm and criminals are the outcasts. Certainly not a perfect system, but something along those lines for the most part.
I disagree that civilization is organized along the lines of conformity versus criminality. Rather, I would argue that the current intelligences of the world have primarily organized civilization in such a way that a small percentage of its members control the vast majority of all human resources, while the bottom 50% control almost nothing [0].
I would hope that AGI would prioritize humanity itself, but since it's likely to be created and/or controlled by a subset of that same very small percentage of humans, I'm not hopeful.
That suggests that there are scenarios under which we survive. I'm not sure we'd like any of them, though "benign neglect" might be the best of a bad lot.