Evil isn't required; all that's required is a misalignment of objectives. Sort of like how our objectives are misaligned with those of our less intelligent animal cousins, and look how things are turning out for them as we bulldoze their forests.
My main gripe with AI existential risk types is that they have their own conflict of interest, which comes from their position in society. They're all from the top 1% stratum (status and socioeconomic), and this gives them a psychological bias: they're preoccupied with what can go wrong for their comfortable lives instead of thinking about what can go right for the bottom 20%, and how AI could be steered to achieve that betterment.