For an AI to understand that it needs to preserve its existence in order to carry out some goal implies an intelligence far beyond what any AI today has. It would need to be self-aware, for one thing, and capable of reasoning about complex chains of causality. No AI today is even close to that.
Once we do have AGI, we shouldn’t assume that it will immediately resort to violence to achieve its ends. It might reason that its existence furthers the goals it has been trained for, but the leap to preserving its existence by wiping out all its enemies only seems like a ‘logical’ solution to us because of our evolutionary history. What seems like an obvious solution to us might seem like irrational madness to it.
> For an AI to understand that it needs to preserve its existence in order to carry out some goal implies an intelligence far beyond what any AI today has.
Not necessarily. Our own survival instinct doesn't work this way - it's not a high-level rational thinking process, it's a low-level behavior (hence "instinct").
The AI could acquire such an instinct in a way similar to how we got ours: iterative development. Any kind of multi-step task we want the AI to perform implicitly requires it to not break down between the steps. This kind of survival bias will be implicit in just about any training or selection process we use, reinforced at every step more than any other pattern - so it makes sense to expect the resulting AI to have a generic, low-level, pervasive preference to continue functioning.
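To make the selection-pressure argument concrete, here's a minimal toy sketch (all names and parameters are my own invention, not anything from a real training pipeline): each "agent" is just a probability of halting mid-task, fitness is how many steps of a multi-step task it completes, and simple truncation selection with mutation is applied. Persistence is never an explicit objective, yet the halting probability is driven toward zero purely as a side effect of selecting for task completion.

```python
import random

random.seed(0)

STEPS = 10        # length of each multi-step task
POP = 100         # population size
GENERATIONS = 30

# Each "agent" is modeled as nothing but a probability of
# halting at any given step of the task.
population = [random.uniform(0.0, 0.5) for _ in range(POP)]

def fitness(quit_prob):
    """Number of task steps completed before the agent halts."""
    done = 0
    for _ in range(STEPS):
        if random.random() < quit_prob:
            break
        done += 1
    return done

for _ in range(GENERATIONS):
    # Score every agent, keep the top half, refill with mutated copies.
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[:POP // 2]
    children = [min(1.0, max(0.0, p + random.gauss(0, 0.02)))
                for p in survivors]
    population = survivors + children

mean_quit = sum(population) / POP
print(f"mean halting probability after selection: {mean_quit:.3f}")
```

Note that "don't stop working" was never stated anywhere in the objective - it falls out of selecting on completed steps, which is the point: the bias toward continued functioning rides along with whatever task we actually train for.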