This was exactly my point -- at what inflection point is the AI's decision sound, versus when is it based on bias inherited from its base creation code (the substrate code that 'births' an AI -- WTF do we even call that)? And since an AI is obviously meant to iteratively evolve: at what point is it required to 'check in' its changes, so that if a rollback is ever needed, it can actually be accomplished across that AI's entire reach...
We need a "product recall" method that doesn't involve Blade Runners and campy one-liners...
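The "check in changes" / rollback idea above could look something like this -- a minimal sketch, not any real system. Everything here (the `CheckpointRegistry` class, its methods, the weights-as-dict representation) is hypothetical, just to make the "product recall" mechanism concrete:

```python
import hashlib
import json

class CheckpointRegistry:
    """Hypothetical 'check in changes' log for an iteratively evolving AI.
    Every self-update is recorded so a recall can roll back to any prior version."""

    def __init__(self, base_weights):
        self.history = []  # ordered list of checked-in versions
        self.check_in(base_weights, note="base creation code")

    def check_in(self, weights, note=""):
        # Record a content hash alongside the weights so tampering is detectable.
        version = {
            "id": len(self.history),
            "hash": hashlib.sha256(
                json.dumps(weights, sort_keys=True).encode()
            ).hexdigest(),
            "weights": weights,
            "note": note,
        }
        self.history.append(version)
        return version["id"]

    def rollback(self, version_id):
        # The 'product recall': discard everything after version_id and
        # make that checked-in state current again.
        self.history = self.history[: version_id + 1]
        return self.history[-1]["weights"]

    def current(self):
        return self.history[-1]["weights"]
```

So a recall is just `registry.rollback(0)` -- no Blade Runners required. The hard part the sketch dodges, of course, is enforcing the check-in across everything the AI has touched.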