One natural response seems to be "it should write bug-free code". This is the domain of formal verification, and deciding any nontrivial semantic property of arbitrary programs is undecidable in general (Rice's theorem). So in this formulation, safe AI is mathematically impossible.
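To make the undecidability intuition concrete, here is a toy diagonalization in Python, not a proof: for *any* total "bug checker" you hand me, I can construct a program that consults the checker on itself and does the opposite. The names (`adversary`, `optimist`, `pessimist`) are illustrative, and the checker here receives a function object rather than source text, which sidesteps the quoting machinery a real proof needs.

```python
def adversary(checker):
    """Given any total bug-checker, build a program that fools it.

    The returned program asks the checker about itself:
    - if the checker says "bug-free", it deliberately raises (a bug);
    - if the checker says "buggy", it does nothing (bug-free).
    Either way the checker's verdict is wrong.
    """
    def program():
        if checker(program):
            raise RuntimeError("bug")  # reached only when declared bug-free
    return program

# A checker that declares everything bug-free is fooled:
optimist = lambda prog: True
p = adversary(optimist)
# optimist(p) is True, yet p() raises RuntimeError.

# A checker that declares everything buggy is also fooled:
pessimist = lambda prog: False
q = adversary(pessimist)
# pessimist(q) is False, yet q() runs cleanly.
```

No cleverness in the checker escapes this construction, which is the heart of why "only ever write bug-free code" cannot be a decidable requirement.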
Should it instead refuse to complete code that could be used to harm humans? Then it would have to read the codebase and decide whether this is, say, a military application. And mainstream discourse is certainly not ruling out military applications.