
How could a code completion tool be made safe?

One natural response seems to be “it should write bug-free code”. That is the domain of formal verification, and deciding whether arbitrary code is bug-free is undecidable in general: it reduces to the halting problem. So in this formulation, safe AI is mathematically impossible.
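
To make the reduction concrete, here is a minimal sketch. It assumes we were handed a hypothetical perfect verifier (is_bug_free below is made up and cannot actually exist); such a verifier would let us decide the halting problem, which Turing proved impossible.

    # Sketch only: `is_bug_free` is a hypothetical perfect verifier.
    def halts(is_bug_free, program_source: str) -> bool:
        # Wrap the program so that a "bug" (a failed assert) occurs
        # exactly when the program finishes running.
        wrapper = (
            "exec(" + repr(program_source) + ")  # run the program\n"
            "assert False  # reachable iff the program halts\n"
        )
        # The wrapper is bug-free exactly when the program never
        # halts, so the verifier's answer decides halting.
        return not is_bug_free(wrapper)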

Should it instead refuse to complete code that could be used to harm humans? So it should read the codebase to determine whether this is, say, a military application? Pretty sure mainstream discourse is not ruling out military applications.
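
For illustration only, the most naive version of that refusal check might look like the sketch below: a keyword scan over the repository. Every name in it (the keyword list, the model_complete call) is hypothetical, and how trivially it can be fooled, e.g. by renaming identifiers, is part of the point.

    # Illustrative only; hypothetical names throughout.
    from pathlib import Path

    MILITARY_HINTS = {"missile", "targeting", "fire_control"}

    def looks_military(repo_root: str) -> bool:
        # Flag the repo if any file mentions a suspect keyword.
        for path in Path(repo_root).rglob("*"):
            if path.is_file():
                text = path.read_text(errors="ignore").lower()
                if any(hint in text for hint in MILITARY_HINTS):
                    return True
        return False

    def complete(prompt: str, repo_root: str) -> str:
        if looks_military(repo_root):
            raise RuntimeError("completion refused")  # defeated by a rename
        return model_complete(prompt)  # hypothetical underlying model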



