
Sure! I don't see any reason why it would be impossible either, but the (hypothetical) problems are very interesting. Start with the most basic one: how do we even specify what we want the AI to do? Much of the field of AI safety is about finding ways to write rules that an agent won't immediately try to circumvent, and to provide basic guarantees about the behavior of a system that is incentivized to misbehave (just as corporations are incentivized to find loopholes in the law, hide their misdeeds, and maximize profits at the expense of the common good).
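To make the specification problem concrete, here is a toy sketch in Python. Everything in it is hypothetical and made up for illustration (the action names, outcomes, and reward weights are not from any real system): we write down a proxy objective, and a pure optimizer finds the loophole in it.

    # Toy example: a mis-specified proxy reward, and an optimizer that games it.
    # All names and numbers here are invented for illustration.

    # The designer wants the room cleaned; the proxy only rewards "no dirt observed".
    actions = {
        "clean_room":   {"dirt_observed": 0, "effort": 10, "room_clean": True},
        "do_nothing":   {"dirt_observed": 5, "effort": 0,  "room_clean": False},
        "cover_camera": {"dirt_observed": 0, "effort": 1,  "room_clean": False},
    }

    def proxy_reward(outcome):
        # What we actually wrote down: penalize observed dirt and effort.
        return -outcome["dirt_observed"] - 0.1 * outcome["effort"]

    # A pure optimizer maximizes the written objective, not the intent behind it.
    best = max(actions, key=lambda a: proxy_reward(actions[a]))
    print(best)  # -> "cover_camera": zero observed dirt, minimal effort,
                 #    yet room_clean is still False. The rule is satisfied;
                 #    the goal is circumvented.

And the fix isn't obvious: rewarding room_clean directly just pushes the problem down a level, into however room_clean is measured.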


