Hacker News

Things that require understanding of causation will be safe longer. Progress like this is driven by massive datasets. Meanwhile, real world action-taking applications require different paradigms to take causation into account[0][1], and especially to learn safely (e.g. learning to drive without crashing during the beginner stages).

There's certainly research happening around this, and RL in games is a great test bed, but people who choose actions will be safe from automation longer than people who don't, if that makes sense. It's the person who decides "hire this person" vs. the person who decides "I'll use this particular shade of gray."

[0] The best example is when X causes both Y and Z, but your data only includes Y and Z. Without actually manipulating Y, you can't discover that Y doesn't cause Z, even though Y is a strong predictor of Z.
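A minimal sketch of that confounder structure (toy numbers, stdlib only): X is hidden, Y and Z are both driven by X, so the observational correlation between Y and Z is near 1, yet intervening on Y (a do(Y=y) operation, where Z's true mechanism ignores Y) leaves Z unchanged.

```python
import random

random.seed(0)

# Confounder: X causes both Y and Z; Y does NOT cause Z.
# The "dataset" records only Y and Z, as in the footnote.
def observe():
    x = random.gauss(0, 1)          # hidden common cause X
    y = x + random.gauss(0, 0.1)    # X -> Y
    z = x + random.gauss(0, 0.1)    # X -> Z (no Y -> Z edge)
    return y, z

data = [observe() for _ in range(10_000)]
ys, zs = zip(*data)

# Observationally, Y is a strong predictor of Z.
mean_y = sum(ys) / len(ys)
mean_z = sum(zs) / len(zs)
cov = sum((y - mean_y) * (z - mean_z) for y, z in data) / len(data)
var_y = sum((y - mean_y) ** 2 for y in ys) / len(ys)
var_z = sum((z - mean_z) ** 2 for z in zs) / len(zs)
corr = cov / (var_y * var_z) ** 0.5
print(f"corr(Y, Z) = {corr:.2f}")   # close to 1

# But setting Y directly -- do(Y=y) -- does nothing to Z,
# because Z's mechanism depends only on the hidden X.
def intervene(y_forced):
    x = random.gauss(0, 1)
    z = x + random.gauss(0, 0.1)    # Z ignores the forced Y
    return z

z_low = sum(intervene(-2.0) for _ in range(10_000)) / 10_000
z_high = sum(intervene(+2.0) for _ in range(10_000)) / 10_000
print(f"E[Z | do(Y=-2)] = {z_low:.2f}, E[Z | do(Y=+2)] = {z_high:.2f}")
```

No amount of (Y, Z)-only data distinguishes this from a world where Y really does cause Z; only the intervention reveals the difference.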

[1] Another example is the data itself. To learn which action is better, you'd need two labels for the same state — the outcome if you take action A and the outcome if you take action B — which you can't observe simultaneously outside of a simulation.
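A toy illustration of why simulation sidesteps this (the dynamics here are made up for the example): a simulator lets you snapshot the world, take action A, rewind, and take action B from the identical state, yielding both counterfactual labels at once.

```python
import random

random.seed(1)

# Hypothetical toy dynamics: action "A" nudges the state up, "B" down.
def simulate(state, action):
    return state + (1.0 if action == "A" else -1.0) + random.gauss(0, 0.1)

state = random.gauss(0, 1)
snapshot = random.getstate()        # save the world, noise included

outcome_a = simulate(state, "A")
random.setstate(snapshot)           # rewind to the exact same state
outcome_b = simulate(state, "B")

print(f"same state, both labels: A -> {outcome_a:.2f}, B -> {outcome_b:.2f}")
# In the real world, taking A destroys the chance to ever observe B's outcome.
```

With the noise replayed, the two outcomes differ by exactly the causal effect of A vs. B — the quantity a real-world dataset can never pin down for a single instance.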



