
>> If you’re trying to do impossible things, this effect should chill you to your bones. It means you could be intellectually stuck right at this very moment, with the evidence right in front of your face and you just can’t see it.

I know this article is not about AI, even remotely- but, oh my dog, it so is. It's like, you can do really well if you carefully select the borders of your problem domain- ImageNet, Go, Phenix, AZ; but when you try to use the same super-powerful tools in an unconstrained situation (imagine a self-driving car in Mumbai, or an agent playing Warcraft), then all those little noisy, unpredictable, unmodellable details in the real world kick your models' accuracy to the curb.

In fact, I think most AI folks have figured this out by now, and that this realisation is a very big reason why AI has advanced by leaps and bounds in recent years. But we're still up against impossible odds here.

And this should be brought to the attention of the Singularitarians- you don't know what you don't know yet. It might look like things are about to go exponential, but you never know what's around the next bend. As Solon said to Croesus, "Count no man happy until the end is known".

