
Neural nets only need linearly more data (relative to parameter count) for compute-optimal performance: https://www.lesswrong.com/posts/midXmMb2Xg37F2Kgn/new-scalin...
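
To make that concrete, here is a minimal sketch of a Chinchilla-style parametric loss fit; the functional form and the constants are illustrative assumptions, not numbers taken from the linked post. The point is that the compute-optimal token count D* grows roughly in proportion to the parameter count N*.

  # Hypothetical Chinchilla-style loss surface; constants are illustrative.
  def loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
      """Predicted loss for N parameters trained on D tokens."""
      return E + A / N**alpha + B / D**beta

  def optimal_split(C):
      """Scan (N, D) pairs under a FLOP budget C ~ 6*N*D and keep the best.
      D* growing roughly in proportion to N* is the 'linearly more data' claim."""
      best = None
      for exp in range(60, 120):            # N from 1e6 to ~1e12, log-spaced
          N = 10 ** (exp / 10)
          D = C / (6 * N)
          if D < 1:
              continue
          cand = (loss(N, D), N, D)
          best = cand if best is None or cand < best else best
      return best

  for C in (1e20, 1e22, 1e24):
      L, N, D = optimal_split(C)
      print(f"C={C:.0e}: N*={N:.2e}  D*={D:.2e}  D*/N*={D/N:.1f}  loss={L:.3f}")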

At the same time, fine-tuning sample efficiency increases with scale, so at some point you could plausibly one-shot learn the relevant structure and drop the exponential search, tackling NP-hard problems with learned heuristics. Sounds like a free lunch to me, at least if you can afford a net large enough.
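
As a toy version of that last point (nothing from the linked post; a greedy rule stands in for whatever heuristic a large net might learn): brute-force TSP is O(n!), while the heuristic runs in O(n^2) and often lands near the optimum.

  # Toy contrast: exponential exact search vs. a cheap heuristic policy.
  # The greedy nearest-neighbour rule is only a stand-in for a learned heuristic.
  import itertools, math, random

  def tour_length(pts, order):
      return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
                 for i in range(len(order)))

  def brute_force(pts):                     # O(n!) exact search
      return min(itertools.permutations(range(len(pts))),
                 key=lambda o: tour_length(pts, o))

  def greedy(pts):                          # O(n^2) nearest-neighbour heuristic
      tour, todo = [0], set(range(1, len(pts)))
      while todo:
          nxt = min(todo, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
          tour.append(nxt)
          todo.remove(nxt)
      return tour

  random.seed(0)
  pts = [(random.random(), random.random()) for _ in range(8)]
  print("exact  tour:", round(tour_length(pts, list(brute_force(pts))), 3))
  print("greedy tour:", round(tour_length(pts, greedy(pts)), 3))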



