
My personal approach is to read papers that seem interesting. Of course I usually don't have the necessary background in everything that's mentioned, but I treat those cases as black boxes. E.g., if a paper says they use X to do Y, I'll simply assume that you can do Y using X. If I think the details of X are important, I dig deeper — sometimes just by reading the corresponding Wikipedia article, sometimes by following the references in the paper. Then repeat recursively.

That approach has the advantage that you'll learn about techniques roughly in proportion to their current popularity, but it has the disadvantage that explanations in papers tend to be brief, so you have to assemble them into a coherent whole yourself.

If you prefer textbooks, I've heard about http://www.deeplearningbook.org/ but haven't gotten around to reading it. In addition to neural networks, you'll probably also want to read about classical statistics and probability theory, since that's the origin of concepts like conditional random fields, which can be combined with neural networks but are unlikely to be covered by literature on deep learning.
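To make the CRF mention concrete: in a neural/CRF hybrid, a network typically produces per-token "emission" scores and the CRF adds tag-transition scores, with Viterbi decoding picking the best tag sequence. Here's a minimal, illustrative sketch of that decoding step (all names and toy scores are my own, not from any particular paper):

```python
def viterbi(emissions, transitions):
    """Find the highest-scoring tag sequence for a linear-chain CRF.

    emissions: list of dicts {tag: score}, one per token (e.g. from a neural net)
    transitions: dict {(prev_tag, tag): score}
    """
    tags = list(emissions[0])
    # best[t] = (score, path) of the best sequence ending in tag t
    best = {t: (emissions[0][t], [t]) for t in tags}
    for em in emissions[1:]:
        new_best = {}
        for t in tags:
            # extend the best previous path by tag t
            score, path = max(
                (best[p][0] + transitions[(p, t)] + em[t], best[p][1] + [t])
                for p in tags
            )
            new_best[t] = (score, path)
        best = new_best
    return max(best.values())[1]

# Toy example: two tokens, tags N(oun)/V(erb)
ems = [{"N": 2.0, "V": 0.5}, {"N": 0.5, "V": 2.0}]
trans = {("N", "N"): 0.0, ("N", "V"): 1.0, ("V", "N"): 0.0, ("V", "V"): -1.0}
print(viterbi(ems, trans))  # ['N', 'V']
```

The point is just that the CRF layer is a small, well-defined piece of machinery you can treat as a black box at first and open up later — exactly the reading strategy above.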



