Hacker News

I watched all of the 2018 Part 1 videos and it was not my style.

One thing I liked was the positive attitude of Jeremy and Rachel, but there was not much theory, and a lot of time was spent answering questions from lecture participants that I don't think were necessary.

I wanted to see definitions and the "why", but the course spends too much time on the "how". Sometimes the "why" does come up in a lecture, but often only as an answer to a question from a random participant, and those answers seemed unprepared, which makes it hard to understand the content deeply.

I think fast.ai (at least Part 1) may work best after taking other deep learning courses and before entering Kaggle competitions, when you are already familiar with the theory.




Top-down vs bottom-up depends to some extent on what you enjoy, vs what you need to be patient about. With our top-down approach you do get to all of the 'why', but only after understanding 'how'. So it's good for people who want to get started doing stuff right away, and don't mind waiting a bit to understand the details. It means you can start experimenting and building a good intuition for training models, which I believe is the most important skill for a practitioner.

On the other hand, with the bottom-up approach of Andrew Ng in deeplearning.ai, you start with a lot of 'why', and later on get to 'how' (although in less detail and fewer best practices than we show). So it's good for people who want to understand the theory right away, and don't mind waiting a bit to understand how to use it.

A lot of our students did Andrew's course after ours, and many did it in the reverse order. All have reported finding the combination more helpful than either on their own. When we describe 'why' it's mainly with code, whereas with Andrew it's mainly with math - so which you prefer will also depend on which notation and framework you're more comfortable with.

(But I promise - you do get all the 'how' with us, particularly in part 2! Our students have gone on to OpenAI, Google Brain, and senior AI leadership positions at well known startups, as well as writing and implementing new papers. Here's an example of a student who just implemented a paper that was released within the last month: https://sgugger.github.io/deep-painterly-harmonization.html#... )


Granted, each person has their own learning style. Some do better with theory, some do better with practical application.

In my case, I stopped doing data MOOCs entirely once I realized I learned 10x faster just by reading documentation and working on a project from the bottom-up, since MOOCs rarely highlight the annoying gotchas present in real-world data applications.


I can see how the courses would be tough to digest without some theory from other sources, but I have found them invaluable for tips and tricks. They are loaded with practical information.

Much of Deep Learning is still experimental in nature and requires quite a bit of educated guessing. A number of times I have been stuck on a particular deep learning problem and a passing comment from one of the fast.ai videos has given me the perfect insight.


I'll have to agree with this one. I watched, and did most of the assignments for, the 2017 Part 1 of the course. While I did get things working and got results, I was left unsatisfied.

I recently went through the videos of Andrew Ng's Coursera course to get a feel for the theory and intuition, and I'll be taking another stab at the fast.ai courses to compare their assignments with Andrew Ng's.



