I remember taking this course at CMU; I had no knowledge of deep learning before it. By the end, I had trained over 75 models across the assignments, implemented a PyTorch-like backend, and built a substantial course project, enough that I was confident launching my career in deep learning. Cannot recommend this course enough. You need to do all the assignments to get maximum value out of it, and it can be intense, but think of it as a bootcamp for machine learning. I still find the course materials useful in interviews.
>You will need familiarity with basic calculus (differentiation, chain rule), linear algebra, and basic probability.
So if I'm at the point where math skills, rather than programming skills, are my barrier to interesting courses like this one, does anyone know of any good resources? I don't seem to be able to teach myself calculus from a book the way I taught myself Python.
Do you have any links for someone who did maths and stats in uni and basically forgot most of it (except very basic algebra)? I would like to wake up my near-dead math brain cells.
I took this course the first semester it was given. There was one TA, and now there's 24! Fun fact: The TA was the writer of Aqua's 90s hit "Doctor Jones".
I'm a software engineer with one year of experience at a small company. I've been learning machine learning recently for a company project. Would you recommend this course? I want to learn the concepts systematically.
These resources were helpful for me. Note that [1] and [2] are aimed at systematic understanding rather than hands-on practice; [3] is a hands-on exercise in building neural networks from the ground up.
1. A fantastic resource, and the best one IMO, for getting a probabilistic perspective on machine learning from the ground up:
I agree. On its own this is only useful for passive learning from watching lecture videos, which is not an ideal way to soak in the material. Even if the quizzes and assignments were just instructions, I'd be happy to experiment on my own.
I think for someone who hasn’t seen the material at all before it would be a lot for a semester. They don’t know what backpropagation is but by the end will understand a diffusion model? It’s ambitious, I think.
The other thing is this seems to be very CNN heavy. Four lectures on the topic seems like a lot.
Also, I don’t see embeddings explicitly mentioned as a topic. They’re a huge component of industrial research, and creating good embeddings and retrieving them quickly is a topic I feel students should also be exposed to; see the retrieval sketch below. (Yes, they mention “representation” with autoencoders, but quite frankly the autoencoder’s code, i.e. the latent representation, is generally not useful for similarity metrics.)
Finally, it would be nice to expose students to multimodal learning. Something like CLIP would be pretty neat: it’s a great insight when you realize that you can train projections of multiple modalities into a shared high-dimensional space. If they’re going to cover diffusion models, complexity certainly isn’t a concern.
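To make the embedding-retrieval point above concrete, here's a minimal, hypothetical sketch: brute-force cosine-similarity search over a matrix of stored embeddings. A production system would swap this for an approximate-nearest-neighbor index such as FAISS; all shapes and names below are illustrative.

```python
# Brute-force cosine-similarity retrieval over stored embeddings.
# (Illustrative only; real systems use an ANN index such as FAISS.)
import numpy as np

def top_k_neighbors(query, corpus, k=5):
    """Return indices of the k corpus embeddings most similar to the query."""
    # Normalize so that the dot product equals cosine similarity.
    query = query / np.linalg.norm(query)
    corpus = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    sims = corpus @ query          # [n_items] similarity scores
    return np.argsort(-sims)[:k]   # highest-similarity first

# Toy usage: 10k stored 128-d embeddings, one query vector.
corpus = np.random.randn(10_000, 128)
query = np.random.randn(128)
print(top_k_neighbors(query, corpus))
```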
Can you recommend any tutorials/resources about designing and training simple multi-modal models like CLIP? Or should I just be reading the papers and following along?
Well, the core idea is to train a text encoder and an image encoder jointly with in-batch negatives. In other words, a two-tower model maximizing similarity on the diagonal of the batch similarity matrix and minimizing everything else. They have pseudocode in the paper.
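To make that concrete, here's a minimal sketch of the two-tower objective, loosely following the pseudocode in the CLIP paper. The encoders are hypothetical stand-in linear projections (real CLIP uses a ViT/ResNet image tower and a Transformer text tower); the loss, which maximizes the diagonal of the batch similarity matrix against in-batch negatives, is the part the paper's pseudocode describes.

```python
# CLIP-style two-tower contrastive training, sketched in PyTorch.
# Encoders are placeholder linear layers, not the real CLIP backbones.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoTower(nn.Module):
    def __init__(self, img_dim=512, txt_dim=256, embed_dim=128):
        super().__init__()
        # Stand-in encoders; real CLIP uses a ViT/ResNet and a Transformer.
        self.image_proj = nn.Linear(img_dim, embed_dim)
        self.text_proj = nn.Linear(txt_dim, embed_dim)
        # Learnable temperature, as in the paper: initialized to ln(1/0.07).
        self.logit_scale = nn.Parameter(torch.tensor(2.659))

    def forward(self, image_feats, text_feats):
        # Project both modalities into the shared space and L2-normalize.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        # Pairwise cosine similarities: [batch, batch] logits.
        return self.logit_scale.exp() * img @ txt.t()

def clip_loss(logits):
    # The i-th image matches the i-th text, so the targets are the diagonal;
    # every other entry in the row/column acts as an in-batch negative.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i = F.cross_entropy(logits, targets)      # image -> text
    loss_t = F.cross_entropy(logits.t(), targets)  # text -> image
    return (loss_i + loss_t) / 2

# Toy usage with random "features" standing in for raw encoder outputs.
model = TwoTower()
images, texts = torch.randn(32, 512), torch.randn(32, 256)
loss = clip_loss(model(images, texts))
loss.backward()
```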