If you're interested in Deep Learning or any area of ML it's fairly safe to assume you have a background in linear algebra, probability, calculus and obviously some basic programming.
If you're not interested in learning these areas, it's also safe to say you aren't really interested in deep learning either. That's not to say that if you don't already know these areas you aren't interested in deep learning; rather, if you don't know them but are interested in deep learning, you're likely already studying them.
I say this because deep learning and the vast majority of ML really just boil down to an application of these basic tools. Deep learning/ML without the linear algebra, probability theory, calculus and coding isn't really anything at all.
I’ve taken uni-level courses for all of that, but it’s been… 15+ years and it's entirely out of my brain. I appreciate the feedback. Sounds like I need to brush up, and I appreciate your point about it being a mathematical concept at its foundation.
I'd say slightly more. Maybe it's just because I attended a state school, but I think my first-semester calculus class was all single variable (20 years ago now, so my memory is rusty). You really need to understand gradients and Jacobians for ML, which I think was calc III for me. But you can skip the curl and div part, I guess.
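To make the comment above concrete, here's a minimal sketch of the kind of object meant by "Jacobian": the matrix of partial derivatives of a small vector-valued function, worked out by hand. The function `f` is a made-up example for illustration, not from any particular course.

```python
import numpy as np

def f(x):
    # A toy function f: R^2 -> R^2
    return np.array([x[0] * x[1], np.sin(x[0])])

def jacobian(x):
    # Partial derivatives computed analytically:
    # d(x0*x1)/dx0 = x1,       d(x0*x1)/dx1 = x0
    # d(sin x0)/dx0 = cos(x0), d(sin x0)/dx1 = 0
    return np.array([[x[1],         x[0]],
                     [np.cos(x[0]), 0.0]])

x = np.array([2.0, 3.0])
print(jacobian(x))
```

In deep learning frameworks this bookkeeping is automated (autodiff), but the underlying multivariable-calculus idea is exactly this.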
Not much of your calculus book will be relevant beyond the first couple of chapters, but you'll live and die by the numerical-method sword. The idea is that you need the analytic insight from the former to understand the latter.
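A minimal sketch of that analytic/numerical interplay is the classic "gradient check": compare a hand-derived analytic gradient against a central finite-difference estimate. The function here is a toy example chosen for illustration.

```python
import numpy as np

def f(x):
    # Simple scalar function: f(x) = sum(x_i^2)
    return np.sum(x ** 2)

def analytic_grad(x):
    # From calculus: df/dx_i = 2 * x_i
    return 2 * x

def numerical_grad(f, x, eps=1e-6):
    # Central finite differences: the workhorse numerical method
    g = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return g

x = np.array([1.0, -2.0, 3.0])
print(np.allclose(analytic_grad(x), numerical_grad(f, x)))  # True
```

The analytic form tells you what the answer should be; the numerical method is what you actually lean on when debugging a hand-written backward pass.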
Are you talking about the submitted course or Stanford?
Because there is a list of pre-reqs for the submitted course [1], and to be honest I feel like they are the standard requirements for you to fully understand DL, except maybe the signal processing stuff, which might be taken as optional.
Lots of great resources listed here. But I think "Understanding Deep Learning" is missing from the list [0]. In my opinion, Simon J.D. Prince accomplished a true feat with his book, not only through the material itself but also with the notes attached to each chapter, linking directly to advanced references (a free literature review), exercises that really challenge your understanding of the material, and great notebooks with code that truly materialize the concepts learned (free exercises to give to students if you teach a DL class, but this community is probably not the target audience for that).
Instead, the Anthropic co-founder siblings, the Amodeis, made it. Couldn't help but notice that Marc Benioff, who owns Time, is also the major investor (think Khosla for OpenAI) in Anthropic. So I would take "times 100" with a side of pickle.
For specifically understanding transformers, this (w/ maybe GPT-4 by your side to unpack jargon/math) might be able to get you from lay-person to understanding enough to be dangerous pretty quickly: https://sebastianraschka.com/blog/2023/llm-reading-list.html
The "Understanding Deep Learning" book covers more recent models as well: https://udlbook.github.io/udlbook/ (free PDF and Jupyter notebooks available)
The deep learning book is a great choice, as many have mentioned.
I've been making a course that has a little less theory and a little more application here - https://github.com/VikParuchuri/zero_to_gpt . The videos are all optional (they cover the same content as the text).
I kind of want to get into this as a rusty full-stack developer of several years, but I have no idea how feasible it is to try to get into this field in any capacity with 6 months of study.
If you remember what derivatives are and are decent with math (and some probability), you have what it takes (!). You can pick up most of this more easily than, say, React, in the sense that the incline of the ramp is much gentler, but it will take more time overall (more of a marathon than a sprint).
> basics in signal processing (Fourier transform, wavelets).
Are wavelets really basics in signal processing? We definitely didn't cover them in my signals and systems class in EE, in either grad school or undergrad.
These Deep Learning (and ML) courses are turning into the equivalent of productivity tools. Many are very high quality but won't turn you into an ML/DL expert. The core issue is that you need to spend the time to complete them, learn, and apply it all in your own settings. That internal drive is something beyond what these can provide, and that's the crux of the issue. People keep hoping some magic course will do it for them, but just like with productivity tools, there is no golden or silver bullet. Good old-fashioned "sit on your butt and do the work" ;-).
What does this have to do with productivity tools? This is based on a university course that teaches you the fundamentals of deep learning. Of course it won't turn you into a ML/DL expert, that's not the point. Anyone completing this course on their own in their free time definitely has the internal drive that you say is lacking so I honestly don't get your comment at all.
https://m.youtube.com/playlist?list=PLoROMvodv4rNyWOpJg_Yh4N...
They've posted a significant volume of CS lectures if you go to their channel. They're pretty good.