I completed Andrew Ng's Coursera course, and one thing it did not do is make me understand neural nets from scratch. Probably you and I have different interpretations of "from scratch".
I actually tried to implement a neural network from scratch by following 3Blue1Brown's videos, using the same handwritten digit data set. But I got stumped when I realized I didn't have a clue how to choose the step size in gradient descent, and it's not covered in the videos. Despite that problem, I'd say the 3B1B videos are excellent for learning the fundamentals of neural networks.
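For anyone else stuck on the same point: the step size (the learning rate) is just a constant you pick, and its effect is easiest to see on a toy function before bringing a network into it. A minimal sketch, with made-up numbers:

```python
# Gradient descent on f(w) = (w - 3)^2, whose derivative is 2 * (w - 3).
# The step size only scales how far each update moves along the gradient.
def descend(step_size, steps=20):
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)      # hand-computed derivative of f at w
        w -= step_size * grad   # the gradient descent update
    return w

print(descend(0.01))  # too small: after 20 steps, still far from the minimum at 3
print(descend(0.1))   # reasonable: converges close to 3
print(descend(1.1))   # too large: overshoots and diverges
```

In practice people mostly just try a few values (0.1, 0.01, 0.001, ...) and watch whether the loss actually goes down.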
Yeah, the autograd choice struck me as odd. Given how simple the model is, it feels like it would have been easy to show how to compute the gradients. The whole benefit of such a simple toy problem is that we can reason about the meaning of individual weights; it's a perfect opportunity to build clear intuition about gradients and weight updates. Switching to torch just substitutes one black box for another: to a novice reader, the torch code is magical incantations.
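To make that concrete (a hedged sketch, not the article's code): for a single linear neuron with squared error, you can write the gradient down by hand and check it against what torch's autograd produces.

```python
import torch

# One linear neuron, one training example: prediction = w * x, error = (prediction - y)^2.
x, y = 2.0, 10.0
w = torch.tensor(1.0, requires_grad=True)

error = (w * x - y) ** 2
error.backward()                       # autograd fills in w.grad

# Hand-derived gradient via the chain rule: d(error)/dw = 2 * (w*x - y) * x
manual_grad = 2 * (w.item() * x - y) * x

print(w.grad.item())  # -32.0, from autograd
print(manual_grad)    # -32.0, from the explicit formula
```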
This could be the start of a breadth-first approach, where you start with very little code and then dig deep into things like autograd or "backprop" as you get interested in such details.
It seems to me that trying to give explicit formulas for gradients is just swamping the beginner with unnecessary details that don't help to build intuition. I think the author made exactly the right choices.
It used to be that some NN tutorials would swamp the beginner with backprop formulas, which beginners were forced by their professors to memorise. I don't think that achieved much; it only made the subject seem more complicated than it needed to be, and I think it should all be abstracted away into autograd.
The difference is that autograd isn't something you should already know if you're learning neural networks. Many "from scratch" tutorials implement backprop because this is a key part. I think your comment is a bit facetious and you're not acting in good faith.
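To make the disagreement concrete: for a one-hidden-layer network, the backprop a from-scratch tutorial would have you implement is only a handful of lines. A rough sketch (the shapes, activation, and values are my own, not the article's):

```python
import numpy as np

x = np.array([0.5, -1.0])             # input
target = np.array([1.0])              # desired output
W1 = np.random.randn(3, 2) * 0.1      # hidden layer weights
W2 = np.random.randn(1, 3) * 0.1      # output layer weights

# forward pass
h = np.tanh(W1 @ x)                   # hidden activations
y = W2 @ h                            # prediction
error = ((y - target) ** 2).sum()

# backward pass: the chain rule, one layer at a time
dy = 2 * (y - target)                 # d(error)/dy
dW2 = np.outer(dy, h)                 # gradient for the output weights
dh = W2.T @ dy                        # gradient flowing back into the hidden layer
dW1 = np.outer(dh * (1 - h ** 2), x)  # tanh' = 1 - tanh^2, then outer product with the input

# gradient descent update
learning_rate = 0.1
W1 -= learning_rate * dW1
W2 -= learning_rate * dW2
```

Whether those dozen lines build intuition or just bury the beginner is, I suppose, exactly the disagreement here.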
I think NNs are going to be a challenge as complexity grows.
I'm trying to make mobs behave autonomously in my 3D action MMO.
The memory (depth) I would need for that to succeed and the processing power to do it in real time are making my head spin.
Let's hope the Raspberry Pi 5 has some hardware to help with this.
At this point I'm probably going to have some state-machine AI (think mobs in Minecraft: basically check range / view, then target, and loop), but instead of being deterministic or purely random, I'm going to add some NN randomness to the behaviour so that it can be interesting without just adding quantity (more mobs).
So the inputs would be the map topography and entities (mobs and players), and the output whether to engage or not; the backpropagation signal would be the success rate, I guess? Or am I thinking about this the wrong way?
I wonder what adding a _how_ to the same network after the _what_ would look like, probably a direction as output instead of just an entity id?
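If I'm reading the idea right, a rough sketch of the decision network might look like the following. The feature names are invented, and using the success rate as the training signal is closer to a reinforcement-style (REINFORCE) update than plain supervised backpropagation, since there is no per-decision correct answer:

```python
import torch
import torch.nn as nn

# Hypothetical per-mob features: distance to nearest player, own health,
# number of nearby allies, terrain cover score. All invented for illustration.
features = torch.tensor([[0.3, 0.8, 2.0, 0.5]])

# Tiny policy network: features in, probability of engaging out.
policy = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

engage_prob = policy(features)
engage = torch.bernoulli(engage_prob)   # the NN randomness in the behaviour

# The success rate acts as a reward: reinforce decisions that led to good outcomes.
reward = 1.0                            # placeholder, e.g. the mob survived the encounter
log_prob = torch.log(engage_prob if engage.item() == 1 else 1 - engage_prob)
loss = -reward * log_prob.sum()

optimizer = torch.optim.SGD(policy.parameters(), lr=0.01)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Adding the _how_ after the _what_ would then mostly mean widening the output: for example, a movement direction (two or three extra outputs) alongside the engage decision.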
Neural nets and machine learning in general are good for problems whose solutions are hard to hand-code. If you can hand-craft a solution there's no real need for machine learning and it might simply take up resources you need more elsewhere.
This is actually something I see as complementary: the AI on the server decides the general direction, and then on the client these flocking simulations can be applied for scale, if you have the processing power.
I'm thinking each group of mobs has a server-directed leader, whose position is server-deterministic, and then, to save bandwidth, the PvE minions and their movements can be generated on each client, just tracking when they are killed.
I'm not scared of desyncing in the details, as long as the PvP stuff is coherent.
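A rough sketch of the client-side half of that idea (all names invented): only the leader's position comes from the server, and minion offsets are derived deterministically from a shared seed, so every client draws the same formation without extra bandwidth.

```python
import math
import random

def minion_positions(leader_pos, group_seed, count, radius=5.0):
    # Same seed on every client -> same pseudo-random offsets, no per-minion sync needed.
    rng = random.Random(group_seed)
    positions = []
    for _ in range(count):
        angle = rng.uniform(0, 2 * math.pi)
        dist = rng.uniform(1.0, radius)
        positions.append((leader_pos[0] + dist * math.cos(angle),
                          leader_pos[1] + dist * math.sin(angle)))
    return positions

# Each frame, clients recompute from the latest server-authoritative leader position.
print(minion_positions((100.0, 250.0), group_seed=42, count=3))
```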
Having experience writing crowd simulations in both games and VFX in film, I suggest you take a look at Massive, and perhaps read some of their documentation to learn how this successful implementation handles crowd simulations: https://www.massivesoftware.com/
I remember doing this in PHP (4? 5?) for my undergrad capstone project because I had a looming due date and it was the dev environment I had readily available. No helpful libraries in that decade. It was a great way to really grok the material, and it really makes me appreciate how spoiled we are today in the ML space.
1- In the instruction `hidden_layer.data[index] -= learning_rate * hidden_layer.grad.data[index]`, where was the `hidden_layer.grad` value updated?
2- From what I've understood, we update the hidden_layer according to the slope of the error function (because we want to minimize it). But how are `error.backward()` and `hidden_layer.grad` connected?
Great questions, I struggled with this part the most when I was learning it.
`.grad` is set by `autograd` when calling `backward()`
Probably the easiest way to understand this is to play a bit with `.grad` and `backward()` on their own, with the first code sample in the `autograd` section [1].
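Something like this shows the connection (a minimal sketch, not the article's exact code):

```python
import torch

# Stand-in for the article's hidden_layer: a few weights we want to train.
hidden_layer = torch.randn(3, requires_grad=True)
target = torch.zeros(3)

error = ((hidden_layer - target) ** 2).sum()
print(hidden_layer.grad)   # None: nothing has computed gradients yet

error.backward()           # autograd walks the graph from `error` back to every tensor
                           # with requires_grad=True and stores d(error)/d(tensor) in .grad
print(hidden_layer.grad)   # now holds 2 * hidden_layer, the gradient of the error

# Only after backward() has run does the update line make sense:
learning_rate = 0.1
with torch.no_grad():
    hidden_layer -= learning_rate * hidden_layer.grad
    hidden_layer.grad.zero_()   # clear it so the next backward() starts from zero
```

So `error.backward()` is what writes into `hidden_layer.grad`; the update line then just reads it back out.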
If you are interested in learning what goes into a deep learning library, and want to code one yourself for the learning experience, you should check out Minitorch [0].
How important is it to learn DNNs/NNs from scratch? I have several years of experience working in the tech industry, and I am learning DNNs to apply them in my domain, and also for hobby side projects.
I did the ML Coursera course by Andrew Ng a few years ago. I liked the material but I felt the course focused a little too much on the details and not enough on actual application.
I agree. I really like his vulnerable, authentic content. He does not come across as a know-it-all but as someone who is genuinely curious about the world.
https://www.3blue1brown.com/topics/neural-networks
https://www.coursera.org/learn/machine-learning