Neural network from scratch (sirupsen.com)
208 points by bretthopper on Jan 5, 2022 | 38 comments



If you actually want to understand and implement neural nets from scratch, look into 3Blue1Brown's videos as well as Andrew Ng's course.

https://www.3blue1brown.com/topics/neural-networks

https://www.coursera.org/learn/machine-learning


I completed Andrew Ng's Coursera course, and one thing it did not do is make me understand neural nets from scratch. You and I probably have different interpretations of "from scratch".


I agree, Andrew Ng's course is overrated.


I actually tried to implement a neural network from scratch by following 3Blue1Brown's videos, using the same handwritten-digit data set. But I got stumped when I realized I didn't have a clue how to choose the step size in gradient descent, and it's not covered in the videos. Despite that problem, I'd say the 3B1B videos are excellent for learning the fundamentals of neural networks.
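Edit: in case anyone else gets stuck there: the step size (usually called the learning rate) is mostly picked by trial and error; common starting values are 0.1, 0.01 or 0.001, and you shrink it if training diverges. A toy sketch of my own minimizing f(w) = (w - 3)^2:

    w = 0.0
    learning_rate = 0.1  # too large diverges, too small crawls
    for step in range(50):
        grad = 2 * (w - 3)          # derivative of f at the current w
        w -= learning_rate * grad   # step downhill against the gradient
    print(w)  # converges toward 3, the minimum

The 0.1 here is just a guess that happens to work for this particular function.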


I found Andrew Ng’s Deep Learning Specialization much better for understanding neural networks than the machine learning course. https://www.coursera.org/specializations/deep-learning


"from scratch" but uses autograd and glosses over backpropagation.


Yeah, the autograd choice struck me as odd. Given how simple the model is, it feels like it would have been easy to show how to compute gradients. The whole benefit of having this super simple toy problem is that we can reason about the meaning of individual weights - it's a perfect opportunity to build clear intuition about gradients and weight updates. Switching to torch is just substituting one black box for another - to a novice reader, the torch code is just magical incantations.
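For example, for a single linear neuron with a squared-error loss, the chain rule gives the gradients in two lines. A hand-rolled sketch (mine, not the article's):

    x, y = 2.0, 10.0      # one training example
    w, b = 0.5, 0.0       # initial parameters
    learning_rate = 0.01
    for _ in range(200):
        pred = w * x + b
        grad_w = 2 * (pred - y) * x   # d/dw of (pred - y)^2
        grad_b = 2 * (pred - y)       # d/db of (pred - y)^2
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b
    print(pred)  # approaches y

That's the entire "black box": compute the derivative of the loss with respect to each weight, then nudge each weight the other way.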


This could be the start of a breadth-first approach, where you start with very little code, and then dig deep into things like autograd or "backprop" as you get interested in such details.

It seems to me that trying to give explicit formulas for gradients is just swamping the beginner with unnecessary details that don't help to build intuition. I think the author made exactly the right choices.

It used to be that some NN tutorials would swamp the beginner with backprop formulas, which beginners were forced by their professors to memorise. I don't think this succeeded at doing much; it only made the subject seem more complicated than it needed to be. I think it should all be abstracted away into autograd.


He doesn't implement matrix operations or floating-point addition/multiplication either.


The difference is that autograd isn't something you should already know if you're learning neural networks. Many "from scratch" tutorials implement backprop because it's a key part of the subject. I think your comment is a bit facetious and you're not acting in good faith.


I could definitely see an argument that knowing gradient descent is just as much a prerequisite as knowing matrix multiplication.

Both of these mathematical concepts are abstracted over and used by the author's neural network implementation.

Now whether you consider using other people's libraries for this as being "from scratch" is up to you.


I bet he didn't even create the universe.


    import torch as scratch
    from scratch import nn


You can't do this in Python.


You can do anything in Python.

    import antigravity
https://xkcd.com/353/


Yeah, but the Python for it would be more like:

  import sys
  import torch

  # register torch under the alias "scratch" so the joke import works
  sys.modules['scratch'] = torch
  from scratch import nn


Interesting find. Just FYI, this repo has been the OG for several years when it comes to building NNs from scratch:

https://github.com/eriklindernoren/ML-From-Scratch



I think NNs are going to be a challenge as complexity grows.

I'm trying to make mobs behave autonomously in my 3D action MMO.

The memory (depth) I would need for that to succeed, and the processing power to do it in real time, are making my head spin.

Let's hope the Raspberry Pi 5 has some hardware to help with this.

At this point I'm probably going to have some state-machine AI (think mobs in Minecraft: basically check range/view, then target, and loop), but instead of making it deterministic or purely random I'm going to add some NN randomness to the behaviour so that it can be interesting without just adding quantity (more mobs).

So the inputs would be the map topography and the entities (mobs and players), and the output whether to engage or not; the backpropagation signal would be success rate, I guess? Or am I thinking about this the wrong way?

I wonder what adding a _how_ to the same network after the _what_ would look like: probably a direction as output instead of just an entity ID?
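Something like this is what I have in mind, as a totally hypothetical sketch (feature choice and layer sizes made up):

    import torch
    from torch import nn

    # Tiny policy net: hand-picked features in, engage probability out
    policy = nn.Sequential(
        nn.Linear(3, 16),   # e.g. distance to player, health, nearby allies
        nn.ReLU(),
        nn.Linear(16, 1),
        nn.Sigmoid(),       # probability of engaging
    )

    features = torch.tensor([0.4, 0.9, 0.2])  # normalized inputs
    p_engage = policy(features)
    engage = torch.bernoulli(p_engage)        # sample the decision

Though from what I've read, training on "success rate" is really reinforcement learning (policy gradients and friends) rather than plain supervised backpropagation.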


Have you tried using something more efficient and precise, for example a flocking algorithm?

https://www.oreilly.com/library/view/ai-for-game/0596005555/...

Neural nets and machine learning in general are good for problems whose solutions are hard to hand-code. If you can hand-craft a solution there's no real need for machine learning and it might simply take up resources you need more elsewhere.
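A bare-bones boids step, just to show how little code the hand-crafted version needs (my own sketch, not the book's; the rule weights are arbitrary):

    import numpy as np

    N = 50
    pos = np.random.rand(N, 2) * 100   # mob positions
    vel = np.random.randn(N, 2)        # mob velocities

    def step(pos, vel, dt=0.1):
        center = pos.mean(axis=0)
        avg_vel = vel.mean(axis=0)
        for i in range(N):
            cohesion = (center - pos[i]) * 0.01         # drift toward the flock
            diff = pos[i] - pos
            dist = np.linalg.norm(diff, axis=1)
            near = (dist > 0) & (dist < 5)
            separation = diff[near].sum(axis=0) * 0.05  # push away from crowding
            alignment = (avg_vel - vel[i]) * 0.05       # match the flock's velocity
            vel[i] += cohesion + separation + alignment
        pos += vel * dt
        return pos, vel

Three hand-coded rules (cohesion, separation, alignment) already produce surprisingly lifelike group movement.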


This is actually something I see as complementary: the AI on the server decides the general direction, and then on the client these flocking simulations can be applied for scale, if you have the processing power.

I'm thinking each group of mobs has a server-directed leader whose position is server-deterministic; then, to save bandwidth, the PvE minions and their movements can be generated on each client, just tracking when they are killed.

I'm not scared of desyncing in the details, as long as the PvP stuff is coherent.


Having experience writing crowd simulations in both games and VFX in film, I suggest you take a look at Massive, and perhaps read some of their documentation to learn how this successful implementation handles crowd simulations: https://www.massivesoftware.com/


I remember doing this in PHP (4? 5?) for my undergrad capstone project because I had a looming due date and it was the dev environment I had readily available. No helpful libraries in that decade. Great way to really grok the material, and it really makes me appreciate how spoiled we are today in the ML space.


I remember one of the first things I saw on the web (in 1996) was a neural net simulator, written in JavaScript of all things: https://web.archive.org/web/19961226105339/http://www.ozemai...

from 1997, a bit more fancy: https://web.archive.org/web/19990117022955/http://www.hav.co...


Interesting read, but there are a few things I haven't understood. In the training [function](https://colab.research.google.com/drive/1YRp9k_ORH4wZMqXLNkc...):

1- In the instruction `hidden_layer.data[index] -= learning_rate * hidden_layer.grad.data[index]`, where was the `hidden_layer.grad` value updated?

2- From what I've understood, we'll update the hidden_layer according to the slope of the error function (because we want to minimize it). But how are `error.backward()` and `hidden_layer.grad` connected?


Great questions; I struggled with this part the most when I was learning it.

`.grad` is set by `autograd` when calling `backward()`.

Probably the easiest way to understand this is to play a bit with `.grad` and `backward()` on their own, with the first code sample in the `autograd` section [1].
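Roughly, with a single scalar parameter (a minimal sketch of my own, not the article's code):

    import torch

    # requires_grad=True tells autograd to record operations on w
    w = torch.tensor([2.0], requires_grad=True)
    error = (w * 3 - 12) ** 2   # some scalar "error"
    error.backward()            # autograd fills in w.grad = d(error)/dw
    print(w.grad)               # tensor([-36.])

So `error.backward()` is exactly what populates `hidden_layer.grad`; the update line then reads it.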

[1]: https://sirupsen.com/napkin/neural-net#automagically-computi...


Thank you!


FYI, Markdown-style links don't work on HN; that's why you see people type footnotes [0]

[0] like this


For those interested in everything from simple neural networks to CNNs and RNNs implemented with just NumPy (including backprop):

https://github.com/parasdahal/deepnet


If you are interested in learning what makes a deep learning library tick, and want to code one for the learning experience, you should check out Minitorch [0].

[0]: https://github.com/minitorch/


How important is it to learn DNN/NN from scratch? I have several years of experience working in the tech industry and I am learning DNN for applying it in my domain. Also for hobby side projects.

I did the ML Coursera course by Andrew Ng a few years ago. I liked the material but I felt the course focused a little too much on the details and not enough on actual application.

Would learning DNNs from a book like 1. https://www.manning.com/books/deep-learning-with-pytorch or 2. https://www.manning.com/books/deep-learning-with-python-seco... be a better approach for someone looking to learn concepts and application rather than the detailed mathematics behind it?

If yes, which of these two books (or an alternative) would you recommend?


Nice! I made a GPU-accelerated backpropagation lib a while ago to learn about NNs; if you're interested, check it out here: https://github.com/zbendefy/machine.academy


I was disappointed when I realized this isn’t Sentdex’s NNFS. He makes really good content.


Link to book website: https://nnfs.io/

Link to one video: https://www.youtube.com/watch?v=Wo5dMEP_BbI


I agree. I really like his vulnerable, authentic content. He does not come across as a know-it-all but as someone who is genuinely curious about the world.


This is exactly what I have been looking for, and I wonder how I missed it until now in my search for ML tutorials. Many thanks.


You definitely can't start on neural networks without learning other concepts first.


I did this for my AI class. You can watch the result here: https://www.youtube.com/watch?v=w2x2t03xj2A



