Show HN: Layered – Neural Networks in Python 3 (github.com/danijar)
56 points by danijar on Dec 16, 2015 | 5 comments



Such projects are nice for educational use, but not much beyond that when there's no GPU implementation.

The Readme points out that this is for education, but it still wasn't clear whether it's supposed to have any practical use beyond that (some educational projects do develop into something useful). If it does (and maybe even if not), it should include a small comparison to other frameworks (maybe Python frameworks only), e.g. Theano, Blocks, Keras, Brainstorm, Neon, etc. I think Brainstorm (https://github.com/IDSIA/brainstorm) and Neon (https://github.com/NervanaSystems/neon) are the closest to your framework, because they are not based on Theano and thus don't do automatic symbolic gradient calculation, but have explicit backprop code instead. You should also point out what GPU implementation you have (if any), whether it supports multi-GPU, and maybe other distributed setups.
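For context on that distinction: Theano-based frameworks build a symbolic compute graph and derive gradients from it automatically, while explicit-backprop frameworks hand-code each layer's backward pass. A minimal NumPy sketch of the explicit style (illustrative only; the class and names here are my own, not Layered's actual API):

    import numpy as np

    class Dense:
        """A fully connected layer with a hand-written backward pass."""

        def __init__(self, n_in, n_out):
            self.weights = np.random.randn(n_in, n_out) * 0.01
            self.bias = np.zeros(n_out)

        def forward(self, x):
            self.x = x  # cache the input for the backward pass
            return x.dot(self.weights) + self.bias

        def backward(self, grad_out, lr=0.1):
            # Gradients are written out by hand rather than derived from
            # a symbolic graph, as Theano-based frameworks would do.
            grad_w = self.x.T.dot(grad_out)
            grad_b = grad_out.sum(axis=0)
            grad_in = grad_out.dot(self.weights.T)
            self.weights -= lr * grad_w
            self.bias -= lr * grad_b
            return grad_in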


Interesting, but really slow to train.

Partly because of using class __call__ methods instead of plain function calls, among other things.


Thanks for taking a look. Do you really think __call__ affects performance that much? I'll look into OpenCL to improve performance.


The lib looks good otherwise. It might be a useful educational tool, if nothing else.

In my tests, __call__ is about 2x slower than a plain function call, but it might not be the main reason for the slowness; I haven't profiled it at all. A sketch of the comparison is below.
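This is the kind of micro-benchmark behind that number (the test itself is my assumption, not code from the thread):

    import timeit

    def apply_fn(x):
        # Plain function: the cheapest call path in CPython.
        return x * 2

    class ApplyCall:
        # Callable object: each call dispatches through __call__.
        def __call__(self, x):
            return x * 2

    apply_call = ApplyCall()

    setup = 'from __main__ import apply_fn, apply_call'
    print(timeit.timeit('apply_fn(3)', setup=setup))
    print(timeit.timeit('apply_call(3)', setup=setup))

On CPython the __call__ dispatch does add per-call overhead on roughly this scale, though it rarely dominates once real NumPy work happens inside the call.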

Another thing: it doesn't seem to use ATLAS to scale across all cores, even though my NumPy build is linked against it.
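One way to check what NumPy is actually linked against, and whether big matrix products spread across cores (np.show_config() is real NumPy API; the rest is just an illustrative test):

    import numpy as np

    # Shows which BLAS/LAPACK libraries this NumPy build is linked against;
    # ATLAS should appear here if NumPy can actually use it.
    np.show_config()

    # Rough multi-core check: watch CPU usage while this product runs.
    # Whether it spreads across cores depends on how the BLAS was built
    # (ATLAS fixes its thread count at compile time).
    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)
    c = a.dot(b)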


It got me excited, but it's slow to train. :-(



