Deep neural nets and the purpose of life (medium.com/nitin_pande)
14 points by nitinpande on Sept 15, 2016 | 6 comments



> We as a unit do not matter. What matters is the emergent behaviour (the neural pattern) out of the collective work of everyone who is at this layer of the cosmic deep neural net.

This and similar statements are a species of neo-platonic monism: anti-humanist and system-centric. It is a deeply disturbing and, I would argue, reactionary thought. If we do not matter, how does one oppose injustice or genocide? If "we are just flowing", how does one imagine the possibility of resistance or dissent?


You could replace "DNN" in this essay with... anything. Seriously, it just says that DNNs "can act as both machine and storage," which is true of any system if you want to define it that way. And pointing out that everything in the universe is one system is just the beginning of physics. DNNs are superficially inspired by parts of the brain, but there's nothing magical about them.


Fair points. Would love to hear the other 'anythings' that can replace DNNs (something that optimises and stores knowledge that can be passed down through generations to further build more sophisticated systems).

I thought DNNs were a good framework for contemplating the reason for, as well as the mechanics of, our existence. And using that framework I came to the conclusion that we probably do not have any grand purpose.

Also, I think saying that the universe is a DNN system is different from saying that it is 'a system'.

Agreed, DNNs are not magical for someone who has been working on them forever, but for a newcomer to the ML field, believe me, they are pretty magical :)


I might just not be sure what you mean by DNN here. A deep neural network is a specific architecture, consisting of input and output layers of discrete nodes, connected through many hidden layers of nodes, with each node able to perform simple operations on the signals passing through it. So I don't see any way you could model the universe or a seed as a literal DNN; I interpreted the main point of the analogy as being that they're systems in which storing knowledge (in an NN, in the connection weights) is an intrinsic part of the way the system operates.
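For concreteness, a minimal sketch of that architecture in plain NumPy (illustrative only, not anything from the article): a stack of layers, each node computing a weighted sum of its inputs followed by a simple nonlinearity, with everything the network "knows" stored in the weights.

    import numpy as np

    def forward(x, layers):
        # Pass an input vector through a stack of (weights, biases) layers.
        for W, b in layers:
            x = np.tanh(W @ x + b)   # each node: weighted sum of inputs, then a simple squashing op
        return x

    rng = np.random.default_rng(0)
    sizes = [4, 8, 8, 2]             # input layer, two hidden layers, output layer
    layers = [(rng.normal(size=(m, n)), np.zeros(m))
              for n, m in zip(sizes[:-1], sizes[1:])]

    print(forward(rng.normal(size=4), layers))  # the output is determined entirely by the stored weights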

From there, you really can interpret most things as systems with embedded knowledge that both defines their activity and is adjusted by new activity. The position of each atom in an object is intrinsically both the information about that object, and the way that it behaves: as changes are made, the atoms react accordingly, affecting the behavior of the system.

You can view a seed as a "trained model" only insofar as the information on how to become a tree is encoded in the seed. The specific properties of DNNs (hidden layers that increase in abstraction) aren't really present, and anything can encode information: a set of clouds could be considered a trained model on how to make a hurricane, or a single person could be viewed as containing a trained model on how to start a company. Seeds are highly optimized through evolutionary pressure, but that applies to all complex systems, not just DNNs, and optimization is not only non-unique to DNNs, it isn't even necessarily present in the architecture itself.


Yea, sounds a bit like the definition of homoiconicity, or Lisp: "In a homoiconic language the primary representation of programs is also a data structure in a primitive type of the language itself."

https://en.wikipedia.org/wiki/Homoiconicity
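For illustration, a minimal sketch of that idea in Python (Python itself is not homoiconic, so treat this only as an analogy to what Lisp does natively): the "program" is an ordinary nested list, the same primitive data structure the evaluator itself manipulates.

    def evaluate(expr):
        # Numbers evaluate to themselves; anything else is [operator, arg1, arg2, ...].
        if isinstance(expr, (int, float)):
            return expr
        op, *args = expr
        vals = [evaluate(a) for a in args]
        return {'+': sum, '*': lambda v: v[0] * v[1]}[op](vals)

    program = ['+', 1, ['*', 2, 3]]   # roughly (+ 1 (* 2 3)) in Lisp
    print(evaluate(program))          # 7
    program[1] = 10                   # being plain data, the program can be rewritten like any list
    print(evaluate(program))          # 16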


Nice! Had not read about this concept earlier. Will research more on it. Thanks for sharing!



