
https://en.wikipedia.org/wiki/Fixed_action_pattern

More generally, reptiles are born with nearly all the behaviors they'll need throughout life. Why wouldn't humans be born with some?




>action patterns are said to be produced by the innate releasing mechanism, a "hard-wired" neural network, in response to a sign stimulus or releaser

This is exactly what I'm talking about. Just like a baby deer "instinctively" can walk but wobbles around for the first few hours, what you're seeing is something very similar to a purpose-evolved neural network structure whose weights are being set through the principle of neurons that fire together wiring together (I forget what it's called).

I can't believe I got -4 for that!

Edit: Hebbian learning. Point is, it's probably far too much information to encode in DNA, but if you structure your neural network properly, you encode, how could I put it, the general topology of the problem you're attempting to solve, and then "fill in the blanks" by training the weights through reinforcement learning (or Hebbian learning, which functions similarly).
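
To make that concrete, here's a toy Hebbian-update sketch (the sizes, learning rate, and normalization are my own illustrative choices, not from any particular model). The connectivity is fixed up front; only the weights get filled in by correlated activity:

    import numpy as np

    # "Neurons that fire together, wire together": weights start
    # near-blank and are set by correlated pre/post activity.
    rng = np.random.default_rng(0)
    n_pre, n_post = 8, 4                              # illustrative sizes
    W = rng.normal(scale=0.01, size=(n_post, n_pre))  # near-blank innate weights
    eta = 0.1                                         # learning rate

    for _ in range(100):
        x = rng.random(n_pre)        # pre-synaptic activity (stimulus)
        y = W @ x                    # post-synaptic activity
        W += eta * np.outer(y, x)    # Hebbian rule: dW = eta * y * x^T
        W /= np.linalg.norm(W)       # crude normalization so weights don't blow up

The fixed shape of W is the "general topology" encoded up front; learning just sets its values.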


There's even a lot of procedural building during fetal development, with limited input.

Brain scans of not-yet-born babies show specific kinds of brain waves.


Your hypothesis feels correct, and is one of three parts that I feel are missing from current deep learning networks.

1) Pre-existing structures that are already specialized for the necessary tasks, but untrained. We kind of mimic this with transfer learning (see the first sketch after this list), and by discovering more appropriate general architectures by hand.

2) Training while inferring. We very crudely approximate this by releasing updated models every month, but I think it would be best if it were also performed at the edge. Google has begun doing this, and I have hope for 'federated learning' [0] (see the second sketch after this list).

3) 20+ years of exaflop training.
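
On (1), a minimal transfer-learning sketch in PyTorch (the backbone choice and the 10-class head are just examples, not a prescription): the pretrained backbone stands in for the "pre-existing structure", and only a fresh, untrained head is fitted to the task.

    import torch
    import torchvision

    # Reuse a pretrained backbone as the pre-existing structure;
    # train only a blank task-specific head.
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    for p in model.parameters():
        p.requires_grad = False                           # freeze inherited structure
    model.fc = torch.nn.Linear(model.fc.in_features, 10)  # untrained head (10 classes assumed)
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    # ...then train as usual: only the head's weights get filled in.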
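
On (2), the core idea behind federated averaging (the approach in the Google post linked at [0]) fits in a few lines. This is my own heavily simplified sketch with uniform client weighting and fake local training, not their actual implementation:

    import numpy as np

    def local_update(weights, local_data, eta=0.01):
        # Stand-in for a few steps of on-device training:
        # nudge weights toward a per-device target.
        grad = np.mean(local_data, axis=0) - weights
        return weights + eta * grad

    def federated_round(global_w, clients):
        # Each device trains locally; only the resulting weights
        # (never the raw data) are averaged centrally.
        updates = [local_update(global_w, data) for data in clients]
        return np.mean(updates, axis=0)

    rng = np.random.default_rng(1)
    global_w = np.zeros(4)
    clients = [rng.normal(size=(16, 4)) for _ in range(5)]  # per-device data
    for _ in range(10):
        global_w = federated_round(global_w, clients)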

More narrowly focused on this article: I believe researchers keep finding that models architected to solve the most general case possible consistently perform better on highly specific tasks than models trained only on those specific tasks. Creating models that understand general physics definitely follows that trend. Although I suspect (as I believe you do) that scaling will be hampered without some sort of ML equivalent of "fixed action patterns".

My thinking about this topic has been strongly guided by a special issue of Scientific American Mind that I read in 2013 [1]. The issue was hard for me to find today because, being a special edition, it's not listed in the usual archives: Scientific American Mind, September 2013, Volume 22, Issue 3s.

The whole issue is devoted to optical illusions and what they can tell us about how our brains use evolutionary shortcuts to efficiently make sense of the real world. "In the wild", these shortcuts improve the accuracy and speed of inference, but with artificial stimuli they can lead us astray, as artificially generated optical illusions demonstrate.

As for the -4 (which is the maximum negative you can go on HN) I think some people just saw the first part and clicked downvote at that point.

> I don't think that's quite right. I believe that humans are essentially born as blank neural networks

I wouldn't worry about the vote counter. "Those who play for applause, that's all they'll get." -Wynton Marsalis' dad.

Following up like this to clarify for us idiots is really the best thing to do, maybe editing the original comment for clarity if you really feel like it.

[0] https://ai.googleblog.com/2017/04/federated-learning-collabo...
[1] https://www.scientificamerican.com/magazine/special-editions...



