
This whole area of research is more hype than substance.

First, note that what they tout as "the brain" is actually just a very, very simplified model of it. If you really want to model the brain, there's a hierarchy of ever more complicated spiking neural nets. Even simulating a single synapse in full detail can be challenging.
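To make "hierarchy" concrete: at the very bottom of it sits the leaky integrate-and-fire neuron, a single ODE with a reset rule. A minimal sketch (all constants are illustrative, not fitted to any real neuron):

    # Leaky integrate-and-fire: the simplest rung of the spiking-model
    # hierarchy. One ODE, dV/dt = (-(V - V_rest) + R*I) / tau, plus a
    # hard reset whenever V crosses threshold.
    dt, tau = 0.1, 10.0                              # ms
    v_rest, v_thresh, v_reset = -65.0, -50.0, -70.0  # mV
    r, i_in = 1.0, 20.0                              # resistance, input current
    v, spike_times = v_rest, []
    for step in range(1000):                         # 100 ms of simulated time
        v += dt * (-(v - v_rest) + r * i_in) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    print(len(spike_times), "spikes in 100 ms")

Every rung above this (Hodgkin-Huxley, multi-compartment, molecular) adds coupled equations, and the cost climbs accordingly.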

Having said that, the fact that some models used in practice have been found to be equivalent to a neuroscientific model is not that impressive, since it explains neither the inner workings of the brain nor modern ML models. Unfortunately, Quanta Magazine's editors are riding the hype wave too hard to notice that.

Note also that Whittington's other work, on predictive coding networks, is not really solid either. It was a pretty irritating experience to read some of his work. That makes me skeptical of how rigorous his claims are in this case.




> Even simulating a single synapse in full detail can be challenging

I wonder how common it is for serious commentators on the NN/brain relationship to hold the supposition that, for the two to be called... let's say "functionally similar/equivalent", there would have to be some kind of structural equivalence in their most basic parts.

Neurons (and their synaptic connections etc.) developed in a biochemical substrate, which brings a certain amount of its own representational baggage with it: elements that are loosely incidental to "what really matters" in creating the magic of the brain. We should not expect those features to reappear in artificial NNs (they are by definition incidental); bringing them up could only establish a trivial non-equivalence, imo.

I'd like to see more discussions of the NN/brain relationship state which level or kind of equivalence they're refuting or confirming.


That has never been thought by anyone but computer scientists who never looked at a biology textbook. To begin approximating what a lone spherical synapse would actually do, you'd need to solve 2^n coupled second-order differential equations, where n is the number of ion species involved.

That is before you throw in things like neurotransmitters and the physical volume of a cell. Simulating a single neuron accurately is beyond any supercomputer today. The question is how inaccurately we can simulate one and still get meaningful answers.

And then, how we do it 100e9 more times.

source: https://news.ycombinator.com/item?id=32407028

There is interesting discussion there.
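To put numbers on "beyond any supercomputer": even the classic Hodgkin-Huxley model, itself a drastic simplification (one compartment, three currents, no transmitters, no geometry), is already four coupled nonlinear ODEs. A rough forward-Euler sketch with the textbook squid-axon parameters:

    import numpy as np

    # Hodgkin-Huxley: four coupled nonlinear ODEs (V, m, h, n) for a
    # single point-like compartment, standard squid-axon constants.
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.387

    def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
    def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
    def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
    def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

    dt, I = 0.01, 10.0                   # ms, injected current (uA/cm^2)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32  # resting initial conditions
    for _ in range(int(50 / dt)):        # 50 ms of simulated time
        V += dt * (I - gNa * m**3 * h * (V - ENa)
                     - gK * n**4 * (V - EK)
                     - gL * (V - EL)) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    print("V after 50 ms:", round(V, 2), "mV")

And this still treats the whole cell as a point: add dendritic geometry, stochastic channel gating, and transmitter diffusion, and the cost explodes, which is the point above.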


While I'm neither a biologist nor a CS PhD, I want to call out the fallacy that simulating a system to a sufficient degree requires simulating each individual molecule in exacting detail.

We've gotten quite far with the ideal gas law without needing to simulate every particle; we used kerosene to get us to the moon without needing to simulate all the reaction species of kerosene combustion; etc.
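To make the ideal-gas point concrete: one algebraic law stands in for ~6e23 particle trajectories per mole. A trivial sketch (the inputs are arbitrary):

    # Ideal gas law: P = nRT / V. No per-particle simulation needed.
    R = 8.314                          # J / (mol K)
    n_mol, T, V = 1.0, 300.0, 0.0224   # mol, K, m^3
    P = n_mol * R * T / V
    print(f"P = {P / 1000:.1f} kPa")   # ~111 kPa, near atmospheric

The right level of abstraction depends on the question being asked, not on the smallest moving part in the system.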


> there would have to be some kind of structural equivalence in their most basic parts.

Of course there doesn't have to be such an equivalence, and I didn't want to imply one. What I did want to imply, though, was that unless there is some relationship between NNs and the brain, there is no meaningful way to translate results from one to the other. And currently, AFAIK, we do not have a good "dictionary" for that. Something like what RandomBK has mentioned is still missing.

That being said, I would also like to see more NN/brain relationship discussions. Currently the discussions are at a really basic level: there were a number of papers on whether "the brain" does backpropagation, which was pretty useless science because, again, the brain was modelled in a pretty crude way. (The literature is huge and I don't claim to be omniscient, so perhaps there is something out there already.)
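For a flavor of what that literature debates: one recurring proposal is "feedback alignment" (Lillicrap et al. 2016), where the error travels backwards through a fixed random matrix instead of the transposed forward weights, sidestepping the biologically implausible "weight transport" that vanilla backprop requires. A toy numpy sketch; the data, layer sizes, and learning rate here are made up purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))            # toy inputs
    y = X @ rng.normal(size=(10, 1))          # toy linear targets

    W1 = rng.normal(size=(10, 20)) * 0.1      # forward weights
    W2 = rng.normal(size=(20, 1)) * 0.1
    B = rng.normal(size=(1, 20))              # FIXED random feedback matrix
    lr = 0.01

    for step in range(2000):
        h = np.tanh(X @ W1)
        err = h @ W2 - y                      # prediction error
        # backprop would use (err @ W2.T); feedback alignment uses B
        delta_h = (err @ B) * (1 - h ** 2)
        W2 -= lr * h.T @ err / len(X)
        W1 -= lr * X.T @ delta_h / len(X)

    print("final MSE:", float((err ** 2).mean()))

Whether anything like this happens in actual cortex is exactly the kind of question that crude brain models can't settle.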


Intentional or incidental biomimicry of high-level neural behaviors and structures by ML/AI researchers is hardly a new kid on the block. Sure, neuron-centered research is still at the level of discerning urban activity by looking at lighting patterns, but it has obvious practical value even if transformers and diffusion are only loose approximations of what's actually running in wetware.


What would that obvious practical value be? It's not so obvious to me; I would rather say "limited practical value".


General purpose transformers and diffusion models are of limited practical value? Or biomimicry inspired research and design in general?


You were talking about the practical value of comparing transformers etc. to neural structures: "neuron-centered research is still at the level of discerning urban activity by looking at lighting patterns, but it has obvious practical value".

This type of research has limited practical value, for the reasons outlined in my other comments here. Transformers etc. have a lot of practical value.



