What Can AI Get from Neuroscience? (2007) [pdf] (gatech.edu)
45 points by koopuluri on Aug 22, 2015 | 30 comments



It's going to be the other way around. The brain is a knotted mess of connections between lots of different neuron types, and I don't believe the complexity of the brain is a necessary feature of Intelligence. Because of this extra complexity, the brain will serve as a poor model system for Intelligence.

I think deep neural networks running on GPUs will teach us more about Intelligence. Eventually, neuroscientists will have to transfer ideas from machine learning into their notions of how the brain works.


> It's going to be the other way around.

I don't know if it's so much that neuroscience needs AI to understand the brain or not. I mean, maybe. I'm not exactly an expert on either, but I know more about AI than I do neuroscience. But my take has always been more in line with what somebody else said about how "in order to learn to fly, we didn't build a bird".

To me, that gets to the core issue. In order to build "artificial intelligence", we don't necessarily have to build something that works exactly like the human brain does. I mean, it's called artificial intelligence for a reason.

That's certainly not to say that studying the brain is worthless. Far from it. But I do tend to think that a better approach to building an AI is to just focus on building something that displays intelligence (however you want to define that) without worrying about whether or not it is essentially a human brain replicated in silicon.



How can the brain [1] be a poor model of intelligence, if our notions and very definitions of intelligence are based on the intelligence it provides?

[1] Of course, considering only the brain is wrong and reminiscent of the dualist myth. By brain I mean the nervous system, which is an open micro system embodied in a larger and also open macro system - the organism.


Look at how birds fly. Then look at how planes fly.


Yes, birds are self replicating units that subsist on food available in their environment. Without food or water a bird will, like any other animal, die in a few days. They're small and flexible. Their range is quite far, depending on species, and some of them use tools.

Planes on the other hand are artifacts, manufactured to spec. They rely on highly specialized and rarefied fuel, without which they will catastrophically fail immediately. Their range is tremendous and they've become an integral part of the human species.

Or did you mean the difference between flapping and fixed wing?


We want to build AI. We can't build bird. We can build plane.


More to the point, building planes was a better way to discover and deeply understand the principles of aerodynamics than studying birds. Similarly, building working AIs will be a better way to discover and deeply understand the principles of intelligence and consciousness than studying brains.


Brains are the only pieces of hardware on the planet that exhibit intelligent behavior. I think it's a bit arrogant to think we can do better when we don't even know how the thing works!


The brain is the product of random trial-and-error filtered through selection (Evolution). This process was simple enough to give rise to Intelligence without the need for an example. Now, I'd like to think we can at least do as much using our own brains instead of relying on dumb luck.
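
The process being credited here is tiny to write down - a minimal variation-plus-selection sketch in Python (the target string and mutation scheme are arbitrary, purely to show the mechanism):

    import random

    # Minimal variation-plus-selection loop: random mutation filtered by a
    # fitness test, the process the comment credits with producing brains.
    # Target and parameters are illustrative only.
    TARGET = "intelligence"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def fitness(s):
        # Count positions that already match the target.
        return sum(a == b for a, b in zip(s, TARGET))

    candidate = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while candidate != TARGET:
        # Mutate one random position; keep the child only if it is no worse.
        i = random.randrange(len(TARGET))
        child = candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]
        if fitness(child) >= fitness(candidate):
            candidate = child
        generation += 1

    print(f"reached {candidate!r} after {generation} mutations")

Nothing in the loop knows what the target means; the selection filter alone does the work.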


We have lots of examples of hardware and software that does things-previously-thought-intelligent much faster than brains do them.


My take is that's because brains don't do those sorts of things natively. Meaning, no, there isn't some level of abstraction in animal or human brains that does lambda calculus. Which paradoxically makes us treat being able to do lambda calculus as a mark of intelligence, when nothing is further from the truth; it's just that our brains are bad at it.
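
For concreteness, this is the kind of pure symbol manipulation "doing lambda calculus" means - a toy Church-numeral sketch in Python (the encoding and helper names are just for illustration):

    # Church numerals: numbers encoded purely as function application,
    # the sort of abstract symbol shuffling brains have no native circuitry for.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):
        # Decode by counting applications of an ordinary successor on 0.
        return n(lambda k: k + 1)(0)

    two = succ(succ(zero))
    print(to_int(plus(two)(two)))  # 4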


I realize that computers do things better than brains in many cases, but they are nowhere near as flexible or adaptable. Then again, I don't care about narrow AI, so raw performance isn't particularly important to me.


Causal induction is a damn powerful thing.


Actually, anything intelligent wouldn't have used hardware, form factors, or mimics of form factors... once.

Anything with a form factor or using hardware is rather limited, and useless...


I agree with you to an extent -- it's all computation, after all -- but I don't think we're too far off from understanding the brain.

We know a good deal about how the individual cells work at the molecular / electrophysiological level, and how simple circuits function; and we also have a fair understanding of the gross structural organisation of the brain.

It's the middle layer that's the big unknown: how simple circuits link up to form complex behaviours. But it's not an impossible problem. I suspect looking at the smaller nervous systems of invertebrates will give us the conceptual understanding of these interactions, which we can then scale to understand mammals. We're largely waiting on the technology to map these circuits, and the computational tools to simulate and analyse them.
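
As a sense of scale for those tools: a single model neuron is only a few lines. Here's a minimal leaky integrate-and-fire sketch in Python, with illustrative parameters rather than values fitted to any real cell:

    # Minimal leaky integrate-and-fire neuron: the kind of single-cell model
    # that circuit simulations scale up to whole networks.
    dt = 0.1                                         # time step (ms)
    tau = 10.0                                       # membrane time constant (ms)
    v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0  # potentials (mV)

    v = v_rest
    spike_times = []
    for step in range(2000):                         # simulate 200 ms
        i_inj = 20.0 if 500 <= step < 1500 else 0.0  # injected current (arbitrary units)
        v += (dt / tau) * (v_rest - v + i_inj)       # leak toward rest, plus input
        if v >= v_thresh:                            # threshold crossing: emit a spike
            spike_times.append(step * dt)
            v = v_reset

    print(len(spike_times), "spikes; first few at (ms):", spike_times[:5])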


What I can't wrap my head around is the difference between "computation" and consciousness when it comes to intelligence. If intelligence is just the ability to receive the right signals and output the right signals to reach a certain goal, then the more inputs and outputs you have, the more intelligent you are. But that is not how we usually think of intelligence. Intelligence is more like the ability of a limited organism, with limited inputs and outputs, to make the most of a situation. We can already design a super intelligent system that processes inputs exactly right and outputs exactly right, but you need the central control and the limited body that enable some kind of will and prioritization - and I haven't even gotten to conscious experience yet.

Rather than being complex computation, intelligence is about flexibility. But it may be that, due to the complex nature of our biology, all that flexibility is just a lot of inputs and outputs, in which case we're back to square one and the limitations of the body, etc. In some odd way, intelligence is quite disappointing.


> I don't believe the complexity of the brain is a necessary feature of Intelligence.

What makes you say that? It seems to me that connectivity is the biggest thing that separates the brain and its self/mind-making ability from the rest of the organs.


Having lots of connections may be important to Intelligence, but the brain is a knotted mess of connections. It's the knotted part that I am objecting to.


There seem to be some interesting mechanisms that arise due to the mess. The author mentions that delays, rather than being viewed as a problem, can be viewed as something that conveys additional information about the environment.

These delays are a result of the complexity. Sure, replicating the knotted mess might not be an approach that works, but using the mechanisms that arise in the brain to handle complexity would be very useful.
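
A toy version of that mechanism: a bank of coincidence detectors where each unit applies a different delay to one input, so the delay itself encodes the stimulus lag. A hypothetical Python sketch (the classic biological version is the Jeffress delay-line model of sound localization):

    import numpy as np

    # Each "unit" pairs a different delay on the left input with the undelayed
    # right input; the unit whose delay matches the true lag responds most
    # strongly, so the delay carries the information. Numbers are arbitrary.
    fs = 10_000                          # samples per second
    t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal
    true_lag = 5                        # right input lags left by 5 samples

    left = np.sin(2 * np.pi * 500 * t)
    right = np.roll(left, true_lag)

    # Pick the delay line whose output coincides best with the right input.
    best = max(range(10), key=lambda d: float(np.dot(np.roll(left, d), right)))
    print("coincidence peaks at delay", best, "samples (true lag:", true_lag, ")")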


I think they might be arguing against unknowability as having the opportunity to perform structural and load-bearing functionality. Nature works so well because there's no watchmaker; nobody has to have a finite set of skills, time perception, or a model to create something useful/interesting/adaptive like a mind-and-body complex.


> Having lots of connections may be important to Intelligence, but the brain is a knotted mess of connections.

Ok, thank you for the clarification. I still don't see why that's a problem for you. Is it your intuition? Practical experience? Academic knowledge?

I for one am pretty generous with what I think it takes for intelligence, or for that matter the universe itself, to emerge. I don't think the universe is at all constrained by our imagination.


Sounds very reminiscent of what Jeff Hawkins has been talking about. His company, Numenta, is specifically about creating AI by understanding the neocortex. Fascinating stuff.


How about the ethical side? If one copies the human brain more or less exactly, then ethics will almost certainly become a big issue.


You would be amused at how many of my computational colleagues completely fail to see the ethical issues of having a perfect working simulation of the human brain in a computer.


How does it matter if it's an exact copy? I would argue that a completely different implementation of real intelligence still prompts the same ethical questions.


>If one copies the human brain more or less exactly

Is there a reason to think that that is even a possibility?


At this point it's difficult to say whether it is or isn't a possibility.



We could seek to build an animal brain, which would also be very useful.



