
I think the cortex is attractive to AI research because its highly uniform micro-anatomy suggests there is a simple algorithm behind general intelligence to be found within. While intelligent behaviours appear to reside in the cortex, the limbic system is heavily involved in learning and cognition generally. Additionally, animals lacking a cortex, like birds, are still capable of many intelligent behaviours typically regarded as cortical. I wonder whether the cortex may be a kind of FPGA wired up by the limbic system, not naturally containing the machinery for general learning, and a red herring to the pursuit of general intelligence.



> While intelligent behaviours appear to reside in the cortex, the limbic system is heavily involved in learning and cognition generally.

We understand the limbic system to process motivation and emotion; could the limbic system's activation during cognition not be "explained away" by it being recruited to process the emotional or motivational aspects of sensory or memory data? (Just like how there is visual cortex recruitment when using visual imagination, etc.)


It's important to recognize that the cortex is a relatively late addition to the structure of the brain. Brains had been learning about the world and how to behave appropriately long before some mammals developed such a large region there. It's much more likely that the core functionality needed to be an autonomous agent in the world is encoded in the limbic system (e.g. the basal ganglia, hypothalamus, midbrain, and brainstem regions).


Certainly those regions are doing something in humans; and certainly those regions had those functions in species without a cortex. But the inference that these regions' presence in the human brain means they're the ultimate arbiter of the same functionality there isn't guaranteed to be true.

Brains really do use "subsumption" (in the https://en.wikipedia.org/wiki/Subsumption_architecture sense) to accomplish various features; there are motor signals that some earlier-evolved part of the brain would be emitting if they were the only thing "online" in the brain, that are actively suppressed or "overridden" by another, later-evolved part of the brain, often a part that only "comes online" later in brain development. (Thus "early instincts" that disappear during development, like the infant diving reflex.) Often, these subsumed neural processes reappear when the region suppressing them is damaged, as in the normally-suppressed human lordosis reflex, or the normally-suppressed-after-infancy suckling reflex in most mammals.

There's no evidence that I know of that the limbic system and the cortex are in this sort of relationship; I'm just saying that this kind of relationship isn't unprecedented as a thing human brains do.

If such a relationship were to exist between the cortex and the limbic system, then both regions could be said to be "in charge" of cognition; sort of like a television tuner and a VCR can both be in charge of the image being displayed. One state machine (the TV; the cortex), passing input through to another state machine (the VCR; the limbic system) only in some subset of its states (the right "channel"; the right arousal state.)
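The gating relationship described above can be sketched as two state machines, one forwarding input to the other only in certain states. This is purely an illustration of the analogy, not a model of any real neural circuit; all names are made up.

```python
class Inner:
    """Inner machine (the "VCR" / limbic system): reacts to whatever input reaches it."""
    def __init__(self):
        self.log = []

    def step(self, signal):
        self.log.append(signal)
        return f"inner handled {signal!r}"

class Outer:
    """Outer machine (the "TV tuner" / cortex): in 'pass-through' mode it hands
    input to the inner machine; in 'override' mode it suppresses the input itself."""
    def __init__(self, inner):
        self.inner = inner
        self.mode = "override"

    def step(self, signal):
        if self.mode == "pass-through":
            return self.inner.step(signal)
        return f"outer suppressed {signal!r}"

inner = Inner()
outer = Outer(inner)
print(outer.step("stimulus-A"))   # handled (suppressed) by the outer layer
outer.mode = "pass-through"       # switch to the "right channel"
print(outer.step("stimulus-B"))   # forwarded to the inner machine
```

Which machine is "in charge" depends entirely on the outer machine's state, which is the point of the analogy.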


> There's no evidence that I know of that the limbic system and the cortex are in this sort of relationship

It's true that in mammals (possessing a neocortex) the cortex takes over various processes traditionally performed by the limbic system. Complete removal of the cortex in rats demonstrates that behaviour can revert to entirely limbic control: https://www.ncbi.nlm.nih.gov/pubmed/564358

While the cortex can definitely be / become "in charge" of complex intelligent behaviours, it may still need the brain's phylogenetically older machinery to bootstrap it.


>https://en.wikipedia.org/wiki/Subsumption_architecture

Looks like a kind of "deep" architecture:

"It does this by decomposing the complete behavior into sub-behaviors. These sub-behaviors are organized into a hierarchy of layers. Each layer implements a particular level of behavioral competence, and higher levels are able to subsume lower levels (= integrate/combine lower levels to a more comprehensive whole) in order to create viable behavior. For example, a robot's lowest layer could be "avoid an object". The second layer would be "wander around", which runs beneath the third layer "explore the world". Because a robot must have the ability to "avoid objects" in order to "wander around" effectively, the subsumption architecture creates a system in which the higher layers utilize the lower-level competencies. "
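The quoted layering can be rendered as a toy arbitration loop, assuming the usual subsumption rule: each layer proposes an action, a higher layer may subsume the one below it, and the basic obstacle-avoidance competence is always available. The layer names come from the quote; everything else is an illustrative assumption.

```python
def avoid_objects(percept):
    # Lowest-level competence: fires only when an obstacle is near.
    if percept.get("obstacle_near"):
        return "turn-away"
    return None

def wander(percept):
    # Middle layer: a default behavior that always has a proposal.
    return "wander-randomly"

def explore(percept):
    # Highest layer: head toward an unexplored frontier when one is known.
    if percept.get("frontier"):
        return "head-to-frontier"
    return None

def act(percept, layers=(wander, explore)):
    # Layers listed lowest-priority first; a later (higher) layer's
    # proposal subsumes an earlier one's. Obstacle avoidance is kept as
    # a safety veto so higher layers can rely on it, per the quote.
    action = None
    for layer in layers:
        proposal = layer(percept)
        if proposal is not None:
            action = proposal          # higher layer subsumes lower
    veto = avoid_objects(percept)
    return veto or action

print(act({"frontier": True}))                         # head-to-frontier
print(act({"frontier": True, "obstacle_near": True}))  # turn-away
```

Brooks's real architecture wires the suppression into the layers' input/output lines rather than a central arbiter, but the priority ordering is the same idea.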

One can imagine how backpropagation-style training on "explore the world" or other high-level scenarios would result in the formation of "avoid objects" kernels at the lower levels, similar to how image-recognition deep nets produce Gabor-like kernels at their lower levels. We do have some theorems on the optimality of such deep networks for image recognition, and I wouldn't be surprised if a similar optimality held for behavioral deep networks. Also, it seems that, as with image deep nets, a deep architecture for behavior naturally allows for transfer learning by reusing the lower layers: just as one would reuse the low/mid layers of an image deep net, one would naturally reuse the lower "avoid objects" layers (and maybe some more complex aggregate behaviors from the mid-levels) for other tasks.
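The reuse-the-lower-layers idea can be sketched with a tiny feed-forward net: a frozen feature extractor shared by two task-specific heads. The weights here are random stand-ins for a previously trained net; it only illustrates the wiring, not actual training.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# "Pretrained" lower layers, kept frozen: the shared competencies
# (the "avoid objects" primitives in the analogy).
W1 = rng.normal(size=(8, 4))   # input dim 8 -> hidden dim 4
W2 = rng.normal(size=(4, 4))

def features(x):
    return relu(relu(x @ W1) @ W2)

# Two task heads share the same frozen lower layers; only these
# would be trained for each new task.
head_A = rng.normal(size=(4, 2))
head_B = rng.normal(size=(4, 3))

x = rng.normal(size=(5, 8))    # batch of 5 inputs
f = features(x)                # computed once, reused by both heads
out_A = f @ head_A             # shape (5, 2)
out_B = f @ head_B             # shape (5, 3)
print(out_A.shape, out_B.shape)
```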


The architecture of the subsumption system is fairly well mapped: from basic optical edge detectors in the banks of the calcarine sulcus to higher levels of sophistication as you move forward (object recognition, numeracy, and vocabulary in the parietal areas, then motor function, then judgement and grammar in the frontal and prefrontal cortical areas). The further dynamic of taking over or controlling traditionally limbic behavior (emotion) is an additional process, which I would characterize as a blend of subsumption and adaptation: the cortex acquires data and abstracts it, but also does so as an adaptive response.


Exactly. I think when people talk about intelligent machines today, what they are most after is autonomous, robust, common-sense orientation and agency. It's not the ability to do algebra or reason logically, because we can actually already build systems that are quite good at that.

What we really want is a robot or a car that can orient itself in the absence of structured data and doesn't glitch out into the wall once it loses its objective. And even primitive animals are very good at that.


You're ignoring a lot of its function. The hippocampus (within the limbic system) is necessary for long-term memory. If your hippocampi are removed, you still act more or less the same, but you can't remember new events or form semantic memories. Attention is also an important function of the limbic system.


The famous case is that of Henry Molaison (HM): https://en.wikipedia.org/wiki/Henry_Molaison


I know the brain isn't a computer, but why do we think examining the structure is going to reveal an algorithm for intelligence? Isn't that like taking an iPhone, cutting it into slices and studying it with the hopes of finding how GarageBand works?


Funny you'd mention that: there was a study by Jonas and Kording [0] that treated a microprocessor as an organism and applied the analytic methods used in neuroscience to see whether they could figure out how it processes information.

[0] Jonas, E. and Kording, K.P., 2017. Could a neuroscientist understand a microprocessor?. PLoS computational biology, 13(1), p.e1005268.

https://doi.org/10.1371/journal.pcbi.1005268


I think it is named after a 2002 study, "Can a biologist fix a radio?", which argued that the then-current methods in biology were inadequate for understanding a living body. It was a plea for more systems biology, but even though it is widely known, it has not changed much about the way biology is done.

https://www.math.arizona.edu/~jwatkins/canabiologistfixaradi...


In computer science, once you understand a data structure, its algorithm is often obvious [1]. Maybe the brain will be like that too, i.e. once you get its structure, you get its algorithm for free.
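A concrete instance of the structure-gives-you-the-algorithm point: once you know a binary search tree's invariant (left subtree < node < right subtree), both search and sorted traversal fall out with essentially no further design choices. A minimal sketch:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def contains(node, key):
    # The invariant dictates which subtree to look in; there is no other choice.
    while node is not None:
        if key == node.key:
            return True
        node = node.left if key < node.key else node.right
    return False

def in_order(node):
    # The same invariant makes "visit left, self, right" yield sorted order.
    if node is None:
        return []
    return in_order(node.left) + [node.key] + in_order(node.right)

root = Node(5, Node(2, Node(1), Node(4)), Node(8, None, Node(9)))
print(in_order(root))     # [1, 2, 4, 5, 8, 9]
print(contains(root, 4))  # True
```

Whether brain structure will be as transparent as a BST invariant is, of course, the open question.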

[1] https://news.ycombinator.com/item?id=20047607


But examining the circuits of a computer doesn't tell you anything about a data structure, does it?

What would an iPhone schematic tell you about how GarageBand works?


Have you ever seen one of Professor Sussman's talks where he analyzes a circuit diagram?

For example, see Prof Sussman's 2011 StrangeLoop talk (circuit analysis example begins at ~25 min mark)...

We Really Don't Know How To Compute! https://www.infoq.com/presentations/We-Really-Dont-Know-How-...

If you have the hardware schematic or the physical hardware and enough time, you can figure out what the hardware does. You can determine what its constraints are, and if you understand it well enough (for simplicity's sake, let's say you understand the hardware up to the level of the engineers who designed it), you can tell what the hardware system can and can't do and what kinds of code are required to make the hardware work. You can tell at a low level what the GarageBand developers had to work with when they designed their app. And once you know the required code, you can write software to generate it and make the hardware work. And if you're really good and have the right tools, you can analyze the hardware and/or model the data flows to determine what the optimal data structures must be, based on the hardware's capacity constraints and data flow.

Google "reverse engineering hardware chip circuits" or watch Ken Shirriff's 2016 Hackaday talk...

Reading Silicon: How to Reverse Engineer Integrated Circuits https://www.youtube.com/watch?v=aHx-XUA6f9g


It would tell you that GarageBand was a structure of processor opcodes that executed on the processor and had the ability to interface with a storage device, a video screen and an audio processor, among other things, and that the opcodes for GarageBand were likely stored on said storage device.

Maybe you wouldn't understand GarageBand yet, but you'd have a solid set of next steps for your research.


Would the layout of transistors indicate that opcodes exist?


Or we could do it the way physicists design their experiments: smash two iPhones together at tremendous velocities and see what pops out. Can you imagine early anatomists taking the same approach because the Catholic church forbade cutting open the body?


But before we can examine the algorithms on an iPhone we need to understand the hardware, and cutting it open and looking at the structure is definitely a step on that path.


Most A.I. researchers believe the brain is a computer.


For people unfamiliar with the acronym (from Wikipedia): "A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing..."


> I wonder whether the cortex may be a kind of FPGA wired up by the limbic system, not naturally containing the machinery for general learning, and a red herring to the pursuit of general intelligence.

The challenge is to discover the nature of "general intelligence", I'd say. To me, it seems like general intelligence would not actually be a set of specific behaviors but rather a process that integrates, extends, mediates between, and connects specific behaviors. Probably whatever-does-general-intelligence would both program and be-programmed-by the limbic system and whatever other systems it relates to, and given this the cortex seems like a good candidate, since it accompanies the human ability to act "generally" (and it is quite possible that similar functionality might reside in a different system in different brains; see the comment by derefr: https://news.ycombinator.com/item?id=20328311).


I also find it interesting that birds don't have a neocortex: my pet parrot exhibits a few very intelligent behaviors.

The newer Thousand Brains theory feels close to correct to me. I used the older HTM for a fairly quick time-series anomaly-detection experiment and it was generally promising. I would be curious how well a Thousand Brains approach would work.


"Homolog of mammalian neocortex found in bird brain"

https://www.sciencedaily.com/releases/2012/10/121001151953.h...



