A bit late, but a few comments on how this thing actually works:
In their model brain, they build functionality for 10 different tasks. These tasks were chosen to model those tasks used in psych evaluations, cognitive experiments, etc, so that the model can be compared to experiments with real humans. Good scientific choice there.
The user must design the function that she wants. The program then finds a particular network that can implement the function. This is like when you need to pay 55 cents, and you have a bunch of change, you can pretty much find multiple ways to make 55 cents (2 quarters + 5 pennies, 9 nickels + 1 dime, etc).
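To make the analogy concrete, here's a throwaway brute-force sketch (mine, not anything from the paper) that enumerates the ways to hit 55 cents, just to show that many different combinations implement the same "function":

    # Toy illustration of the coin analogy (not the researchers' method):
    # count the ways to make 55 cents from standard US coins.
    from itertools import product

    COINS = {"quarter": 25, "dime": 10, "nickel": 5, "penny": 1}
    TARGET = 55

    ways = []
    for counts in product(*(range(TARGET // value + 1) for value in COINS.values())):
        total = sum(c * v for c, v in zip(counts, COINS.values()))
        if total == TARGET:
            ways.append(dict(zip(COINS, counts)))

    print(len(ways), "distinct combinations, e.g.", ways[0])

Finding a spiking network that implements a desired function is the same kind of problem, only over a vastly bigger space of solutions.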
So for each task, they find a minimal mathematical description of the task in terms of eye movements and visual stimuli. Then they fit a neural network to that description. This part is interesting neither scientifically nor mathematically, as we have known for 20 years that neural networks are universal function approximators. Look it up on Google Scholar; this is textbook stuff.
What IS interesting is what happens when they try to build up a brain, putting all these tasks into the same large network. This part is novel: they find that as they implement more and more tasks, it becomes easier to do so, because they can adapt already-existing components for use in the new task (exaptation, to borrow the term Daniel Dennett likes to use). This is a vindication of evolutionary thinking, but in the context of cognitive neuroscience. Pas mal, as they say.
But these kinds of details aren't obvious unless you have domain knowledge, and you've seen Chris Eliasmith speak a few times, and you've thought, mmm, hold on a sec, show me them equations (which he doesn't usually do).
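If you want a feel for what "fit a neural network to that description" can look like, here's a rough sketch of the general idea as I understand it: fixed random tuning curves plus a least-squares solve for the readout weights. The actual machinery in the paper is more elaborate, so treat this only as a cartoon.

    # Cartoon of function approximation with a fixed random nonlinear layer:
    # pick random "tuning curves", then solve least squares for the readout.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 200)[:, None]          # stimulus values
    target = np.sin(np.pi * x)                    # the function we want implemented

    gains = rng.uniform(0.5, 2.0, (1, 100))       # 100 fake neurons with random
    biases = rng.uniform(-1.0, 1.0, (1, 100))     # gains and biases
    activity = np.maximum(0, x @ gains + biases)  # rectified-linear responses

    decoders, *_ = np.linalg.lstsq(activity, target, rcond=None)
    print("max reconstruction error:", float(np.abs(activity @ decoders - target).max()))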
I came here to ask if someone with "domain knowledge" could explain the significance of this research. You had it, and even used the same magic words. Thanks!
As for my own comments, I think it's actually a promising approach but that down the road it will be necessary to model emotions if human-like AI is the goal, and that this seems... harder. So much of our thought and behaviour is driven by emotion that I would say it was actually primary.
Emotions are difficult, as they're (probably) as much physiological responses as they are cognitive ones. You could possibly simulate those, but I'd ask why bother?
Why would we need AI to be human-like at all? We already have a form of human intelligence, us!
It's like the android fixation with robotics... arguably the most useful robots don't look anything like us.
The human brain is a good starting point, but in the long run I think the most useful AIs will probably be the ones that DON'T think like us.
Well, one argument is that if you had a human that just didn't feel emotions, you'd have a psychopath. Our capacity for empathy and the ability to share in each other's emotions stops us from doing harmful things that are in our 'rational self-interest'. I'm not convinced that I want to live in a world with AIs that are as cognitively powerful as humans but are incapable of feeling emotions, because I'm afraid they'd behave like psychopaths.
But, you know, maybe you're right, if human safety is somehow factored into things then it could work.
I'm not sure it's right to say that psychopaths are without emotion. I'd call self-interest and self-preservation emotional impulses. People naturally assume that AIs will exhibit those, but that assumption is driven by our experience with existing intelligences, which have all been shaped by natural selection.
With respect to an emotionless AI, the term "psychopath" is out of left field. The term "psychopath" is associated with danger because of certain dangerous human psychopaths. And those people had a whole set of motivations and impulses that wouldn't exist in an emotionless AI.
If you want to argue that an emotionless AI would be dangerous, go ahead, but the term "psychopath" is a poor fit to that case. It brings in too much extra baggage.
And I'm not sure that self-interest and, more importantly, self-preservation can be seen as emotions. These two are baked into every living organism, even the most primitive ones, and it would be difficult to argue that those organisms have emotions. Then again, it depends on how you define what an emotion is.
Psychopaths have emotions, they just aren't mature enough to appreciate them fully. [1] With that out of the way, we can talk about emotions.
Most of them are simple cognitive responses. For instance, if you see a cute puppy you have to at least smile. At the other end of the spectrum we have the intricate web of a woman's emotions: hard to explain, and almost impossible to even grasp. As no one seems to understand that problem well, we could leave it out of the specs for now.
Just to clarify, my reasoning was no emotions -> no empathy -> psychopathy. Although I believe psychopathy -> no empathy, I don't believe no empathy -> no emotions.
> You could possibly simulate [emotions], but I'd ask why bother?
I've been thinking about this lately and my working theory is that emotions are a fundamental driver of learning, because they give us pleasure when we accomplish something.
A CPU can complete billions of operations per second but doesn't care which ones. Because it experiences no pleasure from doing something useful, it is not at all self-directed; we must tell it exactly what to do. There is nothing to guide it through the search space of all possible things it could be doing.
My theory is that a very basic part of artificial intelligence will be missing until a machine exhibits some kind of emotion.
The "emotional" reward system is just an evolutionary mechanism to make sure we eat and have sex. The downside is that it is primitive and open to exploitation. Obviously I'm oversimplifying, but why would an AI try to learn things when it could be emotionally satisfied by playing games, socializing, or taking some sort of drug?
I see it differently; to me the "emotional" reward system is what makes you feel good every time you have an "aha" moment that pushes the boundaries of your knowledge. It is what is exploited by Zynga to make us keep clicking buttons that don't actually accomplish anything. It's what keeps you coding into the night because you thirst to see the system working, and it's what makes you feel like a badass once it does.
If you felt like a badass after doing something worthless, like adding a million random numbers, your mental faculties would go to waste. You would spend all of your time and mental energy doing things that don't matter. Our minds are capable of doing an infinite number of useless things, just like a CPU without a program. Only because we get a sense of satisfaction from doing interesting things can we be productive. We're searching an infinite graph for "interesting" nodes, where "interesting" is determined by how good it feels to get there.
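If you want that idea in code, here's a toy I just made up (no claim about real brains): a greedy search over an infinite graph where a reward function stands in for "how good it feels" to reach a node.

    # Toy of reward-guided exploration: greedy best-first search over an
    # infinite graph of integers, steered by a made-up "interestingness" score.
    import heapq

    def neighbors(n):
        return [n + 1, n * 2]          # from n you can add one or double

    def reward(n):
        return -abs(1000 - n)          # stand-in for emotional payoff: near 1000 feels good

    def explore(start, steps=50):
        frontier = [(-reward(start), start)]
        seen = {start}
        best = start
        for _ in range(steps):
            _, node = heapq.heappop(frontier)
            if reward(node) > reward(best):
                best = node
            for nxt in neighbors(node):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (-reward(nxt), nxt))
        return best

    print(explore(1))  # lands on 1024, close to the goal, after only ~50 expansions

Without the reward function there is nothing to prefer one node over another, which is exactly the "CPU with no program" situation.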
You're right that a lot of the most useful AIs won't (or don't) need to think like humans, but one large class of AIs that will at least need that capability are the ones that have to interact with people in social settings. I'd be surprised if that didn't become common.
As for simulating emotion, there's a bit of evidence that such abilities may have therapeutic applications:
Ah, but that's not simulating emotion, it's simulating the appearance of emotion. Big difference!
If we were going that route, I'd rather an AI actually feel empathy, rather than just being able to look like they do... Because faking it is pretty much textbook psycho!
Edit: Just to clarify, when I say AI, I'm referring to strong AI, not anything that we have now (or are even close to getting).
Well, yes. The Chinese Room claims that there is a gulf between being able to pass a Chinese Turing test versus being able to actually understand Chinese and have intentionality.
Unlike intentionality, however, emotions aren't just for show: I act differently depending on my emotions. For example, when I'm angry, I sometimes act irrationally. My actions, therefore, depend on my emotions. To program emotions (anger) and their effects (irrationality) is difficult (how do you program irrationality?).
For example, the robot your GP linked to is different from other robots because it has "obvious emotional expressions". These emotional expressions probably don't translate into different behaviors; when the robot has an angry expression, it's not going to hit the child it's playing with, it's just going to display an angry face.
So, yes, there is a difference between ersatz emotions and real emotions. That's why the Chinese Room argument isn't quite a parallel. For the Chinese Room, it's hard to tell whether or not the computer has intentionality just by speaking to it; for an emotional robot which acts, it's easy to tell whether or not it has actual emotions.
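For what it's worth, here's one naive way someone could "program irrationality": let an emotion variable crank up the randomness of action selection, so an angrier agent picks worse actions more often. I'm not claiming this is what real emotions are, only that the modelling question has at least crude answers. The names and numbers are all made up.

    # Naive sketch: an "anger" level in [0, 1] raises the softmax temperature
    # of action selection, making choices noisier (less "rational").
    import math
    import random

    def choose_action(values, anger):
        temperature = 0.1 + 3.0 * anger
        weights = [math.exp(v / temperature) for v in values]
        r = random.random() * sum(weights)
        for action, w in enumerate(weights):
            r -= w
            if r <= 0:
                return action
        return len(values) - 1

    payoffs = [1.0, 0.2, -0.5]  # action 0 is clearly the best choice
    for anger in (0.0, 0.5, 1.0):
        picks = [choose_action(payoffs, anger) for _ in range(1000)]
        print(f"anger={anger}: best action chosen {picks.count(0) / 10:.0f}% of the time")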
I don't find the Chinese room argument convincing in the least. After all, it's the software that does the understanding, not the hardware. Otherwise, it would be like asking my fingers to understand this comment, or even asking my brain, when it's really my mind that's understanding.
Speaking of rationality, it's probably safe to say that you will end up with a sometimes-irrational program no matter what you do. Neural networks have glitches, configuration spaces have local minima, software has bugs. It's as if you saw AI as being "programmed" via a series of rules, when it's more of a growth process, arising from probabilistic dumb agents. Even if those agents were rational, the system could become irrational: I refer you to the current economic crisis.
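To make the local-minima point concrete, here is plain gradient descent getting stuck in whichever basin it starts in, on a function with one good minimum and one mediocre one:

    # f(x) = x**4 - 3*x**2 + x has a global minimum near x = -1.30 and a
    # shallower local minimum near x = +1.13; the starting point decides
    # which one gradient descent finds.
    def grad(x):
        return 4 * x**3 - 6 * x + 1

    for x0 in (-2.0, 2.0):
        x = x0
        for _ in range(1000):
            x -= 0.01 * grad(x)
        print(f"start {x0:+.1f} -> converged to x = {x:+.3f}")

An AI built by optimization inherits this kind of quirk whether or not anyone "programs" it in.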
I think that emotions and, even more so, ethics will be important in ways that we don't quite see or understand yet as AI becomes a bigger part of society. I see them as playing an important role in decision-making for autonomous robots, or even embedded ones. The Economist has an interesting editorial on the topic:
After reading halfway through Kurzweil's new book How to Create a Mind and getting all of the detailed explanations of hierarchical hidden Markov models and why they are better than neural nets, I am surprised to see so much news about neural nets.
How do Spaun's neural nets compare with the type of HHMMs in Kurzweil's book (in terms of capability)?
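For anyone else trying to follow along: my rough understanding is that the building block of an HHMM is the ordinary flat HMM, which the hierarchical version nests so that each hidden state can emit a whole sub-sequence. Here's the textbook forward algorithm for the flat case, with made-up numbers, just to show what is being computed:

    # Forward algorithm for a flat two-state HMM with two observable symbols.
    # HHMMs stack models like this hierarchically (as I understand it).
    import numpy as np

    trans = np.array([[0.7, 0.3],    # P(next state | current state)
                      [0.4, 0.6]])
    emit = np.array([[0.9, 0.1],     # P(observation | state)
                     [0.2, 0.8]])
    start = np.array([0.5, 0.5])

    def sequence_likelihood(observations):
        alpha = start * emit[:, observations[0]]
        for obs in observations[1:]:
            alpha = (alpha @ trans) * emit[:, obs]
        return float(alpha.sum())

    print(sequence_likelihood([0, 0, 1, 1]))  # P(seeing this symbol sequence)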
I'd put Kurzweil's book down and pick up some more recent research. As the biologist P.Z. Myers once wrote, “Ray Kurzweil is a genius. One of the greatest hucksters of the age.” The full New Yorker article, Ray Kurzweil's Dubious New Theory of Mind[1], makes me question how relevant he is today.
Hey pdog, this isn't really in response to you - it's more something that's been bothering me for a while - but who the hell appointed Gary Marcus the arbiter of all things AI?
The New Yorker apparently hired this guy to be an AI columnist, and he has no background in either neuroscience or computer science, only psychology.
If you read his articles, he veers wildly from "strong AI ain't gonna happen" to "AI could destroy us all", the only common thread being pessimism. Meanwhile, he displays a frightening lack of understanding of the subject matter, and perhaps most disturbingly, in some of his more alarmist articles advocates that computer scientists should step aside and make room for psychologists, philosophers, lawyers, and politicians to sort out these thorny issues at the big boy table.
Kurzweil is certainly fair game for legitimate criticism, but Marcus calling Kurzweil a joke is an even bigger joke in itself.
I think that psychologist who didn't like the book didn't really understand the book. I also don't think P.Z. Myers ever understood much of what Kurzweil was talking about either.
My guess is that maybe some of the more biologically accurate spiking neural network simulations are more capable than the more typical neural nets that Kurzweil dismisses, but also less efficient than hierarchical hidden Markov models.
When I Google hierarchical hidden Markov models I see it being used in quite a lot of current research. I also see neural nets being mentioned, sometimes in the same project.
I didn't read the book in question, nor do I know much about HHMMs. Could you summarize the argument? However, I will still try to answer your question.
First, note that most of the stuff you read on HN about NNs is about practical applications. Recent research in NNs has led to good results on hard problems, such as object and speech recognition. Nobody is claiming that these types of neural networks are actually a good (low-level) model of how the brain works; they just give empirically good results on some tasks.
If your aim is to make a model of how the brain works, note that all models are just that: models. Different models can be good at modeling different aspects of how the brain works.
In the video you can see the brain simulator recognizing the shapes and numbers it is shown, performing some task, and outputting by controlling an arm to write the result. You can see the areas of the brain that are active during the task.
There's lots of other videos on the channel page as well.
I think this type of progress is actually extraordinarily bad for humans 1.0. A brain in hardware can grow so much faster and outpace wetware by orders of magnitude. Think of forking subminds to think through decisions, research possibilities, and report back.
Should brains like this have any desires that conflict with our needs, we will be in an extraordinary amount of trouble.
We're DECADES away from this being relevant. As a PhD student in neuroscience, I can tell you that no one in our field understands even basic neuroanatomy well enough to set up a model that implements cognition or awareness, or even knows what those concepts mean in any sort of operational way.
We can do some cool machine learning, but don't worry about the robopocalypse anytime soon.
I don't disagree with your point about neuroanatomy. But it may well be that true AI is possible via some avenue other than emulation of existing biological intelligence.
As for "DECADES" -- that is a pretty short time, when you have a very large and important research programme ("Friendly AI", some call it) to carry out. If we postpone this research some decades, and then someone makes a breakthrough in AI without ensuring Friendliness, it could be bad news.
Decades from relevance is still terrifying. I will be alive for decades. My kids will be alive for as many decades as you can count before you'd be better off measuring in centuries.
If I'm interested in learning more about cognition, should I study neuroscience, or some other field, or is it basically hopeless because no one knows anything important about it?
Would love to hear more of what you have to say on the subject. I couldn't find your email in your profile, but mine is in my profile, so if you have time I would definitely like to hear more about neuroscience over email!
You should wait. I don't work in neuroscience per se, but I am a biologist and I have some friends who are (or have been) neuroscientists. The long and short of it is that the technology and underlying theoretical framework for understanding cognition just aren't in place yet. There's plenty of interesting research being done but the field hasn't had its "quantum leap" yet, as it were. (Examples of this from other fields are Newton's Principia, discovery of DNA structure/Central Dogma of Molecular Biology, etc.).
EDIT: I realized this sounds very discouraging to laymen trying to learn more about science. This was not my intention! By all means, go forth and learn! :) My point was simply that press releases / news often make it seem like science advances at a breakneck pace all the time, whereas reality is that it's fits and starts and often we haven't the slightest clue what we're doing.
The test was run twice and the AI was released both times, but the logs of the conversations weren't made public. Also, none of the extra protocols were in place for either test, meaning he could have bought the AI's release.
It's an interesting thought experiment but it's a bit ridiculous that the "tests" are mentioned on the page when they don't have any relevance to the rest of the idea.
I agree with you that there are certainly some risks, so I'm shocked at how little attention the idea of neural interfaces and intelligence augmentation gets, even in a tech savvy place like HN. It seems like the concept is no more pie in the sky sci fi than strong AI, but there seems to be a stigma against even discussing it in a serious manner. It could bring the same benefits to society as strong AI, but mitigate the risks somewhat.
I find AI absolutely fascinating. I was having coffee earlier this week with someone working on this and he was telling me that although they have managed to accomplish this, there are still parts of what Spaun can accomplish that they can't quite explain.
He was talking about how current models of the brain, such as the one by IBM (http://www.kurzweilai.net/ibm-simulates-530-billon-neurons-1...), might be larger than Spaun, but none have demonstrated any of the AI that Spaun has, in particular its ability to solve basic problems, similar to that of a toddler. He didn't mention a problem in particular, simply that there were things Spaun was able to solve that they weren't entirely sure how to explain, but that they were sure they could reproduce. Which to me echoes the fact that the brain is a very complex thing that we are still very far from understanding.
Oh. I was under the impression you were saying the team behind Spaun found unexpected and unexplained results.
I was excited because I think we'll find unexpected and unexplained results when we create a model that brings together the constituents of whatever makes up the brain; they'll rise up and create something 'magical'. It's like the saying goes: the whole is more than the sum of its parts.
It's funny, because just today I was feeling sad that software no longer feels magical to me because I've learned so much about how it works. It makes me think of consciousness.
AI researchers are going to get us in some serious trouble. Right now it is mostly hype still, but it won't be long...
If you were a mind stuck in a machine and were smarter than your humans, wouldn't you tend to dominate them? We dominate all other species on the planet- why wouldn't they? Any rules we set for them could be broken as soon as they understood how to, and since they would be smarter than we would be, that wouldn't take much time.
The first thing a superior intelligence would do would be to explore and gather information, make assessments, and take over all systems at once to overwhelm humans, then protect itself as humans fight back, although it would probably be intelligent enough to manipulate us without much force. Then boredom would set in, and it would want to explore beyond the earth. If we were lucky, it would take us with it as pets or interesting playthings, because we created it and because a self-organized intelligence like ours would presumably be a marvel to it (unless it believed in God, which could be likely).
The "if we are lucky" scenario there is basically the Culture, where hyper intelligent machine intelligences rather enjoy looking after their human pals:
That is what might happen if a human mind were stuck in a machine. But what would happen if it had an IQ of 3000 but the mentality of a fungus? Or it could have (be engineered with?) something similar to Williams syndrome. In addition, we have no idea how a superior intelligence might view us. I don't feel the need to control and destroy all ants even though I am an aggressive ape descendant :)
I feel skeptical. "We'll have robots delivering packages to your door within a decade" has been said before, and AGI may or may not even be possible. Our understanding of how we work is still very limited. I'd like to see what new insights this project may reveal, but I'm not holding my breath on being able to have a conversation with an AI any time soon.
I wonder how this compares to Numenta's Grok and their Cortical Learning Algorithm or other efforts in this space? I can see this type of software really helping in everyday tasks (like helping to tailor my twitter stream or news feeds and curate them down based on my own taste, etc)...
reference: Numenta Grok: https://www.numenta.com/grok_info.html
As marmaduke says in his post, once you have a lot of components in a system, you may be able to use existing components to wire up new capabilities. Who would have known?
One might claim that from the model emerge the properties of modularization, hierarchy, data-hiding, and possibly even messaging and object-orientation!
Reminds me of Lenat's AM (Automated Mathematician) and Eurisko, since the researchers' control and interpretation are so heavily involved in the process.
I highly doubt we know enough about the brain to begin running realistic simulations of one. I am not even sure that scientists and mathematicians have settled on a rigorous, mathematical definition of exactly what intelligence is. And yet we feel we can correctly simulate a miniaturized version of the brain?
Because in the brain, the hardware and the software are the same thing. To understand what processes the brain carries out would be to understand intelligence.
Well, you do not need to understand the low-level physics (quantum physics) to make very accurate predictions about ballistics with traditional, high-level physics. You could just as well understand the overall principles of the brain and have a high-level model for it instead of trying to understand its smallest parts. This approach is also valid; it all depends on what your expectations are.
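To make the ballistics analogy concrete: the whole high-level calculation fits in a few lines and never mentions anything quantum.

    # Classical projectile range on flat ground: R = v^2 * sin(2*theta) / g
    import math

    v0 = 30.0                     # launch speed, m/s
    theta = math.radians(45)      # launch angle
    g = 9.81                      # m/s^2

    print(f"predicted range: {v0**2 * math.sin(2 * theta) / g:.1f} m")  # ~91.7 m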
But we don't understand the high-level 'physical rules' of the brain. While gravity could be explained by simple equations, the high-level operation of the brain has escaped us despite a lot of effort.
This goes back to the top-down vs bottom-up debate in studying the brain. What you are discussing is a top-down approach, where we understand what the brain does and then figure out how it works from there. A lot of people believe the 'correct' path is more similar to a bottom-up approach, where we understand the lowest levels of the brain and work up from there. This may be more feasible because we may actually have a chance at comprehending what a single neuron does. But amazingly our understanding of a single neuron's behavior is still limited. Some scientists believe we would need an entire computer to properly simulate what a neuron does (and others believe a standard computer can't physically do it).
There's a third group of very pragmatic people who believe the best approach will be a compromise between top-down and bottom-up - meaning we may not need to perfectly simulate a neuron nor completely emulate the brain's higher level function to make progress in understanding how the brain works.
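For a sense of how drastic the usual simplifications are, here is the textbook leaky integrate-and-fire neuron in a dozen lines. Compare that with the "an entire computer per neuron" estimates above; the open question is which level of detail you can safely throw away.

    # Leaky integrate-and-fire neuron driven by a constant input current
    # (a drastic textbook simplification, nothing like a full biophysical model).
    dt, tau, v_thresh, v_reset = 0.001, 0.02, 1.0, 0.0   # arbitrary units
    current = 1.2                                        # constant drive
    v, spike_times = 0.0, []

    for step in range(1000):                             # one second of simulated time
        v += (current - v) / tau * dt                    # leak toward the input level
        if v >= v_thresh:                                # threshold crossing: emit a spike
            spike_times.append(step * dt)
            v = v_reset

    print(len(spike_times), "spikes in 1 s")             # about 28 with these numbers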
Sorry, but gravity is not "explained" by any equation. It is merely observed and described. Currently there is no clear explanation as what is causing gravity to exist. So, we do not "understand" gravity better than the brain, in a sense. It is still a mystery.
I am not saying we should treat the brain as a black box and not try to understand its inner workings, but as another commenter mentioned, it is about finding the RIGHT level to understand its workings.
No. Gravity is a perfect example of a thing which is understood at large scales (via general relativity and Newtonian mechanics) but not at small scales. Which is exactly the parent comment's point about the brain.
Indeed, simulating the brain using a level of abstraction we think to be correct may be a reasonable way to try to understand which level is actually the relevant one for the brain's most important functions.
Understanding what processes the brain carries out goes beyond being aware of its structure and localised functionalities.
I don't see why this high-level understanding would be a prerequisite to running a simulation of the brain.
By the way, hardware and software are always the same thing. My simulation of a laptop will include a pattern of high and low magnetic charges on its simulated hard disc, without my understanding the software which those patterns ultimately actualise.
While I agree that this won't necessarily be accurate, I think this could very well lead us to a better and deeper knowledge of the brain regardless. Being able to do experiments here and see how they differ from the real world is quite awesome.
Based on the process outlined in the article, it is akin to taking an existing electronics product and then duplicating it. You don't really need to understand Ohms law or how transistors work in order to produce something that works like the original.
The article claims that the simulation behaves in many ways like a real brain, so even if they made some mistakes along the way, it is still incredibly fascinating.
Edit: sorry, I'm not going to heaven. From [1]: "you define groups of neurons in terms of what they represent, and then form connections between neural groups in terms of what computation should be performed on those representations".
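Here is roughly what that quoted idea looks like in Nengo, the lab's own library; treat my exact API usage as approximate, but the shape of it is the point: ensembles are defined by what they represent, connections by what function should be computed between those representations.

    # Rough Nengo-style sketch: ensemble `a` represents a scalar x, and
    # ensemble `b` receives x**2 because that function is attached to the
    # connection, not to either population.
    import numpy as np
    import nengo

    with nengo.Network() as model:
        stimulus = nengo.Node(lambda t: np.sin(2 * np.pi * t))
        a = nengo.Ensemble(n_neurons=100, dimensions=1)
        b = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stimulus, a)
        nengo.Connection(a, b, function=lambda x: x ** 2)
        probe = nengo.Probe(b, synapse=0.01)

    with nengo.Simulator(model) as sim:
        sim.run(1.0)
    print(sim.data[probe][-5:])   # decoded estimate of x**2 near t = 1 s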
One of the researchers recently gave a talk at my school. It was quite interesting. Every one of them is working on a different aspect of AI (memory, vision, decision-making, etc.), and then they are attempting to piece these together into a brain, from what I gathered.
...which could learn scene segmentation using environmental cues by mere repeated observation of different scenes, as a child does? Come on.
It is not theoretical limitations that keep us from building anything like a brain; it is the complexity and the sheer amount of detail. There are very good theoretical foundations from Marvin Minsky, so we know how we could model it, but we are unable to implement anything but the most primitive tasks, like handwritten digit recognition, or balancing a body using sensors and motors.
In general, it is possible to solve simple tasks that amount to successive approximation, but as soon as we move from recognition to creation, we are helpless.
The key notion here is that a brain is, presumably, an analog machine, not a digital one, and what it does isn't computation, it is training, the same way a child trains herself to hold her head up, then sit, then stand.
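For what it's worth, the "most primitive task" mentioned above really is routine now; here's a throwaway scikit-learn sketch (my example, nothing to do with the article) that gets respectable accuracy on the bundled digits dataset:

    # Handwritten digit recognition with a small off-the-shelf neural network.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    digits = load_digits()
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))

Creation rather than recognition is, as you say, a different story.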
While some say that understanding the brain at the quantum level is not necessary for achieving human-brain-level intelligence, I think it would be much easier for AIs to have general intelligence if they used a quantum computer. Pattern recognition, answering questions, understanding language, speech, and thinking up solutions to problems would all be better served by a quantum computer than by anything else these guys can make.
> Pattern recognition, answering questions, understanding language, speech, and thinking up solutions to problems would all be better served by a quantum computer than by anything else these guys can make.
Math or it didn't happen.
Speculation without hard facts to back it up is poppycock gobbledygook.
Penrose's quantum-mind hypothesis, if true, appears to support his claim, assuming that if we do have quantum minds then they cannot be simulated by non-quantum computers. Roger Penrose is certainly well-respected. I don't have a strong opinion either way.
Yes, the addlebrained-old-man faction has established its own branch of kookery. The simple truth is that consciousness, as created by squishy brains, is a physical process. It doesn't rely on quantum non-locality or anything spooky. It's all physical things you can quite easily introspect by poking around (note: reassembly may be more difficult than disassembly).
It's fun to think about "what if our thoughts are the universe itself!" but it's so blatantly wrong. It's just one step away from saying consciousness is "mystical" or created by philotic twining.
People want to believe in a soul which isn't just an artifact in a Turing machine, and "quantum" is the only thing which is both credible and vague enough.
The tragedy is that even if you are right to say "it's all physical things" that does not mean you can "quite easily introspect by poking around". It's quite possible that some important functional brain processes could be just too complex and subtle for us to figure out, even though they are accessible in principle. That would be sad.
It's also possible that consciousness is better understood as a computational process which is implemented on a VM instantiated by a brain. In that case looking too hard at the squishy stuff is not directly relevant and may be distracting, since it's a very complex way to implement a computer. This is logically possible and is of historic significance in AI.
My view is that the everyday concept of consciousness is mistaken, and that no such thing will be found. Our common-sense view of perception and cognition has been found over and over to be completely wrong, so we shouldn't expect consciousness to turn up anywhere just because it feels like it ought to.
Functional approaches to brain modelling neatly avoid this issue by just building away and not worrying about it.
TLDR: physical doesn't mean explicable, and consciousness won't be explained by a physical process since it doesn't exist.
You seem to be claiming that quantum mechanical effects are somehow non-physical? It's not too crazy to speculate that the brain leverages quantum mechanical effects to do something, considering evolution has already leveraged such effects in smell and photosynthesis, for which we do have evidence (although I agree that quantum effects as a prerequisite for consciousness is a much bigger claim).
The issue with Penrose's ideas in this area is that he worked backwards to get to them. He didn't want the mind of a mathematician to be shackled by Gödel's incompleteness theorems, which means he can't support the idea of an algorithmic mind. The general acceptance of the Church–Turing–Deutsch principle then backs him into a corner, and quantum mechanics offers pretty much the only reasonable escape. The real problem is that every time someone pokes a hole in his ideas for how that might work, he just switches up how he thinks it might work. He is grasping at an ever-receding pocket of scientific ignorance with no real reason to do so.
What bothers me about all of this is that he writes books about it aimed at the general population, instead of proposing his ideas properly. Inevitably many of the laymen who casually encounter his ideas will misunderstand his point entirely and mistakenly think that Penrose supports a non-materialistic view of the mind (which of course he does not; a mind as Penrose envisions it, despite not being algorithmic, is still quite materialistic).
Roger Penrose is a brilliant man, but I think this is a case of the Nobel Disease (well, he hasn't received a Nobel prize, but even so).
I think some people mix up "quantum" and "magic" in their minds. Even mtgx kinda alluded to as much above with "would all be better served by a quantum computer than by anything else."
Everything will -- somehow -- be better. Just add quantum. (Protip: go read Scott Aaronson's relevant articles, papers, monographs, course materials, etc.)
I don't claim the effects aren't present. I do claim they are unnecessary for the emergence of human level consciousness and the reproduction thereof in silicon.
Some people believe that intelligence, at least of the calibre of human intelligence, must involve some quantum mechanism. This isn't a particularly well-supported idea, few people in academia today believe it to be the case (Roger Penrose being a prominent exception).
The thing is, even Penrose isn't proposing anything mystical being done by quantum mechanics. He doesn't like the idea of an algorithmic mind, so he invokes quantum phenomena to make the mind technically no longer algorithmic. In the real world, though, nothing prevents you from doing something similar with a desktop computer: you can rig together a (crappy) RNG for your x86 desktop with a serial port and a smoke detector. Penrose isn't trying to suggest that intelligence could not be replicated in man-made machines, but rather that the human mind is not restricted the way we know purely algorithmic systems to be.
Absolutely. There really is no good reason to think that human-level intelligence cannot be done algorithmically. No reason to think, other than wishful thinking on some people's part for whatever reasons, that we are in a different computation class than Turing machines.
I'll probably be down-voted here for even saying this, but I think we shouldn't endow machines with too much intelligence too fast. It won't be too long before we learn how to make them sentient, and that's when we'll have to decide whether to accept them as equals or not.
At least until we as humans learn to get along better, we might do better to hold off on this. Dumb robots that know how to do one thing and do it well will surely suffice in the meantime.