Stanford Brainstorm Chip Hints at Neuromorphic Computing Future (nextplatform.com)
59 points by rbanffy on April 10, 2017 | 47 comments



I'm from the lab working on this platform. A lot of people in this thread are claiming that there is no use for these spiking neural nets and I have to disagree.

Brainstorm can run Spaun, the world's largest functioning and behaving brain model. [1]

It can also run adaptive control circuits, which give unprecedented adaptability in arm control under undefined forces, such as changes in arm length, the medium the arm is acting in, and weights applied to the arm. [2]

For an overview of how programming with spiking neural networks differs from conventional programming, see this blog post I wrote almost a year ago. [3]
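
To give a flavour of what that looks like in practice, here's a minimal sketch in Nengo, the Python framework our lab builds (a toy example of my own, not from the post): you declare populations of spiking neurons and the functions you want computed between them, and the framework solves for the connection weights.

    import nengo
    import numpy as np

    # Toy example: compute x**2 with populations of spiking LIF neurons.
    # You specify *what* should be computed; Nengo solves for the weights.
    model = nengo.Network()
    with model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # input signal
        a = nengo.Ensemble(n_neurons=100, dimensions=1)     # spiking population
        b = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, a)
        nengo.Connection(a, b, function=lambda x: x ** 2)   # decode x**2 from a's spikes
        probe = nengo.Probe(b, synapse=0.01)                # filtered spike output

    with nengo.Simulator(model) as sim:
        sim.run(1.0)
    # sim.data[probe] is a noisy, spike-based estimate of sin(2*pi*t)**2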

Finally, framing spiking neural networks in opposition to Deep Learning networks is inaccurate and harmful. We're currently investigating biologically plausible backprop and have already converted ConvNets to spiking nets for significant power savings [4]. Personally, I'm looking forward to an integration of biologically plausible networks with Deep Nets that theoretically unites all the approaches to brain modelling, so we are no longer limited by metaphors [5].

[1] http://science.sciencemag.org/content/338/6111/1202

[2] http://rspb.royalsocietypublishing.org/content/283/1843/2016...

[3] https://medium.com/@seanaubin/a-way-around-the-coming-perfor...

[4] https://arxiv.org/abs/1611.05141

[5] https://medium.com/@seanaubin/deep-learning-is-almost-the-br...


Hi Sean, perhaps you can help us understand the claim that analog hardware SNNs are more energy efficient than analog hardware non-spiking networks (e.g. based on floating gate transistors [1])?

Also, in the paper [2], Table 4 is a bit confusing: what is the "traditional hardware" they're referring to?

Finally, it's not clear from that paper, or from this "Brainstorm" article, that you actually have a working chip capable of running AlexNet. Is that so? What is the current status of the actual hardware?

[1] https://arxiv.org/abs/1610.02091

[2] https://arxiv.org/abs/1611.05141


I'm not sure it's fair to compare the Brainstorm chip to the paper you linked, since they take different inputs (binary vs. continuous) and implement different systems (purely feed-forward vs. recurrent dynamical systems). But the basic reason I would expect rates to require more energy than spikes is simple math: energy is the area under the power curve, so intermittent spikes consume less than a continuously transmitted rate signal. There are other advantages to spikes over rates, such as robustness to noise and failure.
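
To put rough numbers on that picture (purely illustrative, with made-up figures): a line that is only briefly driven during sparse spikes integrates to much less energy than a line driven continuously.

    import numpy as np

    # Illustrative energy comparison (made-up numbers).
    # Energy = integral of power over time.
    dt = 1e-6                                   # 1 us timestep
    t = np.arange(0, 1.0, dt)                   # one second
    p_active = 1e-3                             # 1 mW while the line is driven (assumed)

    rate_power = np.full_like(t, p_active)      # continuously driven "rate" line

    spike_power = np.zeros_like(t)              # sparsely driven "spiking" line
    spike_rate, spike_width = 50, 1e-3          # 50 spikes/s, 1 ms each (assumed)
    for k in range(spike_rate):
        start = int(k / spike_rate / dt)
        spike_power[start:start + int(spike_width / dt)] = p_active

    print(np.trapz(rate_power, t))              # ~1e-3 J
    print(np.trapz(spike_power, t))             # ~5e-5 J: smaller by the duty cycle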

In table 4, I think they're comparing to GPU hardware, since that's what he was running everything on.

I think the current status of the hardware is that prototypes are being produced. The most recent paper [1] used a circuit simulated in SPICE, so I'm guessing they're pretty close to production? I'm not sure, because: a) I haven't heard anything from my lab-mates in a while; b) even if I had, I'm not sure I'd be allowed to talk about it. Hardware development is far more secretive than software development.

[1] http://compneuro.uwaterloo.ca/files/publications/voelker.201...


Sure, let's look at the "area under the curve". What would that area be to perform the equivalent of a weighted sum followed by a non-linearity, for the best spiking network? Note that we must ensure the same precision as when using the analog "continuous rate" signals; if the SNN produces less accurate results, then it's really an apples-to-oranges comparison. Don't forget to take into account both static and dynamic power. Also, there's a question of speed: can you build an SNN chip which runs faster given the same power budget and the same target accuracy?

If they are indeed comparing the SNN to a GPU in that paper, then it's just plain horrible! 3 times better efficiency than a 32-bit floating-point digital GPU (which is probably 2-3 generations old by now)? That's not going to impress anyone. AFAIK, the best digital DL chips are already at least 10 times more efficient than the latest GPUs, and analog DL chips claim at least another order of magnitude improvement. In my opinion, that's still not enough: if you want to build an analog chip (spiking or not), it had better be at least 1000 times more power efficient than the best we can expect from Nvidia in the next couple of years. Otherwise it's just not worth the effort (and inflexibility).

So, the bottom line: until there's a working chip capable of running AlexNet, we have no guarantees about its overall energy efficiency, speed, accuracy, or noise robustness. Simulating a tiny portion of it in SPICE does not really provide much insight. When the chip is built and working, then we can compare it to the best analog "continuous rate" chip running the same model, and only then will we be able to see which one is more efficient. Until that time, any claims that spikes are more efficient are unsubstantiated.

On the other hand, if you can devise an algorithm which is uniquely suited to spiking networks (biologically plausible backprop, or whatever), then sure, it's quite possible that you will be able to do things more efficiently. So, my question is: why try mapping DL models, trained on "traditional hardware", to SNNs that weren't designed to run them? Why not focus instead on finding those biologically plausible algorithms first? If your goal is to understand the brain, wouldn't it be more reasonable to continue experimenting in software until you do? Why build hardware to understand the brain? That's not a rhetorical question; perhaps there are good reasons, and I'd like to know them.


That's totally fair: wait for a comparison until there's actual hardware produced, especially for comparisons exclusively involving DL models. My initial "area under the curve" argument was rhetorical and not sufficiently empirically founded.

> If they are indeed comparing SNN to a GPU in that paper, then it's just plain horrible!

Yeah, the paper says this is preliminary and that bigger savings are expected for video. However, I concede, as before, that without the analog hardware built there isn't much point discussing this.

> So, my question is, why try mapping DL models, trained on "traditional hardware" to SNNs, which weren't designed to run them? Why not focus instead on finding those biologically plausible algorithms first? If your goal is to understand the brain, wouldn't it be more reasonable to continue experiment in software until you do? Why build hardware to understand brain?

Eric Hunsberger [1] has been doing most of the work in this domain, so I'm going to be awkwardly paraphrasing my conversations with him. Eric wanted to make Spaun's [2] vision system better. To do that, he knew he was going to need ConvNets, or at least build something off of them. So he started to see if he could bring ConvNets into the domain of SNNs to understand them better. Once he did that, he started looking into whether he could train the SNN ConvNets using biologically plausible backprop [3], which is where he's at now.
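
Roughly, the trick in that line of work (paraphrasing from memory) is to train with a smoothed ("soft") LIF rate curve so backprop has well-behaved gradients, then swap in actual spiking LIF neurons at run time. A sketch of the two rate curves; the time constants here are typical defaults, not the paper's exact values:

    import numpy as np

    # LIF firing-rate curve and a smoothed ("soft LIF") version of it,
    # the kind used to train ConvNets that will later run with spiking neurons.
    tau_rc, tau_ref = 0.02, 0.002          # membrane / refractory time constants (s)

    def lif_rate(j):
        """Steady-state LIF firing rate for input current j (hard threshold at 1)."""
        j = np.asarray(j, dtype=float)
        out = np.zeros_like(j)
        active = j > 1
        out[active] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (j[active] - 1)))
        return out

    def soft_lif_rate(j, gamma=0.02):
        """Smoothed LIF rate: differentiable near threshold, so backprop behaves."""
        j = np.asarray(j, dtype=float)
        z = gamma * np.log1p(np.exp((j - 1) / gamma))   # smooth stand-in for max(j - 1, 0)
        return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / z))

    currents = np.linspace(0.5, 2.0, 7)
    print(lif_rate(currents))              # zero below threshold, then kicks in
    print(soft_lif_rate(currents))         # smooth near threshold, matches LIF far from it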

That's really only one branch of our research. To understand the brain, we have a lot of quite different methods that are more based in Symbolicism, Bayesianism and Dynamicism. We do start in software [2], but software is slow, even on a GPU. When we get faster hardware, we're able to explore the algos more quickly. Also, we got funding from the Office of Naval Research to build and explore analog hardware, so that's the direction this project is investigating.

To summarize, the DNN-to-SNN-adaptation algos aren't the only thing targeting this hardware; they're a small slice of a family of algos that are defining the requirements of the hardware.

(I hope this post confirms that I understood and accept your argument and isn't me having a case of "must have the last word"-ism)

[1] http://compneuro.uwaterloo.ca/people/eric-hunsberger.html

[2] https://nengo.github.io/

[3] http://cogsci.stackexchange.com/q/16269/4397


I appreciate your response, especially the backstory of running a convnet on SNN hardware.

A couple of remarks:

1. Your own response to your stackexchange question: "The random synaptic feedback weights only accomplish back-propagation through one layer of neurons, thus severely limiting the depth of a network."

Didn't they show in the paper how it could work through multiple layers?

2. "software is slow, even on a GPU. When we get faster hardware, we're able to explore the algos more quickly"

I'm not sure what you're referring to by "faster hardware", but somehow I doubt you will beat a workstation with 8 GPUs in terms of speed, if your goal is to explore algos more quickly. More importantly, what if the next algo you want to explore does not map well to the hardware you built? For example, what if we realize that relative timings between spikes are important, and most of the computation is based on that, but your hardware was not designed to exploit these "race logic" principles? Suddenly your custom system becomes much less useful, while your GPUs will simulate that just fine. There's a possibility that instead of exploring the best or most plausible algorithms, you will limit yourself to algorithms which map well to your hardware.

3. What do you think about HTM theory by Numenta? They strive for biologically realistic computation, but they don't think spikes are important, and abstract them away in their code.

p.s. the reason I'm involved in this discussion is I'm trying to decide whether to accept an internship offer to work on Bayesian inference algorithms for SNN chip.


(When does this thread get closed by Hacker News as being too old? If/when it does is there somewhere public you want to continue the discussion? If you'd like we can move it to the Nengo forums at forum.nengo.ai)

1. Dammit. I misread that paper. You're totally right that they do show it works for multiple layers.

2. By faster hardware, I mean neuromorphic hardware such as BrainScaleS and SpiNNaker. The software we use, Nengo, is pretty dependent on the speed of a single GPU, since it's really hard to split the networks across multiple GPUs. You're right that there's always the possibility that our newer algorithms won't map well onto the specialized hardware we've built. The reasons we think we're ready to at least implement a few hardware prototypes are:

- The principles that underlie our algorithms, the Neural Engineering Framework, have been around for 15 years and are pretty mature. The software built to support these principles, Nengo, has been through six re-writes and is finally pretty stable.

- Some of the hardware implementations are general enough that they can handle pretty drastic changes in algorithms. For example, SpiNNaker is just a hexagonal grid of stripped-down ARM chips.

- Even if the hardware ends up limiting what algorithms we can implement, we can probably re-use a lot of the design to implement whatever the new algorithms require.

3. I've been meaning to investigate the HTM theory of Numenta for years (they were actually the first people to get me excited about brain-motivated machine intelligence), but never got around to it. I'm also super unclear on the relation between HTM, the Neural Engineering Framework and the newer theory FORCE. I'll write a question on cogsci.stackexchange.com to motivate myself to dig in.

Where is the internship happening? Will you be working with Sugandha Sharma [1]? She's my lab-mate working on that using the Neural Engineering Framework.

[1] http://compneuro.uwaterloo.ca/people/sugandha-sharma.html


Sure, let's move to Nengo forums. Do you mind creating a topic there and sending me a link?

I actually don't know much about spiking NNs (software or hardware), but Spaun seems like the only real competitor to HTM. Unlike Spaun, Numenta's algorithms are not currently limited by computational resources, because they focus on a rather small part of the neocortex (2-3 layers of a single region, working with tiny input sizes), and they abstract spikes away. Numenta claims that if we understand what spikes are doing (computation and information transfer), then there's no need to emulate them exactly; we can construct algorithms which do the same thing using traditional computations and data structures. Instead, HTM aims to understand what the layers do and how they interact.

There has been an interesting discussion on the Numenta mailing list with James Smith, who is developing his own spike-focused theory of neural computation: http://lists.numenta.org/pipermail/nupic-theory_lists.nument...

I believe he has been working on it since, and plans to publish a book.

The internship offer is from HRL, a small company in Malibu. It's quite possible that they are looking at the ideas of your lab-mate, and they might even ask me to implement them. I'm still deciding though.


Here's the link to the new topic. I'll let you start the discussion there to make sure I've transferred between contexts correctly.

https://forum.nengo.ai/t/the-usefulness-of-neuromorphic-hard...


I still dislike the fact that neuromorphic designs are always funded and heralded as the next coming of Turing when they have no results to back it up.

Take ConvNets: they're ruthlessly empirically validated and have results to show for it. By contrast, these spiking neural nets were simply assumed to be somehow better (hey, because brains!), and people proceeded to mint chips off the idea.


Spiking neural networks do have many advantages over conventional rate-based networks. They can, for example, handle time, which a convnet cannot do without resorting to frames.

They also have a lot of disadvantages, the main one being that no one has any idea of how to use them properly yet.


ConvNets can handle sequences just as well (cf. TDNNs), you just treat time as another dimension. Whether you want to call this a "frame" or not is subject to question.

https://arxiv.org/pdf/1611.02344.pdf
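
Concretely, "treat time as another dimension" just means convolving along the time axis, with padding on the past only so the model stays causal. A toy sketch (mine, not from the paper):

    import numpy as np

    # Toy causal 1-D convolution over time: time is just another array axis.
    # x: input (channels x timesteps), w: kernel (channels x kernel_width)
    def causal_conv1d(x, w):
        c, t = x.shape
        _, k = w.shape
        x_pad = np.pad(x, ((0, 0), (k - 1, 0)))   # pad the past, never the future
        return np.array([np.sum(x_pad[:, i:i + k] * w) for i in range(t)])

    x = np.random.randn(3, 100)        # 3 input channels, 100 timesteps
    w = np.random.randn(3, 5)          # kernel spanning 5 timesteps
    print(causal_conv1d(x, w).shape)   # (100,) -- one output channel over time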

With an explicit (potentially arbitrarily large) memory component, they can also capture long-term dependencies that don't have to be maintained in, say, the convolution parameters between presentations of data. This is one thing we want to pursue with the Faiss library at FAIR.

Regardless of how something is implemented, it's the what that is important. What's the actual algorithm or algorithms that the brain uses? It's likely easier to explore this question in floating-point math than putting together a lot of spiking neuron like things and trying to cargo cult your way to the answer.


It's interesting that you used a Feynman reference here, as he was actually involved in spawning the neuromorphic field at Caltech with Carver Mead [0].

You're thinking about this as if it's purely an algorithm problem and that computer architecture should always be designed to do the algorithm's bidding.

Neuromorphic computing flips the problem around and creates an efficient architecture where the algorithms do the architecture's bidding.

The former is much better in the cloud, with massive armies of general-purpose computers, while the latter is much better on the edge for anything implantable (very little heat can be generated or else you cook the tissue) or that needs super low power (less than 1 W).

Having worked on both sides of the spectrum, I think there's room for both, and the relative research funding seems reasonable: much more invested in straightforward neural network research with GPUs etc. versus the neuromorphic field, where it's mostly a handful of Caltech folks left at places like Stanford, UCSD, GT, Johns Hopkins, or UF.

[0] https://en.m.wikipedia.org/wiki/Carver_Mead


RNNs can handle the idea of time.


Not very well, though. You still need to operate on frames because you need a time interval for capturing the firing rate that conventional networks model. As you make the interval smaller, which you need for quick reaction times, you make the network noisier and more unstable.

SNNs can work in real time.
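
A rough illustration of that tradeoff (assuming roughly Poisson spiking; the numbers are made up): estimate a firing rate by counting spikes in a window, and the estimate gets noisier as the window shrinks.

    import numpy as np

    # Rate estimated from spike counts in a window: shorter window -> fewer
    # spikes per window -> noisier estimate.
    rng = np.random.default_rng(0)
    true_rate = 40.0                          # spikes per second (assumed)

    for window in (0.500, 0.050, 0.005):      # window length in seconds
        counts = rng.poisson(true_rate * window, size=10000)
        est = counts / window                 # rate estimate from each window
        print(f"{window * 1000:5.0f} ms window: "
              f"mean {est.mean():5.1f} Hz, std {est.std():5.1f} Hz")
    # The std grows roughly as 1/sqrt(window): faster reactions, noisier rates.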


Are you implying that spiking networks are somehow faster than level based ones?

We can build analog level-based networks, and I don't see any reason why they would have to be slower than spike-based ones. As for noise, it's also unclear to me why a spiking network would be more noise-resistant or stable. Do tell.


The prevailing theory for why the brain uses spiking networks is that it is much more energy efficient to transmit "all-or-nothing" spikes versus propagating analog membrane potential across very large axonal trees. Contemporary machine learning is focused on rate-based neural networks with continuous activation levels representing the firing rate of the neurons. This makes these networks unable to measure the time difference between spikes arriving at a single neuron, an ability which is known to play a critical role in your auditory system for example. It is unknown how it is used in other parts of your brain. However, in recent years the concept of dendritic computation, defined as computation being done by the complex interaction of synaptic activations as they pass down a dendritic tree before reaching the neuron soma, has gained a lot of attention. If you do not create networks where time is a "first-class citizen" you will be totally unable to do this. Of course this might be more of an academic interest depending on your point of view.
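
As a concrete example of timing as a "first-class citizen", here's a toy coincidence detector of the kind used to model sound localisation (my own illustration, with an assumed 1 ms window): it responds only when spikes on two inputs arrive close together in time, information a pure firing-rate description simply throws away.

    import numpy as np

    # Toy coincidence detector: responds only when spikes on two inputs
    # arrive within `window` seconds of each other.
    def coincidences(spikes_a, spikes_b, window=0.001):
        hits = [ta for ta in spikes_a if np.any(np.abs(spikes_b - ta) <= window)]
        return np.array(hits)

    a = np.array([0.010, 0.050, 0.120])        # spike times on input A (s)
    b = np.array([0.0105, 0.080, 0.1195])      # spike times on input B (s)
    print(coincidences(a, b))                  # [0.01, 0.12]: those arrived within 1 ms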

The power efficiency argument from biology translates well to silicon when you want to build large structures. Here you cannot have 32-bit floats being passed around to thousands of target synapses, so you must make do with less.


That's not necessarily the point. The point, first and foremost, is that they've been unable to demonstrate good performance on any real benchmarks. You argue that they're good at things which need to represent "time", but the fact of the matter is that they're still far behind RNNs/LSTMs.

And your assertion that you "can't have 32-bit floats" and therefore we should go analog (which is what this project wants) reflects a rather poor understanding of the challenges facing computer architecture today. The problem is data movement, not computation. For example, Lawrence Livermore National Laboratory estimates that the cost of moving a 64-bit word 1 mm ON CHIP will be approximately equal to the cost of a 64-bit FLOP at 10nm.


>The problem is data movement, not the computation.

This is why I said it is a problem passing them around.

>the fact of the matter is that they're still far behind RNNs/LSTMs.

Technologies that are far more developed purely based on the number of man-years poured into them.


RNNs and LSTMs were better upon introduction.

Man-years were poured into LSTMs because they were promising and effective, not the other way around.


The way the brain computes is shaped by biological constraints. It's true that using spikes in a brain is very power efficient; however, we are discussing building computational systems in silicon, and I don't understand why people believe using spikes in silicon is more power efficient than using analog voltage levels (or currents).

I'm not sure what you tried to say about 32 bit floats: are you saying it's easier or more efficient to represent a 32 bit float using time difference between spikes than using a voltage level? it might be true, but I don't see how.


It is definitely easier to transmit spikes across a silicon chip than analog voltages or digital numbers. Analog voltages will be subject to voltage drops as they traverse the chip and require a massive number of wires, because you cannot multiplex the voltages without extremely expensive (area-wise) analog multiplexers. Transmitting digital numbers likewise requires a larger number of wires.


> It is definitely easier to transmit spikes across a silicon chip than analog voltages

Why? We are not trying to transmit spikes; we are trying to transmit information. It's not at all clear to me that encoding a signal as a series of spikes is somehow more efficient than placing a certain voltage level on a wire. More specifically, given a particular noise floor and a desired data rate, you either use binary encoding (spikes) but have to use a higher frequency, or you use a lower frequency but pack more bits into each voltage level. This would probably be affected a lot by the required frequency range.

One example comes to mind: T1 transmission lines (NRZ encoding), and DSL phone lines (QAM). I don't know which one is more power efficient, and not sure if that was even a concern there, but it would be interesting to know how they compare in terms of power efficiency, and how that changes when we increase data rates or noise levels.
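
To put rough numbers on that tradeoff (standard Shannon-Hartley reasoning, with made-up figures): capacity is C = B * log2(1 + SNR), so you can hit the same data rate with a wide band at low SNR or a narrow band at high SNR; which costs less power depends entirely on how power maps to SNR and bandwidth on the actual medium.

    import numpy as np

    # Shannon-Hartley capacity: C = B * log2(1 + SNR). Illustrative numbers only.
    def capacity(bandwidth_hz, snr_linear):
        return bandwidth_hz * np.log2(1 + snr_linear)

    # Same ~1 Mbit/s target, two encodings:
    print(capacity(1.00e6, 1.0))     # wide band, low SNR (spike-like)    -> 1.0e6 bit/s
    print(capacity(0.25e6, 15.0))    # narrow band, high SNR (multilevel) -> 1.0e6 bit/s
    # Which needs less transmit power depends on how power maps to SNR and
    # bandwidth on the physical wire -- which is exactly the open question.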

> Analog voltages will be subject to voltage drops as they traverse the chip

The spikes will be subject to exactly the same voltage drops, what's your point?

> and require a massive number of wires because you cannot multiplex the voltages without extremely expensive (area wise) analog multiplexers

I'm not sure what you mean here. What do you want to multiplex, and why spikes are better for that?


this is square waves vs sine waves, but triangle waves are where it's at.

the brain may use square waves because it is part of a strangely evolved biology which has required things like fear and adrenaline, and uses long, salty, high impedance connections.

the brain's architecture may not be the best for all jobs


We have a lot of results to back it up, as I mention in my comment on the article [1]. However, Turing machines and spiking neural nets have different goals. Spiking neural nets are distributed, analog/continuous, asynchronous and approximate. Turing machines are usually individual machines with discrete state. I discuss this further in the blog post I link to in my comment.

[1] https://news.ycombinator.com/item?id=14084846


There's nothing in this article about Turing. The claim is that Moore's law is dying.


> spiking neural networks

Move along, nothing to see here. Just a continuation of an old research direction that has nothing at all to do with the current revolution in machine learning, except in name. Human designed spiking neural networks simply do not work well at solving any real world problems.

The important research in hardware for machine learning is focused on taking designs that work, such as convolutional nets and LSTM RNNs, and mapping them to silicon in the most efficient way possible. There's a lot of really exciting stuff going on there, and none of it has anything to do with spiking neural networks.


There are thousands of people invested in the next CNN, GPU, CPU, TPU, or FPGA architecture. Having a few smart people with demonstrated experience try an idea that doesn't fit the current biases in the field is a perfectly reasonable funding allocation and great for science. If there weren't neural-net holdouts getting funding for something that didn't seem like it would work, we'd still be using SVMs or arguing about probabilistic graphical models.

If you think this approach isn't warranted, I suggest that your example of an LSTM chip is also not sensible. First, why not something simpler, like a GRU? Second, it's not compute-efficient for the memory stored, since compute scales poorly with the size of your hidden state vector due to the matrix multiply. If in a couple of months we have a new differentiable memory architecture (a la the Neural Turing Machine), all the work you did building a chip is outdated!

Starting from first principles of what you can do efficiently with current process technology, combined with an understanding of computational neuroscience, is what Kwabena's lab excels at.

It's actually very difficult to simulate the mixed analog-digital asynchronous chips coming out of Kwabena's lab or elsewhere in the industry.

There are plenty of other academic chips that fail after tapeout. Kwabena's lab has a proven track record of taping out chips that do what he claims. So from a hardware perspective, I'd rather bet on a guy who has successfully taped out dozens of chips in his lab, and have the rest stick to simulation.


The comment above yours reminds me of all the times deep learning was dismissed as useless, until computers finally became fast enough to make it actually usable. Now it is heralded as the best thing in the world.

People always dismiss everything which doesn't fit the current favorite of the month. Short term thinking, the bane of the world.


>The important research in hardware for machine learning is focused on taking designs that work

I'm sorry, but that is a really glib remark. The best working neural net is the human brain, and the current direction of neural networks has completely abandoned resembling a brain. While backpropagation has led to impressive results, we shouldn't forget that it was never found in neurobiology, and it's pretty much the antithesis of how the brain operates. In my opinion machine learning should reconcile with neurobiology, but it's too obsessed with the results backprop is giving. Frankly, all "machine learning" right now is an impressive exercise in high-dimensional differentiation using backprop.

I even remember a talk by a University of Toronto professor saying that even though neurobiologists have never found any support for large-scale backprop at the heart of learning, maybe they should look again because neural nets are working so well with it. I would say they have actually abandoned empiricism at this point.

Keep in mind that if we assume the brain to be akin to an evolutionary system like DNA, then backprop is even more egregious, because it's like saying the sunlight and the organism are conspiring to optimise; that the sun, or the sperm/egg, are getting feedback from the organism and its fitness to fine-tune how they mutate the offspring.


Given the choice between algorithms that work and algorithms that crudely mimic the brain but don't work, I'll choose algorithms that work every time.

Why waste millions of dollars fabbing chips for algorithms that we know don't work very well? It's cargo cult science, thinking that if we just build brain imitations without even understanding how the brain works then it will magically produce AI.

Spiking proponents should focus on simulation and brain measurement until they figure out how to simulate something that works. At that point we can start making chips to improve efficiency. Meanwhile, the machine learning people may end up arriving at an AI that works as well as the brain or better despite operating on different principles, and that would be just fine!


> Given the choice

Why do we have to choose ? Surely it is better to attempt both.

Throughout the history of science knowing when something does not work is just as important as knowing when something does work.


Of course we should investigate both. But when spending finite research dollars, we do have to choose. Should we fab chips for both? No, fabbing chips is very expensive. We should only fab chips for the one that works. The other can be investigated just fine in simulation.


For the most part I completely agree. However, while it might seem reasonable to presume the brain is generating consciousness, it remains a presumption - and an ill-defined one, at that.


As I mention in my comment [1], there are already a lot of designs that do work on very real world problems. Also, we've already started mapping CNNs onto the silicon for greater power efficiency. But you're right to say there's still a lot of work to do.

[1] https://news.ycombinator.com/item?id=14084846


Realtime video object recognition at 10^-5 the power draw of CNNs.

Roughly simulating the # of neurons in a human brain for $700 rather than $70M.


Perhaps someone can clear up my confusion, but I classify neural networks as just one category of algorithms. It doesn't strike me as something that will replace our binary systems anytime soon because, well, we haven't shown any systems being replaced. And I'm talking about things like an Operating System, programming language, audio driver, web browser, and so on. So I'm confused why "neuromorphic" chips are a big deal, apart from just being able to run our computer vision algorithms faster.

Additionally, while inspired by the brain, that's all they are: inspired. There are still hundreds of differences between the neural nets in our heads and the ones we build on computers. The article talks about "brain-inspired architectures that route for efficiency and performance", but aren't our own brains efficient because they literally grow new pathways on demand? I feel like this sort of flexibility is something we will never see in silicon, and I don't think we even have a computational understanding of it at the moment.


Brains do not really grow new pathways, they are more or less set in stone after development finishes.

The goal of neuromorphic chips is also not to replace operating systems for the same reason that quantum computers won't replace conventional ones. They have strong sides and weak sides.

When you say the networks are only inspired by biology, that's true, but the degree of inspiration varies hugely between implementations.

Boahen is of the "true" neuromorphic school: people who use actual analog silicon neurons. These people are found exclusively in academia, as all of industry uses only digital technology. This includes the coveted IBM TrueNorth "neuromorphic" processor.

Thus the goal of these projects is not just to create supercomputing architectures, but to create systems that mimic the way the brain works and thereby learn how it manages to perform all these complex operations.


> Brains do not really grow new pathways, they are more or less set in stone after development finishes.

Neuroplasticity and neurogenesis have been observed well into adulthood, though [1].

> Thus the goal of these projects is not just to create supercomputing architectures, but to create systems that mimic the way the brain works and thereby learn how it manages to perform all these complex operations.

Sounds like a lot of expensive guesswork to me! Good luck to them I guess.

[1] https://web.stanford.edu/group/hopes/cgi-bin/hopes_test/neur...


There have been successes from this line of research. For example, touchpads and Synaptics came from thinking about how to build analog capacitor networks [0]. There are spiking cameras that can produce significantly higher dynamic range and temporal resolution than conventional approaches [1]. Like neural networks, the field sort of quieted down in the 90s, but perhaps now is its time again.

If you view the purpose of academia as training the next generation rather than simply advancing the field, the quality of researchers and engineers coming out of Kwabena's lab is superb. They easily land at places like Intel, SpaceX, the NIH, etc.

[1] https://inilabs.com/

[0] https://en.m.wikipedia.org/wiki/Carver_Mead


Very true, but the neuroplasticity observed is markedly different from forming entirely new pathways. It is more like changing the weights of a network already in place.

The neurogenesis is also limited to the hippocampus and olfactory bulb. Your cortex does not have a steady supply of new cells.

Making these chips is really not all that expensive. Most groups produce only test chips in multi-project wafer runs, at a cost of $1-5k per chip. It is just like normal research: you have to devise experiments and then actually run them. So far there are a substantial number of successes in the domain of low-power neural networks.


The neural networks that have exploded in popularity lately are nothing like actual neurons. They're usually just elements of a matrix, and the "layers" are just matrix multiplications. "Neuromorphic" ones are still pretty simple compared to the real thing but they are much more complex than the other neural networks. They try to physically model interactions between neurons, so each neuron is constantly sending and receiving and self-tuning. It's nothing like a matrix multiplication.

Edit: It also runs efficiently on very different hardware. https://www.nextplatform.com/2017/02/15/memristor-research-h...
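
A minimal sketch of the contrast (my own, using textbook equations with assumed parameters): a conventional "layer" is one matrix multiply plus a nonlinearity, while a spiking (LIF) neuron is a small dynamical system that has to be stepped through time.

    import numpy as np

    # Conventional ANN layer: a single matrix multiply plus a nonlinearity.
    def dense_layer(x, W, b):
        return np.maximum(0, W @ x + b)            # ReLU(Wx + b)

    # Spiking LIF neurons: state (membrane voltage) stepped through time.
    def lif_step(v, current, dt=0.001, tau=0.02, v_thresh=1.0):
        v = v + dt * (current - v) / tau           # leaky integration
        spikes = v >= v_thresh                     # all-or-nothing events
        return np.where(spikes, 0.0, v), spikes    # reset after a spike

    v = np.zeros(100)
    for _ in range(50):                            # an SNN has to be simulated in time
        v, spikes = lif_step(v, current=np.random.rand(100) * 2)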


Indeed we do not have a good, computational understanding of what makes brains efficient. It's plausible that growing new pathways could contribute to the efficiency of the brain, but is that the only reason the brain is efficient? What about the brain's spiking communication? What about the synaptic and dendritic dynamics? Are any of these "features" useful or just vestigial byproducts of evolution? Without that computational understanding, it's hard to make satisfying arguments one way or the other about what exactly makes the brain efficient.

You can cast neuromorphic chip development as a path to realizing that understanding. Recall how convolutional neural networks only realized their heyday once the hardware resources to implement them (i.e. GPUs) were readily available. Once GPUs made running CNNs practical, people found all sorts of new ways to understand and use neural networks. In other words, the computational model and the implementing hardware must be co-developed before something actually useful is possible.


As I mention in the blog post I link to in my comment [1], the idea isn't to replace.

As for whether spiking neural nets are true representations of the human brain, I think it comes down to choosing the right level of abstraction for the problem. Should your model include neurogenesis? Should you stop at the quantum level? I think the answer is that it depends. [2]

[1] https://news.ycombinator.com/item?id=14078487

[2] https://www.sciencedirect.com/science/article/pii/S095943881...


Kwabena Boahen spoke at the Stanford EE Computer Systems Colloquium on April 5. See http://ee380.stanford.edu for the series schedule, http://ee380.stanford.edu/Abstracts/170405.html for the abstract and links, and https://youtu.be/vHlbC74RJGU for the talk published to YouTube.


"Brain-like" can mean either of two things, the circuit approach or the algorithm approach. 1) Circuits: Its components look like neurons (analog internals, spikes, etc). Algorithmic: 2) It performs the same real-time 3-D tomography/simulation/control which real brains perform. This second approach is unusual in ignoring "neurons" entirely, but has the advantage that 3-D reconstruction from undersampled, quantized inputs is a well-posed computational problem, although it does require a difficult-to-imagine continuous, self-amplifying 3-D representational medium to make it work. My caclulations indicate the second approach is at least 10^8 more efficient than the circuit-based approach at doing what brains actually do (https://arxiv.org/pdf/1409.8275.pdf).


"Brain-like" is not a binary property. A recurrent neural network is more brain-like than a Turing machine. Numenta's Cortical Learning Algorithm is more brain-like than an RNN. Blue Brain simulator is more brain-like than CLA. And so on.



