Neuromorphic hardware has been usable for about 10 years now. In that time, algorithms for neuromorphic hardware (i.e. spiking networks) have consistently performed 'almost as well' as ANN solutions on GPUs (meaning: inferior). Meanwhile, each year a new generation of GPUs comes out, built on modern processes and backed by excellent toolchains. In a direct comparison of power efficiency, GPUs win over NMHW most of the time.
I would love to see Spiking Networks and NMHW take over machine learning, but they have such a long way to go. And I seriously doubt the strategy, followed by most players, of trying to beat good old ANNs at their own game.
Unless we identify a problem set where event-based computing with spikes is the inherently natural solution, I find it hard to imagine that spiking networks will ever outcompete ANN solutions.
> Unless we identify a problem set where event-based computing with spikes is the inherently natural solution, I find it hard to imagine that spiking networks will ever outcompete ANN solutions.
i'd guess that domain would be real-time (unbuffered / unbatched) processing of raw sensory data. it seems reasonable that biological neural systems evolved for optimal processing of sensory information encoded temporally in spike trains, yet the few papers on neuromorphic computing i've seen tend to try to hammer spiking neural networks into a classic batch-based machine learning paradigm and then score them against batch-based anns.
On the other hand, even biology often uses rate codes, which are inefficient and limited in what they can represent compared to all those timing-sensitive codes: latency codes, rank order codes, phase codes, pattern codes, population codes, etc. (a small sketch of the difference follows at the end of this comment).
And when we look at the technical domain, event-based vision cameras spew out what could pass as spikes; but even in that area spiking networks have proven too limited compared to event-based algorithms that were only vaguely bioinspired. And that technology took about 20 years to go from conception to a breakthrough on the market.
So the question is whether spiking networks are indeed the future of computation. But without doubt, the concept is very interesting academically. A bit like Haskell :D
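To make the rate-code vs. timing-code distinction above concrete, here is a minimal illustrative sketch (plain Python/NumPy, made-up parameters, not taken from any particular paper) encoding the same scalar intensity either as a Poisson spike count or as the latency of a single spike:

    # Illustrative sketch: two ways to encode a scalar stimulus intensity in [0, 1].
    # Parameter values and function names are assumptions, not from any specific model.
    import numpy as np

    rng = np.random.default_rng(0)

    def rate_code(intensity, window_ms=100.0, max_rate_hz=200.0):
        # Rate code: intensity maps to a firing rate; individual spike times carry no information.
        rate_hz = intensity * max_rate_hz
        n_spikes = rng.poisson(rate_hz * window_ms / 1000.0)   # Poisson spike count in the window
        return np.sort(rng.uniform(0.0, window_ms, n_spikes))  # spike times, uniformly placed

    def latency_code(intensity, window_ms=100.0):
        # Latency code: stronger stimulus -> earlier spike; a single spike is enough.
        return np.array([window_ms * (1.0 - intensity)])

    for x in (0.2, 0.9):
        print(f"intensity {x}: rate code -> {len(rate_code(x))} spikes, "
              f"latency code -> one spike at {latency_code(x)[0]:.0f} ms")

The rate code needs many spikes and a readout window to distinguish intensities, while the latency code conveys the same value with a single well-timed spike, which is roughly the efficiency argument for timing-sensitive codes.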
> And when we look at the technical domain, event-based vision cameras spew out what could pass as spikes
i have not seen these, i'm curious. do they try to mimic early stages of the human visual system? (i.e., a mechanical v1, with outputs that actually look like the spatial and frequency tuning that is often found in v1 neurons?)
> So the question is whether spiking networks are indeed the future of computation. But without doubt, the concept is very interesting academically. A bit like Haskell :D
or if it will be something we hand code at all... i suspect that the future of computation will be derived by the machines themselves. if one can use GANs to generate entire novel cryptosystems (i read a while back that google was doing this), it seems only natural that they could be used for finding optimal computational paradigms.
although many would argue that optimal computation is computation that is best understood by humans.
The original incarnation goes by the name of Dynamic Vision Sensor (DVS), marketed by inivation, an ETH Zürich spin-off. Prophesee is another manufacturer with their own IP. I think Sony makes event-based cameras too; perhaps others as well (Samsung? or was it Huawei?).
They mimic the retina: each pixel emits events ('spikes' if you wish) when the change in luminance crosses a threshold. There is no frame clock; each pixel works asynchronously. The technology is known for extremely low latency, high temporal resolution and ultra-high dynamic range. Have a look ;)
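For a rough feel of what such a sensor outputs, here is an illustrative per-pixel sketch (plain Python; the contrast threshold, interface and log-luminance formulation are assumptions, not any vendor's actual API): an ON/OFF event is emitted whenever the log-luminance has moved by more than a fixed threshold since the last event.

    # Illustrative sketch of one DVS-style pixel: emit (time, polarity) events when the
    # log-luminance changes by more than a contrast threshold since the last event.
    # Threshold value and interface are made-up assumptions for illustration only.
    import math

    def dvs_pixel_events(luminance_samples, times, threshold=0.15):
        reference = math.log(luminance_samples[0])
        for t, lum in zip(times, luminance_samples):
            delta = math.log(lum) - reference
            while abs(delta) >= threshold:          # possibly several events per sample
                polarity = 1 if delta > 0 else -1   # ON (+1) or OFF (-1) event
                yield (t, polarity)
                reference += polarity * threshold   # move the reference level
                delta = math.log(lum) - reference

    # A pixel that brightens and then darkens produces a sparse, asynchronous event stream:
    lum = [1.0, 1.2, 1.5, 1.5, 1.1, 0.8]
    print(list(dvs_pixel_events(lum, times=range(len(lum)))))

Unchanged pixels emit nothing at all, which is where the low data rate and low latency come from.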
That just triggered me ... I actually worked on event-based vision some time ago, with the old DVS cameras specifically ... and I did the processing side of things in Haskell :-)
Um, Dayan & Abbott's "Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems" comes to mind. It's rather neuroscience heavy though.
I've looked at this before to some extent. The problem, it seems to me, is that (at least for training) adapting a model in 'real time' in response to new input is just too hard to do in practice. In reality, training a model usually involves lots of tweaks, fine-tuning and iteration that require human intervention, at which point you don't really need real time anymore. Perhaps there is a case for using such approaches for inference, although it's not something I've thought about too much.
Source: I'm an undergrad doing research in neuromorphic computing.
A lot of writing about SNNs misses recent findings on the effect of astrocytes on the dynamical state of the network. If you look at the recent SNN literature, you'll find that including astrocyte models in the networks significantly improves memory and accuracy in many cases.
One of my first memorable PhD experiences was getting invited to an impromptu meeting in Lithuania, where we met a Finnish and a Lithuanian scientist, both working on astrocyte models and wondering how they could be applied to neuromorphic hardware. Needless to say this didn't go anywhere, but at least we got to sample some really nice Lithuanian food.
Spiking networks are to Machine Learning what Haskell is to C++ and Python: while not really used to solve many real-world problems, they are extremely interesting academically, and important concepts have ended up in the mainstream, like event-based sensing and control, or event-driven signal processing.
I don't get the appeal of spiking networks. They struggle to solve problems that ANNs have already solved handily, and they don't offer much in terms of biological realism: they don't account for neuronal geometry or dendritic nonlinear phenomena, nor do they explain the protein dependence of LTP.
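For context, the point-neuron abstraction being criticised here is typically something like a leaky integrate-and-fire (LIF) unit. A minimal, illustrative sketch (Euler-integrated, made-up parameters) looks like this:

    # Illustrative leaky integrate-and-fire (LIF) point neuron; parameters are made up.
    # Everything the parent comment mentions (geometry, dendritic nonlinearities, LTP
    # biochemistry) is absent from this level of description.
    import numpy as np

    def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        # Return the time steps at which the neuron spikes for a given input current trace.
        v = v_rest
        spikes = []
        for step, i_in in enumerate(input_current):
            v += (-(v - v_rest) + i_in) * (dt / tau)   # leaky integration of the input
            if v >= v_thresh:                          # threshold crossing -> spike
                spikes.append(step)
                v = v_reset                            # instantaneous reset, no refractory detail
        return spikes

    print(lif_neuron(np.full(200, 1.5)))  # constant drive -> regular spiking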
These are two disparate applications of spiking networks. 1. Machine learning: yes, many in the field seem to be trying to reinvent ANNs with spikes. Not very useful in my opinion. 2. Modeling biological processes: a lot of progress has been made in neuroscience research thanks to spiking network models, coupled with dendritic computation and all sorts of other biological detail. But one would not normally use neuromorphic hardware if the end goal is biological realism.
Only if biological realism is required in real time and on a constrained power budget, such as on a robot, is neuromorphic hardware the weapon of choice.