I've just started a PhD on neuromorphic memristive systems at EPFL, so if someone wants to ask questions/chat about this I will do my best to answer, here or via the contact info in my profile :)
I've worked on HW/SW co-design for mixed-signal chips, initially for convnets and image recognition, and later for a more general DL architecture. My advisor is the guy who "found the missing memristor" at HP in 2008.
Looking back now I regret going into the HW field. I should have applied to CS rather than ECE and focused on DL algorithms (especially RL), or maybe even something like what Numenta is doing, because my primary interest is AI, not hardware to run AI.
The hardest part has been being the only DL/ML specialist in my group. Everyone else here is more HW-oriented. I only understood this when I did an internship where I worked with people I could learn from/discuss ideas with.
I looked at your CV, and my guess is that for you, the hardest part will be to focus on one thing for the next 4 years.
Define soon... In research it is already working, I think, but AFAIK in commercial products it's at most used for memory and CMOS image sensors (which are amenable to it and where density pressure is highest).
Getting production and thermal balancing right is really tough here, and remember that you need to scale it to insane production runs for it to work for ICs. But I'd not be surprised to see some breakthroughs in the next few years.
And I think memristives for memory will probably help, because they ease the thermal pressure if you manage to eliminate sneak currents. The material science/microengineering wizards in my lab are working on using the vias directly to make memristives, so obviously I already dream of stacks of memory sandwiching compute layers... but this is still very much research!
They are already on the market.
It's not compute, but it is an awesomely HDR, low-latency image sensor which gets you a motion gradient for free. The downside is no colour and no static images, but as a supplementary chip for stabilization, for robotics, or as a low-power "wake up" camera and all those applications, it is in my opinion already worth considering. And if the software and algorithmic side of image processing catches up to event-stream-based sensor data, this will be quite awesome.
Disclaimer: I'm acquainted with some of the people who worked on this and think they are all quite lovely people, so might be biased.
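To give a flavour of what event-stream-based sensor data looks like, here is a toy sketch, not tied to any real sensor SDK; the resolution, decay constant and event tuples are assumptions of mine. It just folds DVS-style events (timestamp, x, y, polarity) into a decaying per-pixel motion map:

    # Toy accumulation of DVS-style events into a decaying "motion map".
    # All numbers (resolution, decay constant) are illustrative assumptions.
    import numpy as np

    WIDTH, HEIGHT = 128, 128
    TAU = 0.05  # decay time constant in seconds (arbitrary)

    def motion_map(events, t_now):
        """events: iterable of (t, x, y, polarity), polarity in {-1, +1}.
        Recent events dominate, older ones fade exponentially."""
        frame = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
        for t, x, y, pol in events:
            frame[y, x] += pol * np.exp(-(t_now - t) / TAU)
        return frame

    # Synthetic example: an edge sweeping left to right generates +1 events
    # on its leading side and -1 events on its trailing side.
    events = [(0.001 * i, 10 + i, 64, +1) for i in range(50)] + \
             [(0.001 * i, 8 + i, 64, -1) for i in range(50)]
    print(motion_map(events, t_now=0.05).max())

No frames, no exposure: you only ever touch pixels that changed, which is where the low latency and low power come from.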
skummetmaelk: re their examples, what I should have said is that the HDR is for the event sensors, which are black/white in the direction of the event (i.e. transitions dark => bright or bright => dark). But if your use case is motion detection or tracking, that is enough as far as I'm concerned.
How, if at all, will the memory device in Intel's Optane 3D XPoint memory relate to neuromorphic systems? Is it a commercial proof of concept for a commercially viable device that, with relatively straightforward modification, can support in-memory computing in the future?
I am actually working on this partially... I wouldn't call it new paradigms, but dataflow, probabilistic computing, and parallel-by-default paradigms like VHDL or CUDA would be my best guess, if the wildest dreams come true and we actually reroute physical chips to make new models. But only in the lower layers (similar to how XLA already abstracts that stuff away for you in TensorFlow).
In the end, "all" I see happening in the near future is:
* ISA extensions to have accelerators similar to the ones we have for crypto/RNG and especially vectorization (crossbar matrix multipliers <3 ; see the sketch after this list)
* custom chips being made using the standard custom/semicustom design flows, just including memristive and maybe neuromorphic cells
* maybe something akin to FPGA programming, especially since Intel might be integrating their newly acquired Altera into consumer devices https://newsroom.intel.com/news-releases/intel-completes-acq...
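To make the crossbar point concrete: an idealised memristive crossbar does a matrix-vector product in one analog step via Ohm's and Kirchhoff's laws, where each column current is I_j = sum_i V_i * G_ij. A minimal numerical sketch, ignoring sneak currents, wire resistance and ADC/DAC quantisation; the conductance/voltage ranges and the noise level are assumptions of mine:

    # Idealised memristive crossbar computing column currents i = v @ G,
    # i.e. each column current is sum_i v[i] * G[i, j] (Ohm + Kirchhoff).
    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.uniform(1e-6, 1e-4, size=(64, 32))  # device conductances ("weights"), in siemens
    v = rng.uniform(-0.2, 0.2, size=64)         # applied row read voltages, in volts

    i_ideal = v @ G                              # one "matmul" per analog read step
    # Real devices are noisy and nonlinear; crude multiplicative noise model:
    i_noisy = i_ideal * (1 + rng.normal(0.0, 0.05, size=i_ideal.shape))

    print(np.allclose(i_ideal, G.T @ v))         # True: same matvec either way

The appeal is that the weights live exactly where the multiply happens, so you skip the memory traffic that dominates DL workloads today.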
I don't know enough about protein folding to say anything authoritative about that. In general: it depends. If it has a memory bottleneck and the computation can be expressed as matmul/matadd, then I can see memristives as accelerators. Neuromorphic stuff... my gut says no, unless you can map it onto some domain it is good at.
You are doing about as well as I did... I'm full EE, but worked as a software and AI/ML freelancer. Do some research about labs (there aren't that many) and then develop towards that.
I am wondering what you have done previously that got you into neuromorphic computing. I am currently an undergraduate in computer engineering, but the idea of neuromorphic computing is really interesting and I am curious what the academic path looks like.
I did electrical engineering and listened to a lecture by our neuroscientific systems prof https://www.nst.ei.tum.de/team/jorg-conradt/ and it kind of developed from there, since I was interested in AI and ML as well (I worked as a freelancer).
It's basically doing for compute what biomimetics https://en.wikipedia.org/wiki/Biomimetics already did for materials and robots (Festo has some amazing stuff here: https://www.youtube.com/watch?v=7-JvyzOddTM): looking at systems in nature when designing VLSI circuits. Examples would be looking at the eye for camera sensors, the ear for audio, the brain for computation, etc.
So it's an orthogonal field that happens to marry nicely with some of the properties of memristives, in that both with them and in the brain you need to account for noise, spurious firings, etc. It also gets people (like me) excited because it opens up a path to routing and synaptic plasticity, which has been missing from ML ASICs so far.
"We conclude by noting that biology has always served and will continue to serve as a great inspiration to develop methods for achieving lower-power and real-time learning systems. However, just as birds in nature may have inspired modern aeronautics technology, we eventually moved in new directions and capabilities for faster travel, larger carrying capacities and entirely different fuelling requirements. Similarly, in computing, modern application needs to go beyond those faced in nature, such as searching large databases, efficiently scheduling resources or solving highly coupled sets of differential equations. Interestingly, some of the observed characteristics in memristors may similarly provide ‘beyond biology’ opportunities in computing, taking advantage of the novel device dynamical behaviour and the network topology inspired by biology. In this regard, concepts such as the memory processing unit represent truly exciting opportunities down the road. To achieve these and other new computing systems of the future will require persistent and creative research that goes beyond any single discipline, and must include insights from neuroscience, physics, chemistry, computer science, and electrical and computer engineering, among others."
The obvious things are size, speed and integration with existing computers.
A neuron is measured in micrometers, a memristor in nanometers (and the paper suggests even single-atom devices may be possible).
A neuron can fire at most a couple hundred times per second; a memristor has "subnanosecond switching speed", i.e. it operates in the usual GHz operating frequency range of modern computers.
And you can integrate them at the circuit level, so they can draw on the superbiological capabilities of existing computers at native speed.
Beyond replacing current transistor logic, I find it theoretically pleasing that memristors fill the gap to complete the linking of current, voltage, charge, and flux with the existing 3 fundamental passive devices. What would basic circuit analysis lessons look like 10 years after they become commonplace?
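For reference, the pairing being alluded to, written out as the four constitutive relations, with charge q the time integral of current and flux φ the time integral of voltage. This is the standard Chua (1971) picture, not anything specific to the article:

    % Constitutive relations of the four fundamental passive two-terminal elements.
    % The memristor supplies the missing charge-flux link.
    \begin{aligned}
      \text{resistor:}  \quad & dv       = R\, di \\
      \text{capacitor:} \quad & dq       = C\, dv \\
      \text{inductor:}  \quad & d\varphi = L\, di \\
      \text{memristor:} \quad & d\varphi = M\, dq
    \end{aligned}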
> What would basic circuit analysis lessons look like 10 years after they become commonplace?
Exactly the same as they do now. Memristors aren't linear time-invariant, so it's extremely unlikely that they would show up in a basic circuit analysis lesson.
Not even inductors or caps are in a basic circuit analysis course. But this quadriga of passive components was at least mentioned at the very beginning of the first EE semester, in the components class, which focused on the physical effects (and was notoriously hard).
The "future" should be in quotes. Articles have been selling these BS for 20 years now --- and we're gonna get the same articles for the next 30 years at least....