IBM has built a digital rat brain that could power tomorrow’s smartphones (qz.com)
77 points by ottoborden on Aug 19, 2015 | 30 comments



> a system has about 48 million digital neurons, which is roughly as many as found in the brain of a rodent.

The count of neurons alone does not make a (rat) brain. The degree to which they are connected, the higher-level layouts, the fidelity of the neurons and many other things are also necessary.

It's like filling the world's smallest bowl with a soup of short DNA strands adding up to 3 billion base pairs and saying your bowl is similar to the human genome.


The first sentence literally says why it's not just about the neuron count.

> In August 2014, IBM announced that it had built a “brain-inspired” computer chip—essentially a computer that was wired like an organic brain, rather than a traditional computer.

That's not to say it is equivalent to a rat brain; there will obviously be differences in the efficiency of components, and portions that aren't replicated correctly or are just plain done differently. But your analogy is no better than what you accused the article of.


While you are correct that the count of neurons is not the only thing the synthetic brain has going for it, merely saying 'wired like a rat brain' doesn't add much.

It does not speak to the complexity of the network. I interpreted that line as 'connected in the same topology', but immediately thought, yeah right, they definitely don't have the same 'branching factor' as a real rat brain (not sure of the correct term here, but some real neurons have many thousands, even hundreds of thousands, of connections).


...so the phrase "wired like an organic brain" is enough to brush off any skepticism?

It'd be rather impressive if it actually did a large number of things that rats do, but a testing environment for that is rather hard to arrange. In the absence of a "rat benchmark" you can build anything and say it's wired like a rat's brain.


I never said it didn't deserve criticism, just that the criticism is not absolved of the responsibility to make useful, supported arguments as well.

I thought the GP post had a point that could be made, but I think they went about it sloppily, and the analogy used in the criticism was hyperbolic in the opposite direction from the article. The article's title was mostly link-bait, but there were some weak assertions in the article that tried to back it up. The sibling comment at your level by avoid3d actually does a good job of trying to address some specific problems with the claim, and if something similar to that had been at the top level, I would either not have bothered to reply, or would have actually looked into the criticisms by researching the chip if my interest was piqued (and if not beaten to it by a useful reply from someone else). That's much more useful for discussion.


How does IBM keep doing cool stuff like this when every impression I have of them is of a slow-moving, lumbering IT services monster that's lunging to provide solutions for Big Business?


They have kept their research division alive. And they actually do a lot of research, as opposed to mooching from universities or refining what competitors do.

https://en.wikipedia.org/wiki/List_of_top_United_States_pate...


Here in Melbourne, Australia, I believe IBM Research is madly trying to hire science/mathematics PhDs. Here's a press release from 2011 [1]:

  > Each year, IBM invests an average of US$6 billion
  > globally into research and development. The new R&D
  > laboratory is the first lab of its kind to focus on
  > research and development. It is located on campus at
  > the University of Melbourne and will employ 150
  > researchers within the next five years.

Why here? It may be the case that Australia's R&D tax incentive plays a part in this [2]. Or perhaps Australia produces a decent crop of well-educated researchers while much of the economy is based around extracting and selling natural resources, so there isn't too much competition for employees. [3]

[1] http://www.austrade.gov.au/invest/doing-business-in-australi...

[2] https://www.ato.gov.au/Business/Research-and-development-tax...

[3] I may be completely wrong about this.


Australia seems like a place that pumps out a lot of researchers but doesn't have the global service export economy to build off of, like other nations sometimes do. Props to IBM for taking advantage of it.


This is pretty much it. Our universities are generally first-rate on a world scale, but we don't have the same information economy that other nations have. You see it often in our medical research, where we punch far above our weight. It's good that IBM is exploring the computer science and maths side of this.


It's an investment for the future, since the mining boom is declining. Also, research universities down under do have their niches on a global scale (e.g. medical research).


IBM seems to be leading in patent awards this year as well: http://qz.com/418068/ibm-has-been-awarded-an-average-of-21-p...

Also note that not all patents come from their Research division; anyone can file for one.


This project is actually an example of how they are slow-moving and lumbering. If they implemented state-of-the-art learning algorithms in silicon it would be amazing and potentially revolutionary and they'd have tons of customers. That would be deep convolutional neural nets. Instead they've gone with spiking neural nets, which have approximately nothing to do with the current state of the art in artificial neural networks.

Furthermore, you might be misled by their PR storm. In fact this chip doesn't implement learning at all. The learning is the important part! This chip is merely an accelerator for running pre-trained neural networks, and because of the spiking architecture those neural networks are doomed to perform poorly.
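To make that split concrete, here's a toy sketch in Python (purely my own illustration, nothing to do with IBM's actual toolchain): the weight updates, i.e. the learning, happen in ordinary software off-chip, and an accelerator like this only ever sees a frozen set of weights to run forward passes with.

  import numpy as np

  # "Learning" phase: gradient descent on a tiny logistic model, done off-chip.
  rng = np.random.default_rng(0)
  X = rng.normal(size=(200, 3))
  y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)
  w = np.zeros(3)
  for _ in range(500):
      p = 1 / (1 + np.exp(-X @ w))         # forward pass
      w -= 0.1 * X.T @ (p - y) / len(y)    # weight update: the part the chip doesn't do

  # "Deployment" phase: the accelerator only runs forward passes with frozen weights.
  def infer(x, frozen_w=w):
      return 1 / (1 + np.exp(-x @ frozen_w)) > 0.5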


This cool research (just like Watson) seems like a loss-leader for selling consulting services which tailor the tech to very narrow, specific AI purposes. They're all breathlessly announced as though revolutionary, but none seem generalizable. Still, investing in so many small bets increases their exposure to a big win in the long run...


My understanding is that part of the appeal of this chip is the power consumption. 100 mW sounds pretty stingy to me, but I'm not terribly knowledgeable either. So I think they're trading off capability for power consumption by implementing a spiking net rather than a convolutional one.


Thank you, seriously, thank you, for pointing out the obvious that eludes so many... Have an upvote...


"How can daddy's job be so boring when his hobbies are so cool"


I had that impression too, but they are turning around. They are ramping up their cloud solutions, doing AI research with Watson and other cool stuff. It is a big 100+ year old company; it won't turn on a dime to change direction, but it is turning in the right direction (and I think rather quickly for such a large company).

I think we'll see good things coming out of it in the future. It certainly seems like I've heard more interesting news items from it than in past years:

Just in the last month I remember a few cool IBM news items: a Linux-only mainframe, switching to Apple MacBooks internally (they are buying something like 200K units; it used to be ThinkPad+Windows only), acquiring some cool cloud companies, and maybe another one I forgot.


I suspect that their research division is nearly as disconnected from their slow services monster mothership as Oracle's flying sailboat division. And serving the same purpose, brand grooming. (Which both do in very cool ways)


IMO it would really inspire confidence to see one of these chips beat traditional neural networks at something...

Speech recognition, image recognition, tumor detection, whatever. I don't care, but something...

Right now it seems like a cheese shop that doesn't actually have any cheese. If this is such a superior super mega fantastic processor, one would think it could at least run AlexNet* or some sort of superior variant, no?

*http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf


One reason these nets are not there yet is that they are sticking to the 'firing' analogy of natural brains: binary outputs over time that average out, a technique that has so far been much slower than neural networks with continuous outputs.
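As a rough sketch of why that's slow (a toy example of my own, not the chip's actual coding scheme): to represent a single continuous activation, a rate-coded spiking unit has to average many binary spikes, so you trade one multiply-accumulate for a long window of timesteps.

  import numpy as np

  rng = np.random.default_rng(0)

  def rate_coded(p, timesteps):
      # Emit Bernoulli spikes with probability p and average them over the window.
      spikes = rng.random(timesteps) < p
      return spikes.mean()

  activation = 0.73            # a continuous unit outputs this in a single step
  for T in (10, 100, 1000):    # a rate-coded unit needs ~T steps for ~1/sqrt(T) error
      print(T, rate_coded(activation, T))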


I found an interesting comment from Yann LeCun about the TrueNorth chip which this seems to use: https://www.facebook.com/yann.lecun/posts/10152184295832143


Not totally relevant to the article, but:

> Modha said his team’s goal is to build a “brain in a shoebox,” with over 10 billion synapses, consuming less than 1 kilowatt hour of power—the minuscule amount of power the human brain requires to work.

Can anyone even guess what information they're trying to convey here? On the face of it, it makes no sense - it's like saying the brain consumes 90000 kcal, but over what period? A day? A month? A lifetime?

But I can't even figure what they're trying to say. The human brain consumes about 20W (certainly not 1 kW!), so it would take about two days to use up 1 kWh of energy. I don't think that's what they're going for, but I can't think of any other reading of it that doesn't amount to basically a guess.
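For reference, the arithmetic behind that estimate (assuming the commonly cited ~20 W figure):

  brain_watts = 20                 # rough ballpark for the human brain
  hours = 1000 / brain_watts       # 1 kWh = 1000 W sustained for one hour
  print(hours, hours / 24)         # 50 hours, i.e. roughly two days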


I imagine they meant kW, and that 20 W is close enough to 1 kW. (BTW, I thought the human brain uses a lot more than 20 W: maybe 20 W in chemical signals, but a lot more in heat?)


The human body runs at about 100-120 W, which is about 2000-2500 kcal/day. 20 W for the brain sounds right.
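The conversion checks out:

  J_PER_KCAL = 4184
  SECONDS_PER_DAY = 86400
  for watts in (20, 100, 120):
      print(watts, round(watts * SECONDS_PER_DAY / J_PER_KCAL), "kcal/day")
  # 20 W  ->  ~413 kcal/day (brain alone)
  # 100 W -> ~2065 kcal/day, 120 W -> ~2478 kcal/day (whole body)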


Here is the link to the paper on the programming language: https://dl.dropboxusercontent.com/u/91714474/Papers/020.IJCN...


I have to disagree with the paper's claim that OOP is the ideal language paradigm to be programming in. I worked for years in computational neuroscience, where most tools are OOP-based. I left academia after drinking the pure functional programming Kool-Aid; now my work focuses on getting more mainstream adoption of FP. I hope that with a large enough paradigm shift, we will be able to write software for neuromorphic chips.

My point is that when programming SNNs, the primary concern is not _encapsulation_ but rather _composition_ and _declaration_. In my opinion, most of the computational neuroscience programming field is in the imperative dark ages when it comes to actually writing software that can account for biological behavior.
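To give a flavour of what I mean by composition over encapsulation (a deliberately toy sketch in functional-style Python for brevity, nothing like a full simulator): model the dynamics as pure functions from state to state and build networks by composing them, rather than hiding mutable state inside neuron objects.

  # A leaky integrate-and-fire step as a pure function: (voltage, input) -> (voltage', spike)
  def lif_step(v, i, leak=0.9, threshold=1.0):
      v = leak * v + i
      return (0.0, 1.0) if v >= threshold else (v, 0.0)

  # A layer is just the same pure function mapped over its state; no hidden mutation.
  def layer_step(voltages, currents):
      return [lif_step(v, i) for v, i in zip(voltages, currents)]

  # Composition: route one layer's spikes through a weight matrix into the next layer.
  def connect(weights, spikes):
      return [sum(w * s for w, s in zip(row, spikes)) for row in weights]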


This is really interesting to me. I make money with OOP in the commercial arena, but I'm considering getting out and changing careers. Can you summarize WHY FP is superior? In this context is fine. If I'm asking too much, do you have a good paper that explains the necessity of FP to an OOP-head?


What FP languages/platforms do you use? There was a book about Evolutionary Neural Networks in Erlang. I've heard OCaml can be great. Clojure and Scala have a few tools like that as well...


These guys are using similar technology to the Boahen lab's:

http://web.stanford.edu/group/brainsinsilicon/index.html

In fact, I think some of the alumni now work at IBM, on this very project.



