IBM’s next generation, TrueNorth neuromorphic chip

Ok, I admit it: besides being a storage nut I also have an enduring interest in AI. And as more sophisticated neuromorphic chips start to emerge, it seems to me to herald a whole new class of AI capabilities coming online. I suppose it's both a bit frightening and exciting, which is why it interests me so.

IBM announced a new version of their neuromorphic chip line, called TrueNorth, with 5B+ transistors and the equivalent of ~1M neurons. There were a number of articles on this yesterday, but the one I found most interesting was in MIT Technology Review, IBM's new brainlike chip processes data the way your brain does (based on an article in the journal Science, A million spiking-neuron integrated circuit with a scalable communication network and interface, login required). We discussed an earlier generation of their SyNAPSE chip in a previous post (see my IBM research introduces SyNAPSE chip post).

How does TrueNorth compare to the previous chip?

The previous generation SyNAPSE chip took a multi-mode approach, combining 65K "learning synapses" with ~256K "programming synapses". The current generation TrueNorth chip has 256M "configurable synapses" and 1M "programmable spiking neurons". So the current chip has roughly quadrupled the previous chip's ~256K "programming synapses" (to 1M "programmable spiking neurons") and multiplied the "configurable synapses" by a factor of 1000.

Not sure why the configurable synapses went up so high, but it could be an aspect of connectivity, something akin to a "complete graph", which has a direct edge between every pair of nodes. In a complete graph with N nodes the number of edges is N*(N-1)/2, which for 1M nodes would be ~500B edges. So TrueNorth is nowhere near a complete graph; its 256M configurable synapses work out to an average of 256 synapses per neuron.
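Here's the back-of-the-envelope arithmetic (plain Python, nothing TrueNorth-specific) behind those two numbers:

```python
# Back-of-the-envelope connectivity arithmetic for a 1M-neuron chip.

neurons = 1_000_000          # ~1M programmable spiking neurons
synapses = 256_000_000       # 256M configurable synapses

# Edges in a complete graph on N nodes: N*(N-1)/2
complete_graph_edges = neurons * (neurons - 1) // 2
print(f"complete graph edges: {complete_graph_edges:,}")   # ~500 billion

# Average fan-out actually provided by the chip
print(f"synapses per neuron: {synapses / neurons:.0f}")    # 256
```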

Analog vs. Digital?

When I last talked with IBM about an earlier version of the chip, I wondered why they used digital logic to build it rather than analog. They said digital was the way to go because it let them follow the technology curve of mainstream chip electronics.

It seemed to me at the time that if you really wanted to simulate a brain's neural processing you would want an analog approach, which should also use much less power. I wrote a couple of posts on the subject, one on MIT's analog neuromorphic chip (see my MIT builds analog neuromorphic chip post) and the other on why analog made more sense than digital technology for neuromorphic computation (see my Analog neural simulation or Digital neuromorphic computing vs. AI post).

The funny thing is that IBM's TrueNorth chip uses a lot less power (1000X less, milliwatts vs. watts) than normal CMOS chips in use today. Not sure why this would be the case with digital logic, but if it's true there may be potential to use these sorts of chips in wider applications beyond traditional AI domains.

How do you program it?

I would really like to get a deeper look at the specs for TrueNorth and its programming model. But at a conference last year IBM presented three technical papers on TrueNorth's architecture and programming capabilities (see the MIT Technology Review article, IBM scientists show blueprints for brainlike computing).

Apparently the 1M programmable spiking neurons are organized into blocks of 256 neurons each (with a prodigious number of "configurable" synapses as well). These seem equivalent to what I would call a computational unit. One programs these blocks with "corelets", which map out the neural activity that the 256-neuron blocks can perform. These corelet "programs" can also be linked together, or one can be subsumed within another, sort of like subroutines. As of last year IBM had a library of 150 corelets which do things like detect visual artifacts, detect motion in a visual image, detect color, etc.
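To make the composition idea concrete, here's a toy sketch in Python; the class and corelet names are invented for illustration and bear no relation to IBM's actual corelet programming environment:

```python
# Toy sketch of the corelet idea (invented names; IBM's real corelet tooling
# is not public in this form).

class Corelet:
    """Wraps one or more 256-neuron blocks and can subsume child corelets."""
    def __init__(self, name, blocks=1, children=None):
        self.name = name
        self.blocks = blocks              # 256-neuron blocks this corelet uses directly
        self.children = children or []    # corelets subsumed within this one

    def total_blocks(self):
        return self.blocks + sum(c.total_blocks() for c in self.children)

# Library-style corelets, analogous to the ~150 IBM described
motion = Corelet("motion-detector", blocks=4)
color  = Corelet("color-detector", blocks=2)

# Compose them into a larger corelet, like calling subroutines
tracker = Corelet("colored-object-tracker", blocks=1, children=[motion, color])
print(tracker.total_blocks(), "x 256-neuron blocks =",
      tracker.total_blocks() * 256, "neurons")
```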

Scale-out neuromorphic chips?

The abstract of the Science paper talks specifically about a communication network interface that allows TrueNorth chips to be "tiled in two dimensions" to some arbitrary size. So it appears that with the TrueNorth design, IBM has extended the within-chip interface that lets corelets call one another to go off-chip as well. With this capability they have created a scale-out model for the TrueNorth chip.

It's unclear why they went only two-dimensional rather than three, but it seems to mimic the sort of cortex-layer connections we have in our brains. Even with only two-dimensional scaling, all sorts of interesting topologies are possible.

There doesn't appear to be any theoretical limit to the number of chips that can be connected in this fashion, but I would suppose they would all need to be on a single board, or at least "close" together, because there must be some bound on propagation delay, i.e., the time it takes for a spike to traverse from one chip to the farthest chip in the chain couldn't exceed, say, 10 msec or so.
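As a rough sanity check on that worry, here's a toy calculation for chips tiled in a square 2D grid; the grid size and per-hop latency are purely assumed numbers, not anything IBM has published:

```python
import math

# Toy scale-out sanity check: chips tiled in a square 2D grid.
chips = 4096                              # assume a 64 x 64 tile of chips
side = math.isqrt(chips)                  # 64
worst_case_hops = 2 * (side - 1)          # corner-to-corner Manhattan distance

hop_latency_us = 10                       # assumed inter-chip hop latency (microseconds)
worst_case_delay_ms = worst_case_hops * hop_latency_us / 1000

print(f"{side}x{side} grid, {worst_case_hops} hops worst case, "
      f"~{worst_case_delay_ms:.2f} ms corner to corner")
# Stays well under a ~10 ms spike-propagation budget at these assumptions.
```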

So how close are we to brain level computations?

In one of my previous posts I reported Wikipedia as stating that a typical brain has 86B neurons with between 100M and 500M synapses. I was able to find the 86B number again today but couldn't find the 100M to 500M synapses quote. However, if those numbers are anywhere close to the truth, the ratio of synapses to neurons is much lower in a human brain than in the TrueNorth chip. And TrueNorth would need about 86,000 chips connected together to match the neuronal computation of a human brain.

I suppose the excess synapses in the TrueNorth chip are due to the fact that electronic connections have to be fixed in place for a neuron-to-neuron connection to exist, whereas in the brain we can always grow synapse connections as needed. Also, I read somewhere (can't remember where) that a human brain at birth has a lot more synapse connections than an adult brain, and that part of the learning process during early life is to trim the excess synapses down to something more manageable, or at least to what's needed.

So to conclude, we (or at least IBM) seem to be making good strides in coming up with a neuromorphic computational model and physical hardware, but we are still six or seven generations away from a human brain's capabilities (assuming 1000 of these chips could be connected together into one "brain"). If a neuromorphic chip generation takes ~2 years, then we should be getting pretty close to human levels of computation by 2028 or so.
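For what it's worth, here's the back-of-the-envelope arithmetic behind that generation count; the 1000-chips-per-system limit and the doubling-per-generation rate are my assumptions:

```python
import math

brain_neurons = 86e9            # ~86B neurons (the Wikipedia figure cited above)
chip_neurons = 1e6              # TrueNorth: ~1M neurons per chip

chips_needed = brain_neurons / chip_neurons
print(f"chips to match the brain's neuron count: {chips_needed:,.0f}")   # ~86,000

max_chips_per_system = 1000     # assumption: how many chips we can tie together
growth_needed = chips_needed / max_chips_per_system                      # ~86x per chip
generations = math.log2(growth_needed)   # assuming neurons/chip doubles each generation
years = generations * 2                  # ~2 years per generation

print(f"need ~{growth_needed:.0f}x more neurons per chip "
      f"= {generations:.1f} doublings = ~{years:.0f} years, i.e. ~{2014 + years:.0f}")
```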

The Tech Review article said that TrueNorth's 5B transistors are more than on any other chip IBM has produced. So this chip design seems to be pushing the limits of IBM's current technology (which is probably proof that their selection of digital logic was a wise decision).

Let's just hope it doesn't take 18 years of programming/education to attain college-level understanding…

Comments?

Photo Credit(s): New 20x [view of mouse cortex] by Robert Cudmore

Forgetting is important and other news from cognitive research

Study time by Stanković Vlada

It turns out retrieval is more important (at least for the brain) than storage.

Recent research from cognitive scientists such as Robert Bjork at the UCLA Learning and Forgetting Lab has shown that most of what we think we know about learning is wrong. (See the Learning and Forgetting Lab, Getting it wrong, and UCLA Learning and Forgetting Lab pages for more.)


The researchers have been testing people to see which approaches work better for recalling information they have studied. They found that the key to studying and actually remembering better is working on better retrieval, not better storage.

It's somewhat interesting that the scientists aren't talking about learning so much as retrieval of information. Almost as if learning were actually equivalent to information retrieval.

Stop studying the same items over and over again, just try something different

It seems that studying a single item over and over again is the wrong way to try to learn something.  A better way is to vary your studying, to examine different but related items, which somehow lets you better classify the information and provides more accessible paths for retrieving that data.

Stop studying in the same place, go someplace else

Further guidance: when trying to learn something new, vary the location, decor, or any other characteristic of the environment you study in. The key here is that these different locations add more tags/handles/indexes to the data, and the more indexing, the better the odds of retrieval success.

Stop studying, start testing

An additional way to remember better is to try to retrieve information early and often, even if it doesn't work. It appears that the more you try to recall some tidbit of information, regardless of success, the more strongly the access path is burned into your brain, so that the next time you try to recollect that information it becomes much easier to do. In fact, the suggestion is to test yourself right after learning something new, a sort of retrieval exercise without the studying. Struggling to recollect something helps?!

Stop taking notes during class, start taking them afterwards

Following in that vein, and almost unbelievably, another recommendation is to abandon note taking altogether and instead spend time after class summarizing (exercising that retrieval path again) what you were taught. The important part is to do this immediately afterwards. (Don't tell my kids!)

Stop studying continually, wait before you study again

Moreover, another suggestion is to wait before you study something again. It seems that if you study something too soon after having just studied it, you don't exercise that recall path hard enough. Rather, they advocate waiting a couple of days or weeks before studying something again in order to remember it better. Struggling to recall information is better for remembering it than having an easy time of it.

With (relatively) infinite storage, forgetting is important

Finally, the cognitive scientists seem to think that forgetting is almost as important as remembering.  From a storage perspective, it appears that the brain has an unlimited capacity to store information.  But the downside is that any retrieval takes time and effort (something akin to searching through a bunch of indexes).

What we really want is to be better able to retrieve the information that's important. Keeping all that extraneous junk readily recallable just slows down retrieval of the really good stuff. So forgetting helps purge unneeded access paths/tags/indexes, freeing up space for what really needs to be remembered.
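Putting that in storage terms (a loose analogy only, nothing to do with actual neuroscience), it's a bit like an index that evicts its least-recently-retrieved entries so that lookups of the frequently used ones stay fast:

```python
from collections import OrderedDict

# Loose storage analogy for "forgetting": an index that evicts the
# least-recently-retrieved entries so frequently used ones stay quick to find.
class ForgetfulIndex:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.entries = OrderedDict()

    def remember(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # "forget" the stalest entry

    def recall(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)      # each retrieval strengthens the path
            return self.entries[key]
        return None                            # forgotten

idx = ForgetfulIndex(capacity=3)
for fact in ["phone number", "locker combo", "old address", "wifi password"]:
    idx.remember(fact, "...")
print(idx.recall("phone number"))              # None -- purged to make room
```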

~~~~

Gosh, and to think all along all those illegible notes I took in college (and still do) really did help me learn!?

Comments?