IBM’s next generation, TrueNorth neuromorphic chip

Ok, I admit it: besides being a storage nut I also have an enduring interest in AI. And as more sophisticated neuromorphic chips start to emerge, they seem to herald a whole new class of AI capabilities coming online. I suppose it's both a bit frightening and a bit exciting, which is why it interests me so.

IBM announced a new version of their neuromorphic chip line, called TrueNorth, with over 5B transistors and the equivalent of ~1M neurons. There were a number of articles on this yesterday, but the one I found most interesting was in MIT Technology Review, IBM's new brainlike chip processes data the way your brain does (based on an article in the journal Science [login required], A million spiking-neuron integrated circuit with a scalable communication network and interface).  We discussed an earlier generation of their SyNAPSE chip in a previous post (see my IBM research introduces SyNAPSE chip post).


How does TrueNorth compare to the previous chip?

The previous-generation SyNAPSE chip took a multi-mode approach, using 65K "learning synapses" together with ~256K "programming synapses". The current-generation TrueNorth chip has 256M "configurable synapses" and 1M "programmable spiking neurons".  So the current chip has roughly quadrupled the previous chip's "programmable" count (256K to 1M) and multiplied its "configurable synapses" by a factor of ~1,000 (256K to 256M).

Not sure why the configurable synapses went up so high, but it could be an aspect of connectivity, something akin to a "complete graph", which has a direct edge between every pair of nodes. In a complete graph with N nodes the number of edges is N*(N-1)/2, which for 1M nodes would be ~500B edges, so the chip is clearly nowhere near a complete graph across all 1M neurons. But if each 256-neuron block (more on those below) is fully connected within itself through something like a 256x256 crossbar, that's ~65K connections per block, and ~4K such blocks works out to ~268M connections, which lines up nicely with the 256M configurable synapses.
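To put some numbers behind that back-of-the-envelope reasoning, here's a quick sketch of the arithmetic in Python; the only inputs are the figures above (1M neurons, 256M synapses, 256-neuron blocks), and the 4,096-block count simply comes from dividing 1M by 256.

```python
# Back-of-the-envelope connectivity math for TrueNorth
neurons = 1_000_000
block_size = 256
blocks = neurons // block_size                    # 4,096 blocks of 256 neurons

# Edges in a complete graph over all 1M neurons: N*(N-1)/2
complete_graph_edges = neurons * (neurons - 1) // 2
print(f"Complete graph over 1M neurons: {complete_graph_edges:.2e} edges")   # ~5e11

# Full connectivity within each 256-neuron block (assumed 256x256 crossbar)
crossbar_per_block = block_size * block_size      # 65,536 connections per block
total_crossbar = blocks * crossbar_per_block
print(f"All-to-all within blocks: {total_crossbar:,} connections")           # ~268M, close to 256M
```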

Analog vs. Digital?

When I last talked with IBM about the earlier version of the chip, I wondered why they used digital logic to create it rather than analog. They said digital was the way to go in order to better follow the technology curve of mainstream chip electronics.

It seemed to me at the time that if you really wanted to simulate a brain's neural processing, you would want to use an analog approach, and that this should use much less power. I wrote a couple of posts on the subject, one on MIT's analog neuromorphic chip (see my MIT builds analog neuromorphic chip post) and the other on why analog made more sense than digital technology for neuromorphic computation (see my Analog neural simulation or Digital neuromorphic computing vs. AI post).

The funny thing is that IBM’s TrueNorth chip uses a lot less power (1000X less, milliwatts vs. watts) than the normal CMOS chips in use today. Not sure why this would be the case with digital logic, but if it's true, maybe there’s potential to utilize these sorts of chips in wider applications beyond just traditional AI domains.

How do you program it?

I would really like to get a deeper look at the specs for TrueNorth and its programming model.  But there was a conference last year where IBM presented three technical papers on TrueNorth's architecture and programming capabilities (see the MIT Technology Review article: IBM scientists show blueprints for brainlike computing).

Apparently the 1M programmable spiking neurons are organized into blocks of 256 neurons each (with a prodigious number of "configurable" synapses as well). These seem equivalent to what I would call a computational unit. One programs these blocks with "corelets", which map out the neural activity that a 256-neuron block can perform. These corelet "programs" can also be linked together, or one can be subsumed within another, sort of like subroutines.  As of last year IBM had a library of 150 corelets which do things like detect visual artifacts, detect motion in a visual image, detect color, etc.
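IBM hasn't published the corelet programming model in any detail that I've seen (and whatever it is, it isn't Python), so the following is a purely hypothetical sketch of the composition idea described above: small 256-neuron blocks wrapped as reusable units that can be wired together or nested like subroutines. Every class and function name here is invented for illustration.

```python
# Hypothetical illustration of "corelet"-style composition -- not IBM's actual API.
class Corelet:
    """A reusable unit that maps work onto one or more 256-neuron blocks."""
    def __init__(self, name, blocks=1):
        self.name = name
        self.blocks = blocks        # how many 256-neuron blocks this corelet consumes
        self.children = []          # corelets can subsume other corelets, like subroutines

    def compose(self, other):
        """Nest another corelet inside this one."""
        self.children.append(other)
        return self

    def total_blocks(self):
        return self.blocks + sum(c.total_blocks() for c in self.children)

# Link smaller corelets into a larger one, roughly in the spirit of the ~150-corelet
# library (edge detection, motion, color, etc.) mentioned above.
edges  = Corelet("detect_edges", blocks=4)
motion = Corelet("detect_motion", blocks=8)
vision = Corelet("visual_frontend").compose(edges).compose(motion)

print(vision.total_blocks(), "x 256-neuron blocks used")   # 13 blocks in this toy example
```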

Scale-out neuromorphic chips?

The abstract of the Science paper talked specifically about a communications network interface that allows TrueNorth chips to be "tiled in two dimensions" to some arbitrary size. So it appears that with the TrueNorth design, IBM has somehow extended the within-chip interface that allows corelets to call one another so it can go off-chip as well. With this capability they have created a scale-out model for the TrueNorth chip.

It's unclear why they felt it had to be only two-dimensional rather than three, but it seems to mimic the sort of cortex-layer connections we have in our brains today. Even with only two-dimensional scaling, though, all sorts of interesting topologies are possible.

There doesn’t appear to be any theoretical limit to the number of chips that can be connected in this fashion, but I would suppose they would all need to be on a single board, or at least "close" together, because there's presumably some propagation-delay budget that can't be exceeded, i.e., the time it takes for a spike to traverse from one chip to the farthest chip in the chain couldn't exceed, say, 10 msec or so.
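Just to illustrate the kind of budget I'm imagining, here's a toy calculation. The 10 msec figure is my guess from above, and the per-chip-hop latency is a made-up illustrative number, not anything IBM has published.

```python
# Toy latency budget for a 2-D tile of TrueNorth chips.
# Both numbers below are assumptions for illustration only.
budget_msec = 10.0          # guessed end-to-end spike budget from the post
hop_latency_usec = 10.0     # assumed per-chip-hop latency (illustrative, not published)

max_hops = int(budget_msec * 1000 / hop_latency_usec)
print(f"Max hops within budget: {max_hops}")            # 1,000 hops

# In a square 2-D tile, the worst-case corner-to-corner path is ~2*(n-1) hops,
# so an n x n array fits the budget when 2*(n-1) <= max_hops.
n = max_hops // 2 + 1
print(f"Roughly a {n} x {n} array ({n*n:,} chips) before the budget is exceeded")
```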

So how close are we to brain-level computations?

In one of my previous posts I reported Wikipedia as stating that a typical brain has 86B neurons with between 100M and 500M synapses. I was able to find the 86B number again today, but couldn't find the 100M-500M synapse quote.  However, if those numbers are close to the truth, the ratio of synapses to neurons is much lower in a human brain than in the TrueNorth chip. And TrueNorth would need about 86,000 chips connected together to match the neuronal computation of a human brain.
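The chip-count arithmetic is simple enough to show directly, using only the figures quoted above:

```python
# How many TrueNorth chips to match the brain's neuron count?
brain_neurons = 86e9            # ~86B neurons (Wikipedia figure cited above)
chip_neurons  = 1e6             # ~1M neurons per TrueNorth chip
chip_synapses = 256e6           # ~256M configurable synapses per chip

chips_needed = brain_neurons / chip_neurons
print(f"Chips to match the brain's neuron count: {chips_needed:,.0f}")   # 86,000

synapses_per_neuron = chip_synapses / chip_neurons
print(f"TrueNorth synapses per neuron: {synapses_per_neuron:.0f}")       # 256
```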

I suppose the excess of synapses in the TrueNorth chip is due to the fact that electronic connections have to be fixed in place ahead of time for a neuron-to-neuron connection to exist, whereas the brain can always grow new synapse connections as needed. Also, I read somewhere (can't remember where) that a human brain at birth has a lot more synapse connections than an adult brain, and that part of the learning process during early life is to trim the excess synapses down to something more manageable, or at least down to the ones actually needed.

So to conclude, we (or at least IBM) seem to be making good strides in coming up with a neuromorphic computational model and physical hardware, but we are still six or seven generations away from a human brain's capabilities (assuming ~1,000 of these chips could be connected together into one "brain" and that each generation roughly doubles the neurons per chip).  If a neuromorphic chip generation takes ~2 years, then we should be getting pretty close to human levels of computation by 2028 or so.
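Working that estimate out explicitly, with the ~1,000-chips-per-"brain" figure from above and my assumption that each ~2-year generation doubles the neurons per chip:

```python
import math

# Rough generational arithmetic behind the "six or seven generations" estimate.
chips_needed    = 86_000      # from the neuron-count math above
chips_per_brain = 1_000       # assumed number of chips connectable into one "brain"
scale_gap = chips_needed / chips_per_brain       # need ~86x more neurons per chip

generations = math.ceil(math.log2(scale_gap))    # assuming 2x neurons per generation
years_per_generation = 2
print(f"Generations needed: {generations}")                          # 7 (2^6 = 64 falls just short)
print(f"Rough ETA: {2014 + generations * years_per_generation}")     # ~2028
```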

The Tech Review article said that the 5B transistors on TrueNorth are more than on any other chip IBM has produced. So they seem to be pushing the limits of current technology with this chip design (which is probably proof that their selection of digital logic was a wise decision).

Let’s just hope it doesn’t take 18 years of programming/education for it to attain a college-level understanding…

Comments?

Photo Credit(s): New 20x [view of mouse cortex] by Robert Cudmore

Mobile health (mHealth) takes off in Kenya

Hanging out with Kenya Techies by whiteafrican (cc) (from Flickr)

Read an article today about startups and others in Kenya providing electronic medical care via mHealth and improving the country’s health care system (see Kenya’s Startup Boom).

It seems that four interns were able to create a smartphone and web App in a little over 6 months to help track Kenya’s infectious disease activity. They didn’t call it healthcare-as-a-service, nor was there any mention of the cloud in the story, but they were doing it all just the same.

Old story, new ending

The Kenyan government was in the process of contracting out the design and deployment of a new service that would track cases of infectious disease throughout the country, to enable better strategies to counteract them.  They were just about ready to sign a $1.9M contract with one mobile phone company when they decided it was inappropriate to lock in a single service provider.

So they decided to try a different approach: they contacted the head of the Clinton Health Access Initiative (CHAI), who contacted an instructor at Strathmore University, who in turn identified four recent graduates and set them to work as interns for $150/month. The interns spent the spring and summer gathering requirements and pounding out the App(s).  At the end of the summer it was up and running on smartphones and the web throughout the country.

They are now working on an SMS version of the system so that people who don’t own smartphones can also use it to record infectious disease activity. They are also taking on a completely new task: trying to track government drug shipments to hospitals and clinics to eliminate shortages and waste.

mHealth, the future of healthcare

The story cited above says that there are at least 45 mHealth programs in Kenya either actively being developed or already completed, many of them created through a startup incubator called iHub.  We have written about Kenya’s use of mobile phones to support novel services before (see Is cloud a leapfrog technology).

Some of these mHealth projects include:

  • AMPATH, which uses OpenMRS (an open-source medical records platform) and SMS messaging to remind HIV patients to take their medicines, and provides a call-in line for questions about medications or treatments,
  • Daktari, a mobile service provider’s call-a-doc service that provides a phone-in hotline for medical questions. In a country with only one doctor for every 6,000 citizens, such phone-in health care can more effectively leverage the meager healthcare resources available, and
  • MedAfrica, an App which provides doctors’ and dentists’ phone numbers and menus for finding basic healthcare and diagnostic information in Kenya.

There are many other mHealth projects on the drawing board, including a national electronic medical records (EMR) service, medical health payment cards loaded up using mobile payments, and more.

Electronic medical care through mHealth

It seems that Kenya is becoming a leading-edge provider of cloud-based mHealth solutions, mainly because the approach is inexpensive, fits well with the mobile technology that pervades the country, and can be scaled up rapidly to cover its citizens.

If Kenya can move to deploy healthcare-as-a-service using mobile phones, so can the rest of the third world.

Speaking of mHealth, I got a new free app on my iPhone the other day called iTriage. Check it out.

Comments?


Storage performance matters, even for smartphones

Portrait of a Young Girl With an iPhone, after Agnolo Bronzino by Mike Licht,... (cc) (From Flickr)


Read an interesting article from MIT’s Technology Review about a study presented at last week’s Usenix FAST (File and Storage Technologies) conference, on How Data Storage Cripples Mobile Apps.  It seems storage performance can seriously slow down smartphone functioning, not unlike IT applications (see my IO throughput vs. response time and why it matters post for more).

The smartphone research was done by NEC. They took an Android phone and modified the O/S to use an external memory card for all of the Apps’ data needs.

Then they ran a number of Apps through their paces with various external memory cards. It turned out that, depending on the memory card in use, the mobile phone’s email and Twitter Apps launched 2-3X faster.  Also, the native web App was tested with over 50 page loads and showed, at best, a 3X faster page load time.

All the tests were done using a cable to simulate advanced network connections, above and beyond today’s capabilities, in order to eliminate networking as the performance bottleneck.  In the end, faster networking didn’t have as much bearing on App performance as memory card speed did.

(NAND) memory card performance

The problem, it turns out, is due to data writes.  It seems that the non-volatile memory used in most external memory cards is NAND flash, which, as we all know, has a much slower write time than read time, by almost 1000X (see my post on Why SSD performance is such a mystery).  Most likely the memory cards are pretty “dumb”, so many of the performance-boosting techniques used in enterprise-class SSDs are not available (e.g., DRAM write buffering).

Data caching helps

The researchers did another experiment with the phone, using a more sophisticated version of data caching and a modified Facebook App.  Presumably, this new “data caching” minimized the data-write penalty by caching writes to DRAM first and only destaging data to NAND flash when absolutely necessary.  By using the more sophisticated “data caching” they were able to speed up the modified Facebook App by 4X.
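The researchers' actual caching scheme isn't described in the article, but the general write-back idea is easy to sketch: absorb writes in fast DRAM and only destage to slow NAND when the buffer fills (or on an explicit flush). Everything below, including the names and the stand-in flash-write routine, is illustrative only.

```python
# Minimal write-back cache sketch -- illustrative only, not the researchers' implementation.
class WriteBackCache:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.dirty = {}                  # key -> data held in (fast) DRAM

    def write(self, key, data):
        self.dirty[key] = data           # absorb the write in DRAM (cheap)
        if len(self.dirty) >= self.capacity:
            self.flush()                 # destage to NAND only when we must

    def flush(self):
        for key, data in self.dirty.items():
            nand_write(key, data)        # the expensive operation we were deferring
        self.dirty.clear()

def nand_write(key, data):
    pass                                 # stand-in for the slow flash write

cache = WriteBackCache(capacity=4)
for i in range(10):
    cache.write(f"status_update_{i}", b"...")   # most writes never touch NAND immediately
cache.flush()
```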

It seems that storage sophistication matters even in smartphones. I think I am going to need to have someone port the caching portions of Data ONTAP® or Enginuity™ to run on my iPhone.

Comments?