Analog neural simulation or digital neuromorphic computing vs. AI

DSC_9051 by Greg Gorman (cc) (from Flickr)

At last week’s IBM Smarter Computing Forum we had a session on Watson, IBM’s artificial intelligence machine that won Jeopardy last year, and another session on IBM-sponsored research helping to create the SyNAPSE digital neuromorphic computing chip.

Putting “Watson to work”

Apparently, IBM is taking Watson’s smarts and applying them to health care and other information-intensive verticals (intelligence, financial services, etc.).  At the conference, IBM had Manoj Saxena, senior director of Watson Solutions, and Dr. Herbert Chase, a professor of clinical medicine at Columbia School of Medicine, come up and talk about Watson in healthcare.

Mr. Saxena contended, and Dr. Chase concurred, that Watson can play an important part in helping healthcare apply current knowledge.  Watson’s core capability is the ability to ingest information, make sense of it, and then apply that knowledge.  In this case, that means using medical research knowledge to help diagnose patient problems.

Dr. Chase had been struck at a young age by one patient who had what appeared to be an incurable and unusual disease.  He was an intern at the time and was given the task of diagnosing her issue.  Eventually, he was able to provide a proper diagnosis, but it irked him that it took so long and so many doctors to get there.

So, as a test of Watson’s capabilities, Dr. Chase input this patient’s medical symptoms into Watson, and it was able to provide a list of potential diagnoses.  Sure enough, Watson’s list included the medical problem the patient actually had all those many years ago.
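To make the ingest-and-diagnose idea concrete, here is a toy sketch of ranked differential diagnosis: score each candidate condition by how many of the patient’s symptoms it explains, and return the candidates in order.  The data and scoring rule are purely illustrative assumptions of mine; Watson’s actual evidence-scoring pipeline is vastly more sophisticated.

```python
# Toy illustration of ranked differential diagnosis.
# Hypothetical conditions/symptoms; not IBM's actual method.

def rank_diagnoses(patient_symptoms, knowledge_base):
    """Return (condition, score) pairs sorted by the fraction of
    the patient's symptoms each condition explains."""
    scores = {}
    for condition, known_symptoms in knowledge_base.items():
        overlap = patient_symptoms & known_symptoms
        scores[condition] = len(overlap) / len(patient_symptoms)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A tiny "medical literature" knowledge base (made up).
kb = {
    "condition A": {"fever", "rash", "joint pain"},
    "condition B": {"fever", "cough"},
    "condition C": {"rash", "fatigue", "joint pain", "fever"},
}
patient = {"fever", "rash", "joint pain", "fatigue"}

for condition, score in rank_diagnoses(patient, kb):
    print(f"{condition}: {score:.2f}")
```

The point of the sketch is only the shape of the computation: a large ingested corpus on one side, a patient’s symptoms on the other, and a scored, ranked list of hypotheses out the bottom.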

At the time, I mentioned to another analyst that Watson seemed to represent the end game of artificial intelligence: almost a final culmination and accumulation of 60 years of AI research, creating a comprehensive service offering for a number of verticals.

That’s all great, but it’s time to move on.

SyNAPSE is born

In the next session, IBM had Dr. Dharmendra Modha come up and talk about their latest SyNAPSE chip, a new neuromorphic digital silicon chip that mimics the brain to model neurological processes.

We are quite a ways away from productization of the SyNAPSE chip.  Dr. Modha showed us a real-time demonstration of the SyNAPSE chip in action (connected to his laptop), with it interpreting a handwritten numeral into its numerical representation.  I would say it’s a bit early yet to see putting “SyNAPSE to work”.

Digital vs. analog redux

I have written about the SyNAPSE neuromorphic chip and a competing technology, the direct analog simulation of neural processes before (see IBM introduces SyNAPSE chip and MIT builds analog synapse chip).  In the MIT brain chip post I discussed the differences between the two approaches focusing on the digital vs. analog divide.

It seems that IBM research is betting on digital neuromorphic computing.  At the Forum last week, I had a discussion with a senior exec in IBM’s STG group, who said that the history of electronic computing over the last half century or so has been mostly about the migration from analog to digital technologies.

Yes, but that doesn’t mean that digital is better, just easier to produce.

On that topic, I asked Dr. Modha what he thought of MIT’s analog brain chip.  He said:

  • MIT’s brain chip was built on a 180nm fabrication process whereas his is on 45nm, a 4X finer feature size. Perhaps the fact that IBM has some of the best fabs in the world may have something to do with this.
  • The digital SyNAPSE chip can potentially operate at 5.67GHz and will be absolutely faster than any analog brain simulation.   Yes, but each analog simulated neuron is actually one node of a parallel processing complex, and with a thousand or a million of them operating even 1000X or a million times slower, it should be able to keep up.
  • The digital SyNAPSE chip was carefully designed to be complementary to current digital technology.   As I look at IT today, we are surrounded by analog devices that interface very well with the digital computing environment, so I don’t think this will be a problem when we are ready to use it.

Analog still surrounds us and defines the real world.  Someday the computing industry will awaken from its digital hobby horse and see the truth in that statement.


In any case, if it takes another 60 years to productize one of these technologies, then the Singularity is farther away than I thought; somewhere around 2071 should about do it.


2 thoughts on “Analog neural simulation or digital neuromorphic computing vs. AI”

  1. Thanks for writing this Ray. The things that are coming out of Watson are quite remarkable, something we probably wouldn't have expected only a few years ago. Which is a bit like the SyNAPSE story. Not sure it will be another 60 years, but either way the whole idea got my friend Jessica thinking about this (no pun intended) and its implications for us all in terms of the way we use our own brains.

    1. Karl, thanks for the comment. Watson can do a lot of things in almost any information-intensive industry (and aren't all industries information intensive today?). How long SyNAPSE or MIT's brain chip takes to become a real product offering is another question. New paradigms take time to change industry mindsets. The Von Neumann/Turing computing architecture took off pretty early on because it was really the only way, with the technology of the time, to create programmable computers vs. plugged-in computers.

       From today's perspective, AI and neuromorphic/brain-simulation cognitive computing will have to compete against one another over the next couple of decades as to which one provides the best approach to creating intelligent machines. It's unclear today whether some future advance along the lines of filling out Watson's AI or one of the other approaches will win out in the end. The dynamic of multiple competing models for intelligent machines will make it harder for any new approach to take hold.

       Whether it takes 60 years to make it is just a guess on my part. But AI has been around at least since the mid-1950s, and although various components have become piecemeal products (voice recognition, navigation, pattern recognition, etc.), Watson is one of the first to put all of AI into a single system, one that can almost be re-purposed just by linking it to another catalog of research journals. Of course, this belittles all the work to change Watson from a Jeopardy performer to a full-fledged service offering that can handle 1000s of requests at a time. But once that's been done, it seems to me that all it takes is a new knowledge base for Watson to become a competent information assistant to yet another vertical.

       Ray
