Mapping the brain

Charles Bell: Anatomy of the Brain, c. 1802 by brain_blogger (cc) (From Flickr)

Read an interesting piece today on MIT News titled Patterns of connections reveal brain functions.  The article was mostly about how scientists there had managed to identify a brain region's function just by mapping that region's connections to other parts of the brain.

They had determined that facial recognition functionality could be identified purely from a region's connections to the rest of the brain. But that's not what I found interesting.

Seeing connections in living brains

By using MRI and diffusion-weighted imaging (applying the MRI magnetic field in many different directions and detecting how water diffuses), they can now identify connections between locations within a living brain.  I suppose this has been going on for quite a while now, but this is the first I have heard about it.
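For the technically inclined, here is a rough sketch of the math underneath diffusion-weighted imaging: fit a diffusion tensor to the signals measured along different gradient directions, then take its principal axis as the local fiber direction. Every number below is invented for illustration, and real tractography pipelines are far more elaborate than this.

```python
# A rough sketch of the math behind diffusion-weighted imaging, for one
# voxel. All numbers are invented for illustration; real tractography
# pipelines (FSL, MRtrix, etc.) are far more elaborate.
import numpy as np

b = 1000.0  # diffusion weighting in s/mm^2, a typical clinical value

# Six gradient directions (unit vectors): the minimum needed to recover
# the six unique elements of the symmetric 3x3 diffusion tensor D.
g = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
g /= np.linalg.norm(g, axis=1, keepdims=True)

# Hypothetical "true" tensor: water diffuses fastest along x, as it
# would inside an axon bundle running left to right.
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])  # mm^2/s

# Stejskal-Tanner signal model: S = S0 * exp(-b * g^T D g)
S0 = 100.0
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))

# ln(S/S0) = -b * g^T D g is linear in the tensor elements, so a
# least-squares solve recovers D from the measured signals.
A = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                     2 * g[:, 0] * g[:, 1],
                     2 * g[:, 0] * g[:, 2],
                     2 * g[:, 1] * g[:, 2]])
dxx, dyy, dzz, dxy, dxz, dyz = np.linalg.lstsq(
    A, -np.log(S / S0) / b, rcond=None)[0]
D_fit = np.array([[dxx, dxy, dxz], [dxy, dyy, dyz], [dxz, dyz, dzz]])

# The principal eigenvector of D is the estimated fiber direction.
evals, evecs = np.linalg.eigh(D_fit)
print("estimated fiber direction:", np.round(evecs[:, -1], 3))
```

Chaining these per-voxel fiber directions from neighbor to neighbor is what produces the connection maps the MIT team worked from.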

The article didn’t mention the granularity of the connections they were able to detect, but presumably this will improve over time as MRIs become more detailed.  Could they conceivably identify a single synapse or neuron-to-neuron connection?  Could they identify a synapse’s connection strength or, almost as important, its positive or negative gain?

Technology to live forever

Ray Kurzweil predicted that in the near future, science would be able to download a living brain into a computer, and by doing so an “individual” could live forever in “virtual life”.  One of the first steps in this process is the ability to read out neural connections.  Of course we would need more than connections alone, but mapping is a first step.

Together with brain mapping and the neuromorphic computing advances coming from IBM and MIT labs, we could conceivably do something like what Anders Sandberg and Nick Bostrom described in their Whole Brain Emulation paper.  But even with a detailed, highly accurate map of neurons and synapses, the cognitive computing elements available today are not yet ready to emulate a whole brain – thank God.

Other uses

I am a little frightened to think of the implications of such brain-mapping capabilities.  Not to mention that the ability to read connections in living brains could presumably be used to read connections in deceased (suitably preserved) brains just as well.

Would such a device be able to emulate a person’s brain well enough to extract secrets? It gives brainwashing a whole new meaning.  At a minimum, such technology could probably provide an infinitely better lie detector.

~~~~

Another step on the road to the singularity.

Comments?

Analog neural simulation or digital neuromorphic computing vs. AI

DSC_9051 by Greg Gorman (cc) (from Flickr)

At last week’s IBM Smarter Computing Forum we had a session on Watson, IBM’s artificial intelligence machine that won Jeopardy last year, and another session on the IBM-sponsored research helping to create the SyNAPSE digital neuromorphic computing chip.

Putting “Watson to work”

Apparently, IBM is taking Watson’s smarts and applying them to healthcare and other information-intensive verticals (intelligence, financial services, etc.).  At the conference, IBM had Manoj Saxena, senior director of Watson Solutions, and Dr. Herbert Chase, a professor of clinical medicine at Columbia School of Medicine, come up and talk about Watson in healthcare.

Mr. Saxena contended, and Dr. Chase concurred, that Watson can play an important part in helping healthcare apply current knowledge.  Watson’s core capability is the ability to ingest and make sense of information and then apply that knowledge: in this case, using medical research knowledge to help diagnose patient problems.

Dr. Chase had been struck at a young age by one patient who had what appeared to be an incurable and unusual disease.  He was an intern at the time and was given the task of diagnosing her condition.  Eventually he was able to provide a proper diagnosis, but it irked him that it took so long and so many doctors to get there.

So, as a test of Watson’s capabilities, Dr. Chase input this patient’s symptoms into Watson and it was able to provide a list of potential diagnoses.  Sure enough, Watson’s list included the medical problem the patient actually had all those years ago.
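To make the idea of “ingesting knowledge and applying it” a bit more concrete, here is a toy sketch of hypothesis ranking. To be clear, this is not how Watson’s DeepQA pipeline actually works; it just shows the general shape of scoring candidate answers against observed evidence, and the disease/symptom table is entirely invented.

```python
# A toy illustration of ranking candidate diagnoses against observed
# evidence. This is NOT how Watson's DeepQA pipeline works internally;
# it just shows the general shape of the idea. The disease/symptom
# table below is entirely invented for the example.
knowledge_base = {
    "Lyme disease":         {"fever", "fatigue", "joint pain", "rash"},
    "Lupus":                {"fatigue", "joint pain", "rash", "fever"},
    "Influenza":            {"fever", "fatigue", "cough", "headache"},
    "Rheumatoid arthritis": {"joint pain", "fatigue", "stiffness"},
}

def rank_diagnoses(symptoms):
    """Score each candidate by symptom overlap (Jaccard similarity)."""
    scores = {
        disease: len(symptoms & profile) / len(symptoms | profile)
        for disease, profile in knowledge_base.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])

patient_symptoms = {"fever", "fatigue", "joint pain", "rash"}
for disease, score in rank_diagnoses(patient_symptoms):
    print(f"{disease}: {score:.2f}")
```

Watson’s real trick is doing something of this flavor against millions of unstructured documents, with many evidence-scoring algorithms instead of one overlap measure.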

At the time, I mentioned to another analyst that Watson seemed to represent the end game of artificial intelligence: almost a final culmination of 60 years of AI research, creating a comprehensive service offering for a number of verticals.

That’s all great, but it’s time to move on.

SyNAPSE is born

In the next session, IBM had Dr. Dharmendra Modha come up and talk about their latest SyNAPSE chip, a new neuromorphic digital silicon chip that mimics the brain to model neurological processes.

We are quite a ways away from productization of the SyNAPSE chip.  Dr. Modha showed us a real-time demonstration of the SyNAPSE chip in action (connected to his laptop), with it interpreting a handwritten numeral into its numerical representation.  I would say it’s a bit early yet to talk about putting “SyNAPSE to work”.

Digital vs. analog redux

I have written before about the SyNAPSE neuromorphic chip and a competing technology, the direct analog simulation of neural processes (see IBM introduces SyNAPSE chip and MIT builds analog synapse chip).  In the MIT brain chip post I discussed the differences between the two approaches, focusing on the digital vs. analog divide.

It seems that IBM research is betting on digital neuromorphic computing.  At the Forum last week, I had a discussion with a senior exec in IBM’s STG group, who said that the history of electronic computing over the last half century or so has been mostly about the migration from analog to digital technologies.

Yes, but that doesn’t mean digital is better, just easier to produce.

On that topic, I asked Dr. Modha what he thought of MIT’s analog brain chip.  He said:

  • MIT’s brain chip was built on a 180nm fabrication process whereas his is on 45nm, a 4X finer process. Perhaps the fact that IBM has some of the best fabs in the world has something to do with this.
  • The digital SyNAPSE chip can potentially operate at 5.67GHz and will be absolutely faster than any analog brain simulation.   Yes, but each simulated analog neuron is one element of a massively parallel complex, and with a thousand or a million of them running at once, even 1000X or a million X slower, they should be able to keep up (see the back-of-envelope sketch after this list).
  • The digital SyNAPSE chip was carefully designed to be complementary to current digital technology.   As I look at IT today, we are surrounded by analog devices that interface very well with the digital computing environment, so I don’t think this will be a problem when we are ready to use analog chips.
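Here is the back-of-envelope sketch behind my parallelism objection above. The per-unit analog rate is my assumption; biological neurons operate somewhere in the Hz-to-kHz range, and the 5.67GHz figure is from Dr. Modha’s talk.

```python
# Back-of-envelope math for the parallelism point. The per-unit analog
# rate is an assumption for illustration; the 5.67GHz figure comes from
# Dr. Modha's talk.
digital_clock_hz = 5.67e9        # one fast, mostly serial pipeline
analog_unit_hz = 1e3             # each slow analog neuron (~1 kHz, assumed)
analog_units = 1_000_000         # a million of them working in parallel

analog_aggregate = analog_unit_hz * analog_units
print(f"digital: {digital_clock_hz:.2e} ops/s")
print(f"analog:  {analog_aggregate:.2e} events/s, "
      f"{analog_aggregate / digital_clock_hz:.0%} of the digital rate")
# Scale the analog side to a billion units (closer to brain scale) and
# it delivers ~176X the digital chip's clock rate: slow-but-massively-
# parallel keeps up, which is the point.
```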

Analog still surrounds us and defines the real world.  Someday the computing industry will awaken from its digital hobbyhorse and see the truth in that statement.

~~~~

In any case, if it takes another 60 years to productize one of these technologies, then the Singularity is farther away than I thought; somewhere around 2071 should about do it.

Comments?

MIT builds analog synapse chip

2011 Wikimedia commons (400px-Synapse_Illustration_unlabeled.svg)

Recently MIT announced a new brain chip, a breakthrough device that simulates a single brain synapse with an analog chip.

We have discussed the digital neuromorphic chip activity going on before (see my IBM introducing their SyNAPSE chip and Electro-human interface posts). However, both of those were digital; this new MIT chip is analog.  The chip uses ~400 transistors and was fabricated using standard VLSI processing.


Analog, what’s that?

Given that the world has gone digital, analog devices may be foreign to most of us.  But analog dominated the way electronics worked for the first half of the last century and was still pretty prominent during the second half.

Nowadays, such devices are used primarily in signal processing and wherever streams of data are transformed from one mode to another (serializers/deserializers).   An analog signal theoretically has infinite resolution (Wikipedia), which should make it closer to real life and may be why some stereophiles prefer records to CDs.
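Here is a quick way to see the resolution point: quantize a waveform at different bit depths and watch the error shrink with each added bit but never quite vanish. A minimal sketch:

```python
# Quantizing an "analog" waveform to n bits throws away information;
# the error shrinks as bit depth grows but never reaches zero.
import numpy as np

t = np.linspace(0, 1, 1000)
analog = np.sin(2 * np.pi * 5 * t)   # stand-in for an analog waveform

for bits in (4, 8, 16):
    levels = 2 ** bits
    # Map [-1, 1] onto the finite set of digital levels and back.
    digital = np.round((analog + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    rms_err = np.sqrt(np.mean((digital - analog) ** 2))
    print(f"{bits:2d}-bit quantization: RMS error = {rms_err:.2e}")
```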

Neurons are analog devices

That being said, it’s a treat to see some new analog technology come out that’s better than its digital counterpart.  One would have to say that neural activity is by definition analog, and as such, analog circuits should make simulating brain activity much easier.

The advantage of analog can be seen in the neural synapse, the connection between two neurons.  Information is transferred between the two neurons by the flow of ions.  In the case of the MIT synapse chip, the same sort of process occurs, but here information flows based on gradients of electrical potential.
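For a feel of the dynamics involved, below is a textbook conductance-based synapse model in Python. To be clear, this is not the MIT chip’s circuit, which runs on transistor current gradients in silicon; it’s the style of ion-channel behavior the chip reproduces, with typical textbook constants assumed for illustration.

```python
# A textbook conductance-based synapse model. NOT the MIT chip's
# circuit; just the kind of ion-channel behavior the chip emulates.
# All constants are typical textbook values, assumed for illustration.
dt = 1e-4                        # 0.1 ms time step
C, g_leak = 1e-9, 50e-9          # membrane: 1 nF capacitance, 50 nS leak
E_rest, E_syn = -70e-3, 0.0      # resting / excitatory reversal potentials (V)
tau_g, g_peak = 5e-3, 1e-9       # synaptic conductance: 5 ms decay, 1 nS peak

spike_steps = {100, 250}         # presynaptic spikes at 10 ms and 25 ms
g, v = 0.0, E_rest
trace = []
for step in range(500):          # simulate 50 ms
    if step in spike_steps:
        g += g_peak              # transmitter release opens ion channels
    g -= dt * g / tau_g          # channels close; conductance decays
    # Ions flow down the electrochemical gradient (E_syn - v), nudging
    # the postsynaptic membrane potential: an inherently analog signal.
    v += dt * (g_leak * (E_rest - v) + g * (E_syn - v)) / C
    trace.append(v)

print(f"peak EPSP: {(max(trace) - E_rest) * 1e3:.2f} mV above rest")
```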

In testament to the capabilities of the new synapse chip, the researchers were able to address a long-standing debate in neurobiology. The question was how long-term potentiation (LTP) and long-term depression (LTD), which respectively enhance and depress information transfer across the synapse, are accomplished in real neurons.  Previously, it had been postulated that LTP and LTD depend on two different mechanisms in real cells, but one theory held that, with a specific type of receptor, both LTP and LTD could be accomplished in a single way.

MIT researchers were able to configure their synapse chip to mimic that receptor and were able to show how LTP and LTD could both work through this single receptor in the brain.
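If I understand the debate correctly, the single-receptor theory in question is the calcium-control hypothesis built around the NMDA receptor: moderate calcium influx through the receptor depresses the synapse, while a large influx strengthens it. Below is a toy sketch of a rule in that style; the thresholds are arbitrary illustration numbers, not taken from the MIT work.

```python
# Toy sketch of a calcium-control plasticity rule: one receptor type
# drives both LTP and LTD, with the outcome set purely by how much
# calcium flows in. Threshold and rate values are invented.
LTD_THRESHOLD = 0.35   # moderate calcium -> weaken the synapse
LTP_THRESHOLD = 0.55   # high calcium -> strengthen the synapse

def weight_change(calcium_level, rate=0.1):
    """One plasticity mechanism, two outcomes, set by calcium level."""
    if calcium_level >= LTP_THRESHOLD:
        return +rate * (calcium_level - LTP_THRESHOLD)   # LTP
    if calcium_level >= LTD_THRESHOLD:
        return -rate * (calcium_level - LTD_THRESHOLD)   # LTD
    return 0.0                                           # too little: no change

for ca in (0.2, 0.45, 0.8):
    print(f"calcium {ca:.2f} -> weight change {weight_change(ca):+.3f}")
```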

Onto the brain

Of course, a single synapse is not much, considering the brain has 100B neurons, each with hundreds if not thousands of synapses. But it’s a start.

Naturally, considering it’s built out of transistors using CMOS technology, it should follow Moore’s law, and after 18 months or so we should have a chip with two synapses on it. Another 47 or so doublings (~70 years from now, around 2081), if Moore’s law holds, and we could have a brain-chip with 100B neurons and 100T synapses on it.

Of course, this being a prototype, I suppose with today’s fabrication capable of creating 40M transistors per chip, we may already be able to simulate 100K synapses and 100 neurons. Which means we could have a brain’s worth of neurons and synapses in another 30 doublings, or ~2056.
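For anyone who wants to check my back-of-envelope math, here it is worked out, assuming a doubling every 18 months, ~400 transistors per synapse, and the brain-scale figures above.

```python
# Checking the back-of-envelope math above. Moore's law assumed as one
# doubling every 18 months; brain scale taken as 100B neurons with
# ~1000 synapses each (100T synapses), per the figures in the post.
import math

def years_to_scale(start_synapses, target_synapses, months_per_doubling=18):
    doublings = math.log2(target_synapses / start_synapses)
    return doublings, doublings * months_per_doubling / 12

target = 100e12  # 100T synapses

# Scenario 1: start from this one-synapse prototype chip.
d, y = years_to_scale(1, target)
print(f"from 1 synapse:     {d:.0f} doublings, ~{y:.0f} years (~{2011 + y:.0f})")

# Scenario 2: start from 100K synapses (40M transistors / ~400 each).
d, y = years_to_scale(100e3, target)
print(f"from 100K synapses: {d:.0f} doublings, ~{y:.0f} years (~{2011 + y:.0f})")
```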

Analog is better than biological

The other nice thing about analog logic and transistors is that information processing in the brain-chip should be orders of magnitude faster than the brain’s biological processing.  Which is probably even more frightening.

The IBM SyNAPSE chip mentioned earlier was an all-digital creation with two chip cores, one providing “learning synapses” and the other “programmable synapses”.  This was probably an attempt to mimic neural processing in digital logic.

The analog brain-chip that MIT has invented has no such distinction, supplying all synapse functionality in ~400 transistors.   Nonetheless, any accurate simulation of neural processes can help us understand how to mimic them better. The fact that we now have an analog simulation of neural processes should help us improve the digital simulations to more closely match the brain.

—-

Not sure what we should call this chip. It’s certainly not neuromorphic, because it’s a real simulation of analog neural synapses, not a digital approximation.  I would use synapse-chip, but that’s already in use.  I kind of like brain-chip, but that may be stretching it a bit. Maybe neuron-chip is best for now.

Now that we know the date for the singularity, hopefully we can be ready to deal with whatever happens then.

Comments?