Last year we reported on IBM’s progress in using PCM (phase change memory) to create a new neuromorphic computing architecture (see Phase Change Memory (PCM) based neuromorphic processors). Earlier we discussed IBM’s (2nd generation) True North chip and its (1st generation) Synapse chip.
This past week IBM made another cognitive computing announcement. This time they have taken their neuromorphic technologies a step closer to precisely emulating the brain’s neurological processing.
Their research paper was not directly available, but IBM Research has summarized its contents in a short web article with a video (see IBM Scientists imitate the functionality of neurons with Phase-Change device).
New fidelity in emulating neurons
IBM Research has been able to mimic two functions of a biological neuron-synapse connection within a PCM synaptic array device. That is, their PCM device can emulate:
- (Leaky) integrate and fire (LIF) – taking a number of input synapses, integrating their signals over time, and, if sufficient total voltage accumulates, firing (spiking) the emulated neuron’s output and resetting the process until the next build-up of input synapse voltage. “Leaky” means that if sufficient input voltage doesn’t arrive within a given time period, the accumulated, non-triggering voltage leaks away and the neuron resets, waiting for the next build-up of input synapse voltage to occur.
- Spike-timing dependent plasticity (STDP) – feeding back into the input synapse weights, enhancing the synapses that were signaling (providing voltage) and diminishing those that were not signaling just prior to when the neuron fired. So as the emulated neuron fires, it increases the weights of the input synapses that helped the firing occur and decreases the weights of those that didn’t.
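The two rules above can be sketched in a few lines of Python. This is a toy software model, not IBM’s analog device — the class name, constants, and the simplified pair-based STDP update are my own assumptions:

```python
class LIFNeuron:
    """Toy leaky integrate-and-fire neuron with a crude STDP rule.

    All constants are illustrative, not taken from IBM's device.
    """

    def __init__(self, n_inputs, threshold=1.0, leak=0.9, lr=0.05):
        self.w = [0.1] * n_inputs   # synaptic weights
        self.v = 0.0                # membrane potential
        self.threshold = threshold
        self.leak = leak            # fraction of potential kept per step
        self.lr = lr                # STDP learning rate

    def step(self, spikes):
        """spikes[i] is True if input synapse i fired this time step."""
        # Integrate: leak the stored potential, then add weighted input.
        self.v = self.leak * self.v + sum(w for w, s in zip(self.w, spikes) if s)
        if self.v < self.threshold:
            return False            # not enough input yet: keep leaking
        # Fire: reset, then apply STDP -- strengthen the synapses that
        # were active just before the spike, weaken the silent ones.
        self.v = 0.0
        self.w = [min(1.0, w + self.lr) if s else max(0.0, w - self.lr)
                  for w, s in zip(self.w, spikes)]
        return True
```

Repeatedly presenting the same input pattern makes the neuron fire and shift its weight toward that pattern’s synapses.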
The LIF and STDP processes in their PCM synaptic array occur asynchronously and simultaneously, giving rise to what IBM is calling event-based computation.
But what really blew me away…
All that’s interesting of course, but the real surprise was what they were able to do with the PCM synaptic array and its emulated LIF-STDP computational capabilities.
According to the video on IBM’s website (see link above), without any programming they fed the chip a noisy video stream in which two images (an IBM logo and a Watson logo) were each momentarily displayed, multiple times, while the remaining frames contained random pixels turned on and off. Each pixel in the video stream was connected to one synapse of an emulated neuron.
And over a short period (<2 min in the video; it could be longer in real time), the PCM synaptic array device learned the patterns using its LIF-STDP processes. In the end the PCM device was able to “tell” what was being signaled, resolving the noisy video stream into 3 separate images: 1) the IBM logo and the Watson avatar on top of one another; 2) the IBM logo alone; and 3) the Watson avatar alone.
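A toy simulation gives a rough sense of how that kind of unsupervised learning works. Everything here is my own construction — the frame size, rates, and constants are invented, and the real demonstration ran on IBM’s analog arrays rather than in software — but the principle is the same: a recurring “logo” pattern hidden in noise ends up owning the neuron’s strongest synapses.

```python
import random

random.seed(1)

N_PIX = 64                            # toy 64-pixel frame, one synapse per pixel
pattern = [i < 16 for i in range(N_PIX)]   # 16 pixels stand in for the "logo"

THRESH, LEAK, LR = 2.0, 0.5, 0.02
w = [0.15] * N_PIX                    # synaptic weights
v = 0.0                               # membrane potential

for t in range(2000):
    if t % 10 == 0:
        frame = pattern               # logo frame, shown every 10th step
    else:
        frame = [random.random() < 0.05 for _ in range(N_PIX)]  # noise frame
    v = LEAK * v + sum(wi for wi, s in zip(w, frame) if s)      # leaky integration
    if v >= THRESH:                   # fire...
        # ...then STDP: reward the pixels that were on, punish the rest
        w = [min(1.0, wi + LR) if s else max(0.0, wi - LR)
             for wi, s in zip(w, frame)]
        v = 0.0

logo_w = sum(w[i] for i in range(N_PIX) if pattern[i]) / 16
noise_w = sum(w[i] for i in range(N_PIX) if not pattern[i]) / 48
print(round(logo_w, 2), round(noise_w, 2))
```

Because the logo’s pixels are the only ones reliably active just before the neuron fires, their weights grow while the noise pixels’ weights decay — the neuron becomes a detector for the repeated image without any explicit programming.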
There’s more. According to the video, the chip has 10K synapses per emulated neuron. The adult human brain has something like 5,000 synapses per neuron (babies and children have more) across some 100B neurons. The research paper says the IBM PCM synaptic array had 256 neurons on it; it’s unclear to me how many of these arrays were used in the video demonstration. The article goes on to say that IBM has been able to organize 100s of neurons into “populations” that can be used to analyze realtime information streams.
Power use was also minimal. The average emulated neuron used 120 micro-watts per cycle (a 40-watt light bulb uses 40 million micro-watts, constantly). These emulated PCM synapse-neuron processes are also analog rather than digital, which is why they use so little power.
To top it all off, IBM has demonstrated billions of PCM synaptic array switching cycles in the lab, which at 100 Hz would last multiple years.
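Some quick arithmetic on those two figures (treating “billions” as 10^10 cycles — my assumption; IBM’s paper reports the actual count):

```python
# Back-of-envelope check on the power and endurance figures above.

NEURON_W = 120e-6        # 120 micro-watts per emulated neuron cycle
BULB_W = 40.0            # a 40-watt light bulb

neurons_per_bulb = BULB_W / NEURON_W
print(f"{neurons_per_bulb:,.0f} emulated neurons per light bulb's power budget")

CYCLES = 1e10            # assumed "billions" of switching cycles
RATE_HZ = 100            # switching at 100 Hz
years = CYCLES / RATE_HZ / (3600 * 24 * 365)
print(f"{years:.1f} years of continuous operation")
```

So one light bulb’s worth of power covers roughly a third of a million emulated neurons, and 10^10 cycles at 100 Hz works out to about three years of continuous use.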
There appear to be a few other types of neurons in the mammalian brain that don’t use the LIF-STDP computational model. These appear to be mostly sensory neurons, such as aural sensors (the ear’s cochlear hair cells) and visual sensors (the eye’s retinal and retinal bipolar cells). But the LIF-STDP computational model seems to be the main contributor to neurological processing as we understand it today.
If I didn’t know better, I would think IBM wants to rule cognitive computing the way it ruled mainframes in the ’50s, ’60s, and ’70s. Other than a few research organizations, I don’t see anyone else making as much progress as they have over the last couple of years.
Now if they could only scale the device to 100B neurons and make it last 150 years or so. I think then I could just download my brain and retire to live off its earnings.