Read an interesting piece today on MIT News titled "Patterns of connections reveal brain functions." The article was mostly about how scientists there had managed to identify a brain region's function by mapping its connections to other parts of the brain.
They found that the facial recognition region could be identified just by the connections it had to the rest of the brain. But that's not what I found interesting.
Seeing connections in living brains
By using MRIs and diffusion-weighted imaging (applying MRI magnetic fields in many different directions and detecting water flow), they can now identify connections between locations within a living brain. I suppose this has been going on for quite a while now, but this is the first I have heard about it.
The article didn't mention the granularity of the connections they were able to detect, but presumably this would improve over time as MRIs become more detailed. Could they conceivably identify a single synapse or neuron-to-neuron connection? Could they identify a synapse's connection strength or, almost as important, whether it is excitatory or inhibitory (its positive or negative gain)?
Technology to live forever
Ray Kurzweil predicted that in the near future, science would be able to download a living brain into a computer, and by doing so an "individual" could live forever in "virtual life". One of the first steps in this process is the ability to read out neural connections. Of course we would need more than just the connections alone, but mapping them is a first step.
Together with brain mapping and the neuromorphic computing advances coming from IBM and MIT labs, we could conceivably do something like what Anders Sandberg and Nick Bostrom described in their Whole Brain Emulation paper. But even with a detailed, highly accurate map of neurons and synapses, the cognitive computing elements available today are not yet ready to emulate a whole brain – thank God.
I am a little frightened to think of the implications of such brain mapping capabilities. Not to mention that the ability to read connections in living brains could potentially be used to read connections in deceased (presumably preserved) brains just as well.
Would such a device be able to emulate a person's brain well enough to extract secrets? It gives brainwashing a whole new meaning. At a minimum, such technology could probably provide an infinitely better lie detector.
I ranked my blog posts using a ratio of hits to post age and have identified the top 10 most popular posts for 2011 (so far):
vSphere 5 storage enhancements – We discuss some of the more interesting storage-oriented vSphere 5 announcements, which included a new DAS storage appliance, a host-based (software) replication service, storage DRS, and other capabilities.
Intel's 320 SSD 8MB problem – We discuss a recent bug (since fixed) which left the Intel 320 SSD drive with only 8MB of usable storage; we presumed the bug was in the wear-leveling/block-mapping logic of the drive controller.
Is FC dead?! – With the introduction of 40GbE FCoE just around the corner, 10GbE cards coming down in price, and Brocade's poor YoY quarterly storage revenue results, we discuss the potential implications for FC infrastructure and its future in the data center.
I would have to say #3, #5, and #9 were the most fun for me to do. Not sure why, but #10 probably generated the most Twitter traffic. Why the others were so popular is hard for me to understand.
IBM, with the help of Columbia, Cornell, the University of Wisconsin (Madison), and the University of California, has created the first generation of neuromorphic chips (press release and video), which mimic the human brain's computational architecture in silicon. The chip is a result of Project SyNAPSE (standing for Systems of Neuromorphic Adaptive Plastic Scalable Electronics).
Hardware emulating wetware
Apparently the chip supports two cores, one with 65K "learning" synapses and the other with ~256K "programmable" synapses. It's not entirely clear from the press release, but it seems each core contains 256 neuronal computational elements.
IBM's goal is a trillion-neuron processing engine with 100 trillion synapses that occupies a 2-liter volume (about the size of the brain) and consumes less than one kilowatt of power (roughly 50X the brain's ~20 watt power consumption).
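A quick back-of-envelope check of those headline numbers (my own arithmetic, not IBM's; the ~20 W figure for the brain is a commonly cited rough estimate):

```python
# Sanity-check IBM's stated goal with rough arithmetic.
neurons = 1e12           # one trillion neurons
synapses = 1e14          # 100 trillion synapses
power_w = 1000.0         # under one kilowatt
brain_power_w = 20.0     # rough estimate of human brain power draw

synapses_per_neuron = synapses / neurons   # 100 synapses per neuron
power_ratio = power_w / brain_power_w      # about 50x the brain's draw
watts_per_synapse = power_w / synapses     # 1e-11 W, i.e. 10 picowatts

print(synapses_per_neuron, power_ratio, watts_per_synapse)
```

Interestingly, 100 synapses per neuron is far sparser than the biological brain, which averages on the order of thousands of synapses per neuron.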
The IBM research team has demonstrated some typical AI applications such as simple navigation, machine vision, pattern recognition, associative memory and classification applications with the chip.
Given my history with von Neumann computing, it's kind of hard for me to envision how synapses represent "programming" in the brain. Nonetheless, Wikipedia defines a synapse as a connection between two neurons, which can take one of two forms: electrical or chemical. A chemical synapse (Wikipedia) can have different levels of strength, plasticity, and receptivity. Sounds like this might be where the programmability lies.
Just what the "learning" synapses do, how they relate to the programmable synapses, and how they work is another question entirely.
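To make the idea concrete, here is a toy sketch of my own (not IBM's actual design) of how a vector of synaptic weights can serve as both the "program" and the thing that learns. Fixed weights determine whether a neuron fires (the "programmable" view); a simple Hebbian update rule adjusts weights based on activity (the "learning" view). The threshold and learning rate are arbitrary illustrative values:

```python
def fires(weights, inputs, threshold=1.0):
    """'Programmable' view: behavior is fixed entirely by the weights.
    Positive weights are excitatory, negative weights inhibitory;
    the neuron fires when the weighted input sum crosses a threshold."""
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold

def hebbian_update(weights, inputs, fired, rate=0.1):
    """'Learning' view: a simple Hebbian rule -- strengthen each
    synapse whose input was active when the neuron fired."""
    return [w + rate * x * (1 if fired else 0)
            for w, x in zip(weights, inputs)]

weights = [0.6, 0.6, -0.5]    # two excitatory synapses, one inhibitory
inputs = [1, 1, 0]            # the inhibitory input is quiet
out = fires(weights, inputs)  # 1.2 >= 1.0, so the neuron fires
weights = hebbian_update(weights, inputs, out)
```

With the inhibitory input active instead (`[1, 1, 1]`), the weighted sum drops to 0.7 and the neuron stays silent; change the weights and you change the "program" without touching the structure at all.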
Stay tuned: a new, non-von Neumann computing architecture was born today. Two questions to ponder:
I wonder if they will still call it artificial intelligence?