AI reaches a crossroads

There’s been a lot of talk this past week about the extendability of current AI, and it appears that while we may have a good deal of runway left with machine learning/deep learning/pattern recognition, there’s something ahead that we don’t yet understand.

Let’s start with MIT IQ (Intelligence Quest), which is essentially a moonshot project to understand and replicate human intelligence. The Quest is attempting to answer: “How does human intelligence work, in engineering terms? And how can we use that deep grasp of human intelligence to build wiser and more useful machines, to the benefit of society?”

Where’s HAL?

The problem with AI’s deep learning today is that it’s fine for pattern recognition, but it doesn’t appear to develop any basic understanding of the world beyond recognition.

Some AI scientists concede that there’s more to human/mammalian intelligence than just pattern recognition expertise, while others disagree. MIT IQ is trying to determine what’s beyond pattern recognition.

There’s a great article in Wired about the limits of deep learning, Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning. The article says deep learning is greedy because it needs lots of data (training sets) to work; it’s brittle because if you step one inch beyond what it’s been trained to do, it falls down; and it’s opaque because there’s no way to understand how it came to label something the way it did. Deep learning is great for recognizing known patterns, but outside of that there must be more to intelligence.

The limited steps taken so far with unsupervised learning don’t show a lot of hope, yet.

“Pattern recognition” all the way down…

There’s a case to be made that all mammalian intelligence is based on hierarchies of pattern recognition capabilities.

That is, at the bottom level, human intelligence consists of pattern recognition: vision, hearing, touch, balance, taste, etc. are just sophisticated pattern recognition systems that label what we are hearing as Beethoven’s Ninth Symphony, what we are tasting as grandma’s pasta sauce, and what we are seeing as the Grand Canyon.

Then, at the next level, there’s another pattern recognition(-like) system that takes all these labels and somehow recognizes the scene as danger, romance, school, etc.

Then, at the next level, human intelligence just looks up what to do in this scene. It’s almost as if we have a defined list of action templates for what we do when we are in danger (fight or flight), in romance (kiss, cuddle or ?), in school (answer, study, view, hide, …), etc. Almost like a simple lookup table with procedural logic behind each entry.
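
To make that lookup-table picture concrete, here’s a minimal Python sketch of the idea. The labels, scenes, and action templates are all hypothetical stand-ins, not anyone’s actual model of cognition.

```python
# Toy illustration of the "pattern recognition all the way down" view:
# low-level recognizers emit labels, a second-level recognizer maps labels to a
# scene, and a lookup table maps each scene to a canned action template.

ACTION_TEMPLATES = {
    "danger":  ["fight", "flight"],
    "romance": ["kiss", "cuddle"],
    "school":  ["answer", "study", "view", "hide"],
}

def recognize_scene(labels):
    """Hypothetical second-level recognizer: map low-level labels to a scene."""
    if "screeching_tires" in labels or "growling_dog" in labels:
        return "danger"
    if "candlelight" in labels and "soft_music" in labels:
        return "romance"
    return "school"

def act(labels):
    scene = recognize_scene(labels)
    return ACTION_TEMPLATES.get(scene, ["observe"])  # default when no template fits

print(act({"screeching_tires", "crosswalk"}))  # ['fight', 'flight']
```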

One question for this view is how these action templates are defined and how many there are. If, as it seems, there’s an almost infinite number of them, how are they selected? Perhaps by some finer level of granularity in scene labeling (romance, but only flirting…).

No, it’s not …

But to other scientists, there appears to be more going on inside our brains than just pattern recognition(-like) algorithms and lookup-and-act algorithms.

For example, once I interpret the scene surrounding me as danger, romance, school, etc., I believe I start to generate lists of possible actions I could take in this domain, and then somehow I select the one that makes the most sense in this situation, or rather the one that gets me closer to my current goal (whatever that is) in this situation.

This goes beyond procedural logic and involves some sort of memory system, an action generative system, a goal generative/recollection system, a weighing of possible action scripts, etc.
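
A rough sketch of that generate/weigh/select loop might look like the following. The candidate generator and the goal-scoring function are hypothetical placeholders for the memory, action-generative, and goal systems mentioned above.

```python
def choose_action(scene, goal, generate_candidates, score):
    """Generate possible actions for a scene, weigh each against the current
    goal, and select the one that appears to move us closest to that goal."""
    candidates = generate_candidates(scene)  # stand-in for the action generative system
    if not candidates:
        return None
    return max(candidates, key=lambda action: score(action, goal, scene))

# Toy usage with made-up candidates and scores:
gen = lambda scene: ["run", "hide", "call_for_help"] if scene == "danger" else ["wait"]
score = lambda a, goal, scene: {"run": 0.7, "hide": 0.5, "call_for_help": 0.9}.get(a, 0.0)
print(choose_action("danger", "stay_safe", gen, score))  # call_for_help
```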

And what to make of the brain’s seemingly infinite capability to explain itself…

Baby intelligence

Most babies understand their parents’ language(s) and learn to crawl within months after birth. But they haven’t listened to thousands of hours of people talking or crawled thousands of miles. And yet deep learning requires even more training sets to label language properly or to learn how to crawl on four appendages. And of course, understanding language and speaking it are two different capabilities. Ditto for crawling and walking.

How does a baby learn to recognize these patterns without TB of data and millions of reinforcements (“Smile for Mommy”, say “Daddy”)? And what to make of the seemingly impossible-to-contain wanderlust of any baby given free rein of an area?

These questions are just scratching the surface of what it really means to engineer human intelligence.

~~~~

MIT IQ is one attempt to answer the question: assuming we understand how pattern recognition can be made to work well on today’s computers, what else do we need to build a more general purpose intelligence?

There are obvious ethical questions about whether we want to engineer a human level of intelligence (see my Existential risks… post). Our main concern is what it does (to humanity) once we achieve it.

But assuming we can somehow contain it for the benefit of humanity, we ought to take another look at just what it entails.

 

Photo Credits:

  • Tech trends for 2017: more AI…, the Next Silicon Valley website
  • HAL from 2001: A Space Odyssey
  • Design software test labeling…
  • Exploration in toddlers…, Science Daily website

Old world AI, Checkers, and The Champion

I read an article in The Atlantic this week (How Checkers Was Solved) about Jonathan Schaeffer, the man who solved checkers, and his quest to beat Marion Tinsley, The Champion.

But first, some personal history. While I was at university (back in the early ’70s) and first learning how to code in real languages (Fortran, 360/Assembler, IBM PL/I, COBOL), one independent project I worked on was a checkers-playing program. It made use of advanced alpha-beta search optimizations, board analysis routines, and move trees.
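
For the curious, here’s roughly what the alpha-beta idea looks like in modern Python. This is not my old PL/I code, just a minimal sketch over a generic game position; the legal_moves/apply/evaluate/is_terminal methods are hypothetical stand-ins for the board analysis routines.

```python
def alpha_beta(position, depth, alpha, beta, maximizing):
    """Minimax search with alpha-beta pruning over a generic game position."""
    # Stop at the search horizon or a finished game and score the board.
    if depth == 0 or position.is_terminal():
        return position.evaluate()            # static board analysis
    if maximizing:
        best = float("-inf")
        for move in position.legal_moves():
            best = max(best, alpha_beta(position.apply(move), depth - 1,
                                        alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:                 # opponent would never allow this line,
                break                         # so prune the rest of the subtree
        return best
    else:
        best = float("inf")
        for move in position.legal_moves():
            best = min(best, alpha_beta(position.apply(move), depth - 1,
                                        alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```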

These were the days of punched card decks and JCL, of submitting programs to run as batch jobs and getting results hours to days later. For one semester, I won the honor of consuming the most CPU time of any person in the school. I still have the card deck someplace, but it may be hard to find a card reader, let alone a PL/I compiler/DOS system, to run it.

In any case, better men than I have taken up the checkers challenge over time. And Schaeffer made it his life’s work to conquer checkers, which he did with his program, Chinook.

In my day, checkers was a young kid’s and old person’s game. It was simple enough to learn but devilishly hard to master. My program could look about 3.5 moves ahead; Schaeffer’s later program, used during an early match, was looking 16 moves ahead and was improved from there.

Besting The Champion

From the ’50s through the early ’90s, there was one man who was the undisputed champion of checkers, and that was Tinsley. Although he lost a few games to other men during his time, he never lost a match.

The article talks about how Schaeffer improved Chinook over time. At one point it had beaten Tinsley in two games but still lost the match. With a later version, it beat Tinsley a couple of times, and then Tinsley fell ill and had to leave the game, forfeiting the match; he died not long afterward.

But even after Tinsley’s death, Schaeffer kept on improving Chinook.

Early on, Schaeffer had a checkers endgame database and an opening database that were computed by Chinook: optimal move sequences from the valid openings (professional checkers has a set of three-move openings that players select at random, and the game takes off from there) and from endgames (positions with a limited number of pieces, played out to the end of the game).

These opening and endgame databases were stored for later retrieval during a game. That way, if a game fell into a known opening or endgame, the program could just follow the optimal play that had already been computed.
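
Roughly, the lookup works like the sketch below. This is not Chinook’s actual code, just an illustration; the position hashing, piece-count threshold, and fallback search are hypothetical stand-ins.

```python
# Sketch of how precomputed opening/endgame databases short-circuit live search.
OPENING_BOOK = {}   # position key -> best move, computed offline from the set openings
ENDGAME_DB = {}     # position key -> best move, computed offline for few-piece endgames

FEW_PIECES = 10     # illustrative threshold for "endgame territory"

def best_move(position, search_fn, depth=16):
    key = position.hash_key()                # hypothetical stable position hash
    if key in OPENING_BOOK:                  # still inside a known opening line
        return OPENING_BOOK[key]
    if position.piece_count() <= FEW_PIECES and key in ENDGAME_DB:
        return ENDGAME_DB[key]               # perfect play already stored
    return search_fn(position, depth)        # otherwise fall back to live search
```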

Solving checkers

As computing power increased, Chinook’s endgame database started earlier in the game, with more pieces on the board, and its opening database worked its way later into the game, following opening moves farther into the midgame.

When Schaeffer’s program solved checkers, his opening database and his endgame database essentially met in the middle of the game. At that point he had, in effect, the answer to every checkers game that could ever be played out with best play from the standard openings; it turns out to be a draw.

AI vs. humans today

AI has changed its way of operating over time. When I was coding my checkers program, it was all search trees/optimizations and board analysis. In fact, in 1997 IBM’s Deep Blue used variants of these techniques to beat Garry Kasparov, then World Chess Champion.

Today’s AI is less about search algorithms, game analyses, and game (or logic) databases and more about neural nets, deep learning, and reinforcement learning.

New AI finally conquered Go only a couple of years ago, a game that’s very much more complex than checkers or chess. But in 2017, Google DeepMind’s AlphaGo didn’t rely on handcrafted board analyses; it combined its search with neural nets, deep learning, and reinforcement learning to beat Ke Jie, the then world #1-ranked Go master.
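
To illustrate the shift, here’s a toy sketch of a learned evaluation function standing in for the handcrafted board analysis of old. This is not AlphaGo, which pairs such networks with Monte Carlo tree search and a great deal more machinery; the network shape and the position helpers are hypothetical.

```python
import torch.nn as nn

class ValueNet(nn.Module):
    """Tiny value network: learns to score positions instead of relying on a
    handcrafted evaluate() routine."""
    def __init__(self, board_squares=361):            # 19x19 Go board, flattened
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(board_squares, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Tanh())              # score in [-1, 1]: loss..win

    def forward(self, board_tensor):
        return self.layers(board_tensor)

def pick_move(position, net):
    """Greedy move choice by the learned value net (no deep search here)."""
    scored = [(net(position.apply(m).to_tensor()).item(), m)
              for m in position.legal_moves()]         # hypothetical position helpers
    return max(scored, key=lambda t: t[0])[1]
```

In a real system the network’s weights come from training, typically including self-play reinforcement learning, rather than from hand-tuned rules.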

Welcome to the new world of AI.


Analog neural simulation or digital neuromorphic computing vs. AI

DSC_9051 by Greg Gorman (cc) (from Flickr)

At last week’s IBM Smarter Computing Forum, we had a session on Watson, IBM’s artificial intelligence machine which won Jeopardy! last year, and another session on IBM-sponsored research helping to create the SyNAPSE digital neuromorphic computing chip.

Putting “Watson to work”

Apparently, IBM is taking Watson’s smarts and applying them to healthcare and other information-intensive verticals (intelligence, financial services, etc.). At the conference, IBM had Manoj Saxena, senior director of Watson Solutions, and Dr. Herbert Chase, a professor of clinical medicine from Columbia School of Medicine, come up and talk about Watson in healthcare.

Mr. Saxena’s contention, with which Dr. Chase concurred, was that Watson can play an important part in helping healthcare apply current knowledge. Watson’s core capability is the ability to ingest and make sense of information and then apply that knowledge; in this case, using medical research knowledge to help diagnose patient problems.

Dr. Chase had been struck at a young age by one patient who had what appeared to be an incurable and unusual disease. He was an intern at the time and was given the task of diagnosing her issue. Eventually, he was able to provide a proper diagnosis, but it irked him that it took so long and so many doctors to get there.

So as a test of Watson’s capabilities, Dr. Chase input this person’s medical symptoms into Watson, and it was able to provide a list of potential diagnoses. Sure enough, Watson listed the medical problem the patient actually had all those years ago.

At the time, I mentioned to another analyst that Watson seemed to represent the endgame of artificial intelligence, almost a final culmination and accumulation of 60 years of AI research, creating a comprehensive service offering for a number of verticals.

That’s all great, but it’s time to move on.

SyNAPSE is born

In the next session, IBM had Dr. Dharmendra Modha come up and talk about their latest SyNAPSE chip, a new neuromorphic digital silicon chip that mimics the brain to model neurological processes.

We are quite a ways away from productization of the SyNAPSE chip. Dr. Modha showed us a real-time demonstration of the SyNAPSE chip in action (connected to his laptop), with it interpreting a handwritten numeral into its numerical representation. I would say it’s a bit early yet to see putting “SyNAPSE to work”.

Digital vs. analog redux

I have written about the SyNAPSE neuromorphic chip and a competing technology, the direct analog simulation of neural processes before (see IBM introduces SyNAPSE chip and MIT builds analog synapse chip).  In the MIT brain chip post I discussed the differences between the two approaches focusing on the digital vs. analog divide.

It seems that IBM research is betting on digital neuromorphic computing.  At the Forum last week, I had a discussion with a senior exec in IBM’s STG group, who said that the history of electronic computing over the last half century or so has been mostly about the migration from analog to digital technologies.

Yes, but that doesn’t mean that digital is better, just easier to produce.

On that topic, I asked Dr. Modha what he thought of MIT’s analog brain chip. He said:

  • MIT’s brain chip was built on a 180nm fabrication process whereas his is on 45nm, or 4X finer. Perhaps the fact that IBM has some of the best fabs in the world may have something to do with this.
  • The digital SyNAPSE chip can potentially operate at 5.67GHz and will be absolutely faster than any analog brain simulation. Yes, but each simulated analog neuron is actually part of a parallel processing complex, and with a thousand or a million of them operating even 1,000X or a million times slower, the analog array should be able to keep up (see the rough arithmetic after this list).
  • The digital SyNAPSE chip was carefully designed to be complementary to current digital technology. As I look at IT today, we are surrounded by analog devices that interface very well with the digital computing environment, so I don’t think this will be a problem when we are ready to use it.
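
Here’s the rough arithmetic behind that parallelism point. Only the 5.67GHz figure comes from the talk; the neuron count and per-neuron rate below are purely illustrative assumptions.

```python
# Back-of-the-envelope check: can a large array of slow analog neurons keep
# pace with one fast digital chip? (Illustrative numbers only.)
digital_clock_hz = 5.67e9      # SyNAPSE's claimed potential clock rate
analog_neuron_hz = 5.67e3      # assume each analog neuron runs ~1,000,000X slower
analog_neurons = 1_000_000     # ...but assume a million of them run in parallel

digital_events_per_sec = digital_clock_hz                  # serialized on one chip
analog_events_per_sec = analog_neurons * analog_neuron_hz  # aggregate, in parallel

print(analog_events_per_sec / digital_events_per_sec)      # 1.0 -- they keep pace
```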

Analog still surrounds us and defines the real world. Someday the computing industry will awaken from its digital hobby horse and somehow see the truth in that statement.

~~~~

In any case, if it takes another 60 years to productize one of these technologies, then the Singularity is farther away than I thought; somewhere around 2071 should about do it.

Comments?