Wednesday, September 10, 2014

The "Art" in Artificial Intelligence - Part 1

Today, we are going to launch a problem-solving exercise on what might be the single most complex topic in Information Science - Artificial Intelligence. The goal here is not to provide any sort of comprehensive survey of current theory or practice; rather, our journey begins with the stark realization of how little we've achieved in the field since the term was first coined nearly 60 years ago. This is a problem statement and problem resolution exercise, and an excellent case study in technology-focused innovation. Let's begin...

We'll start at the beginning with some definitions and a review of key assumptions.

Artificial Intelligence, Defined 
The ability of a machine to consistently demonstrate core cognitive skills generally associated with human intelligence, including learning, problem solving, intuitive reasoning, and contextual memory retention and retrieval. These skills generally imply the need to achieve some level of self-awareness.

What We Haven't Achieved, Yet
I said that this problem statement is focused on the lack of success in the AI field to date; let's try to quantify that first.

  • No computer can learn like a human.
  • No computer can speak like a human (this is deceptive: tools like Siri will provide responses back to you, but is that in fact anything like human speech? The processing that goes on within Siri is a relatively primitive form of pattern recognition compared with what even the least capable human mind can produce - see the sketch after this list).
  • No computer can handle complexity in the same manner a human can (this warrants much more explanation, and we'll come back to it).
  • No computer can problem-solve the way humans can (there are types of problem solving where computers are of course far superior to humans, yet even with all that power they still fail to solve relatively simple questions that humans handle naturally).
  • No computer has achieved anything coming close to consciousness or self-awareness (despite the endless slew of sci-fi stories where this is a common fixture).
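
To make the "primitive pattern recognition" point concrete, here is a minimal sketch in the spirit of the classic ELIZA program - a few invented regex rules mapped to canned replies. The rules, replies, and respond function are purely illustrative assumptions, not how Siri actually works:

```python
import re

# Hypothetical rule table: a regex pattern mapped to a canned reply template.
# A real assistant has vastly more rules (plus speech recognition), but the
# principle - match a pattern, fill in a template - is the same.
RULES = [
    (re.compile(r"\bweather in (\w+)", re.I), "Here is the forecast for {0}."),
    (re.compile(r"\bremind me to (.+)", re.I), "OK, I'll remind you to {0}."),
    (re.compile(r"\bwho is (.+)\?", re.I), "Searching the web for '{0}'..."),
]

def respond(utterance: str) -> str:
    """Return the first matching canned reply, or a fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Sorry, I didn't catch that."  # no match -> no 'thought' at all

if __name__ == "__main__":
    print(respond("What's the weather in Dayton?"))
    print(respond("Remind me to call Mom"))
    print(respond("Why do humans dream?"))  # falls straight through
```

The third query falls straight through to the fallback: there is no model of meaning anywhere to consult, which is exactly the gap between pattern recognition and genuine human speech.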

Anybody trying to solve the AI problem is a mad scientist, right? 
Now, there's another ethical or moral side to this topic which we won't jump into until the end - the question of whether we should even try to endow a machine with these traits - but then again, it is likely that someone will do this regardless of any ethical objections. Part of the human learning process seems to require learning from our mistakes - a trait we may eventually pass along to artificial entities, someday. But back to the problem.

Challenging Assumptions
As with most problem spaces, the set of initial assumptions tends to drive all else until or unless those assumptions evolve in some fashion. For Artificial Intelligence, a number of assumptions have helped define what it's become to date and also help to explain its limited success - they include the following:
  • The notion that brute-force computing power will eventually resolve many issues and push through various AI barriers. This is partially true, but then again we sent Apollo to the Moon with (by today's standards) the computing power of a standard calculator; how much computing power do we really need to mimic human thought? Nature got us here through elegance, not waste.
  • The notion that we fully understand how the human mind creates consciousness or exercises cognitive capability. We don't, yet.
  • The very flawed notion that machine learning can or should have any connection to the current methods we use to teach each other. 
  • The lack of sensory input associated with most AI paradigms. AI is I/O dependent, and that usually means keyboard and mouse, although images, video, and audio have now begun to play an important role. We'll get to this in more detail later.
  • The notion that simulation can perform like the real thing; building to the simulation ensures a thing always remains a simulation (thus never achieving actual reproduction). This has some interesting implications which will eventually get us into a discussion of genetic engineering. 
  • A lack of focus on natural language. This at first proved too difficult, and natural language now does factor into much of the AI research going on. However, natural language hasn't been treated as the core logic of an AI system - but it should be - instead we tend to view it in terms of how an AI system (or any other type of system, for that matter) can interact with humans (or just capture human speech accurately). A sketch of that conventional layering follows below.
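
To illustrate that last bullet, here's a toy sketch of the conventional layering, where language is parsed once at the boundary into a rigid internal structure and then discarded. The Command type and parse function are hypothetical stand-ins for illustration, not any real system's API:

```python
from dataclasses import dataclass

# Conventional layering: natural language is translated once, at the system
# boundary, into a fixed internal structure. All downstream 'reasoning'
# operates on the structure - the language itself is thrown away.

@dataclass
class Command:
    action: str
    subject: str

def parse(utterance: str) -> Command:
    # Grossly simplified boundary layer: "turn off X" -> Command
    words = utterance.lower().split()
    if words[:2] == ["turn", "off"]:
        return Command(action="deactivate", subject=" ".join(words[2:]))
    raise ValueError("unrecognized utterance")

def execute(cmd: Command) -> str:
    # The core logic never sees the original words, only the structure.
    return f"{cmd.action} -> {cmd.subject}"

print(execute(parse("Turn off the hallway lights")))
```

Notice that the core logic never sees the original words; language is squeezed into structure at the door - and the door, arguably, is exactly where the intelligence stops.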



Watson actually showed us what he/she/it was thinking - if only we could have seen into the minds of the humans, then we could have seen whether they think the same way...

A Brief Chronology of AI

  • Ancient Greece - those happy go lucky philosophers in togas considered the notion of thinking machines. 
  • The 1800s - Artificial Intelligence (although not explicitly identified as such) becomes a popular topic in Science Fiction.
  • 1930s and '40s - The Golden Age of Science Fiction includes more than a few stories about artificial brains and super robots.
  • 1955 - The term "Artificial Intelligence" is coined by John McCarthy, and within a year the first conferences on the topic are held.
  • 1968 - The film adaptation of Arthur C. Clarke's 2001: A Space Odyssey becomes a box office hit, and HAL, the emotionally unstable AI computer aboard the Jupiter-bound Discovery, becomes a celebrity. Take your stress pill, Dave.
  • 1983 - Matthew Broderick makes it big playing games with an AI Pentagon computer in WarGames - the game is Global Thermonuclear War, but of course the only winning move is not to play, right?
  • Early '80s - LISP machines, purpose-built computers for running AI's favorite language (LISP itself dates back to 1958), are commercialized.
  • Mid-1980s - Expert Systems become popular.
  • 1984 - The Terminator hits theaters: Skynet becomes 'self-aware' and Terminators start trying to kill John Connor.
  • 1987 - The pasty-faced Commander Data steals the show on Star Trek: The Next Generation.
  • 1997 - IBM's Deep Blue wipes the floor with world chess champion Garry Kasparov.
  • 2001 - A.I. Artificial Intelligence, the long-gestating Stanley Kubrick project completed by Steven Spielberg as a tribute to him, arrives in theaters; Haley Joel Osment becomes one of the last humans to play a robot (as opposed to CGI animations with voice actors playing humans, robots, and everything else).
  • 2011 - Stanford opens an online AI course to the general public; over one hundred thousand sign up - few finish the course.
  • 2011 - Siri shows us that smart phones can indeed become somewhat intelligent.
  • 2011 - IBM's Watson wipes the floor with the pantheon of Jeopardy! champions.


Don't imprint unless you're serious...

The expectations for AI have thus far radically outstripped the progress made. Yet in the same 50 or so years, we've taken the Star Trek communicator and tricorder from fiction to reality in the form of smart phones (some of which can now be loaded with apps that, paired with sensors, measure one's vital signs much like a tricorder).

A lot of smart people have tried for decades to make AI a reality, and billions - perhaps hundreds of billions - of dollars have been spent on related research across the globe. What's going wrong here? There can be only one of two possibilities:

  1. Human-like intelligence cannot be artificially created and maintained, or
  2. We've approached the problem wrong.

I contend that the second possibility is in fact what has happened. As we progress through this series, it will become clear that the "Art" I referred to in the title is the ability to pick the right path to problem resolution. In part two of this series, we will examine the question: how can machines learn?





Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks
