Over the past several decades, I’ve seen an endless stream of predictions and articles regarding Artificial Intelligence, and it occurred to me recently that we may have missed an important point. One reason the expectations and the reality of AI have diverged so greatly may be our obsession with the notion that in creating it we ought somehow to be mimicking ourselves through some sort of human intelligence, without perhaps truly understanding what that represents. This seems to be simultaneously our greatest goal and our worst nightmare relating to AI. But then I got to wondering: what is the difference between Artificial Intelligence and Artificial Thought, and if we viewed the question from the latter perspective, are we in fact making some real progress?
Before we can dive into that question, it is worthwhile to define what we mean by Intelligence and Thought. Here’s a good definition of Intelligence (signed by 52 scientists in the field):
A very general mental capability
that, among other things, involves the ability to reason, plan, solve problems,
think abstractly, comprehend complex ideas, learn quickly and learn from
experience. It is not merely book learning, a narrow academic skill, or
test-taking smarts. Rather, it reflects a broader and deeper capability for
comprehending our surroundings—"catching on," "making
sense" of things, or "figuring out" what to do.
I prefer this definition to some others I’ve seen because it carries the larger scope we tend to include in the notion of what Intelligence is, or what an intelligent being or entity has to be able to accomplish. This type of definition serves us, for example, when considering what extraterrestrial intelligence might be. Thought, at first glance, might be considered a subset of Intelligence: it is the act of demonstrating one’s Intelligence, but also the product of that demonstration. The definitions of Thought are somewhat less concise and often
seem a bit recursive, though. Take this one, for example: “a single act or product of thinking; idea or notion.” Thought can function as both a verb and a noun.
Perhaps the reason that it is so difficult to nail down the definition for
Thought is because we do tend to view it as a subset of the superset
Intelligence and describing a component of that process or capability without fully
explaining or understanding it is challenging. One definition that I think fits our topic a little better comes from the Merriam-Webster dictionary: “reasoning power or a developed intention or plan.” This definition of Thought doesn’t try to explain it so much as highlight what it represents. Interestingly,
this more compact definition also closely mirrors some of the first definitions
for Artificial Intelligence, but we’ll return to that in a minute.
So, at the highest level, “Intelligence” might be considered
a higher reasoning power, one that also tends to imply self-awareness and
continuity of thought and a sort of assimilation of knowledge into a Self as
time passes. Intelligence may be more than that as well, in that there is an integrative aspect to it that often isn’t included in the definitions; sometimes we view that integrative aspect as Self, but not always. If we return to the
Ontology of the subject, we might be able to state that you can’t have
Intelligence without the thought process or individual thoughts but perhaps you
can have thoughts without Intelligence per se. In other words, Intelligence, at least on the face of it, seems to represent a higher order than Thought, although, as we know from the real world, there are also various orders or levels of Intelligence.
This is more of a philosophical question than a technical
one, but let me follow it a bit further. Let’s say that there can be Thought
without Intelligence and the key difference between the two in the context of
Human Intelligence (which seems to be what many have tried to emulate) may be
self-awareness and the ability to integrate reality within a unique perspective
and context. At its most fundamental, Thought can be disconnected from other Thought
as well as from any experience or capability resembling self-awareness and/or
complex integrative interpretation. By this definition, there are likely a wide
variety of animals that have mental activity that might be described as
Thought, but certainly not Intelligence in the sense we tend to attribute to
humans. There can be lower levels of Intelligence, but the difference between those and Thought has less to do with Self and more to do with integrative interpretation. If we view any organism as a complex system or system of
systems, then some level of integrative coordination is always occurring. This
coordination is often automatic, even for humans, but sometimes it is
deliberate. Deliberate integration may be classified as Thought, but it doesn’t
necessarily require Self-Awareness.
All of this raises the question: what is Self-Awareness, and
why is it so important in the distinction between Thought and Intelligence?
Self-Awareness implies an understanding or expectation of identity.
There can, of course, be Awareness without any indication that a Self exists.
Take away Self from the equation of Intelligence and what you have left are many of the same capabilities, such as Learning (to a point), Memory (from an objective rather than a subjective context), and Thought, which can accomplish many of the same goals that intelligent Thought can, but not all of them. Take
Self away and you can also potentially discard the requirement for complex
integration (the level and type of integration can become much more selective).
We might refer to this Selfless state as “Targeted Thought” and this is a
little more interesting in that it represents areas within which Thought can be
developed, specialized and reinforced without higher-level expectations. The
difference between that and Thought in the traditional sense (as a subset of
Intelligence) is that there are likely to be clear boundaries that constrain
the operation of Targeted Thought. A Targeted Thought “Boundary” for example
might operate solely within the context of airline routes and all of the
processes directly associated with flight routing. Within that boundary,
reasoning power based on planning and guided toward specific intentions could
take place to help solve questions of efficiency or profit. This type of
Thought can still be considered “novel” thought as long as it isn’t fully
determined in advance, but it is unlikely that something artificially
constrained to a single purpose ought to be considered intelligent in the way
we tend to view Intelligence.
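A minimal sketch of what such a Targeted Thought boundary might look like in code, assuming a toy route network (the airports and costs below are invented for illustration, not real routes or fares): reasoning power is applied only inside the fixed boundary, and nothing outside it exists for the system.

```python
import heapq

# Hypothetical Targeted Thought "boundary": a fixed graph of airline routes.
# All airport codes and leg costs are illustrative only.
ROUTES = {
    "ORD": {"DEN": 180, "ATL": 120},
    "DEN": {"LAX": 150},
    "ATL": {"LAX": 210, "DEN": 90},
    "LAX": {},
}

def cheapest_route(origin, destination):
    """Plan the lowest-cost route entirely within the boundary (Dijkstra)."""
    queue = [(0, origin, [origin])]
    seen = set()
    while queue:
        cost, airport, path = heapq.heappop(queue)
        if airport == destination:
            return cost, path
        if airport in seen:
            continue
        seen.add(airport)
        for nxt, leg in ROUTES[airport].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + leg, nxt, path + [nxt]))
    return None

print(cheapest_route("ORD", "LAX"))  # → (330, ['ORD', 'ATL', 'LAX'])
```

Within the boundary the planning is genuinely open-ended (the route isn’t determined in advance), yet nothing here would tempt us to call the system intelligent in the broader sense.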
So, Artificial Thought in the abstract sense might be considered the ability to derive novel outputs from similar inputs based on real-world
situations and unique (yet mostly static) rulesets and perhaps various Targeted
boundaries. This definition provides a framework in which Artificial cognition
or Thought might obtain near-term success – a much narrower view to be sure and
one already potentially aligned with nearly every practical AI effort yet undertaken.
This definition is perhaps not too far off from what the initial definitions
for Artificial Intelligence were, yet the types of predictions that we can make
about what Artificial Thought can or can’t do will likely become much better defined within these narrower confines.
Let’s go from the abstract to the real world: Nature. In
Nature, lower level organisms likely have many potential applications for
Thought but don’t build that type of Thought around language or complex symbols
but rather through some sort of connection to various sensory capability and
stimuli. We wouldn’t expect a squirrel to memorize a complex path between several dozen trees bearing acorns, yet in order to succeed and survive through the Winter, he must have a well-defined ruleset informed by recent and current experience that facilitates such navigation. The
squirrel’s journey is a problem-solving exercise, one that he may or may not be
able to learn from or remember but one that he has to be able to repeat
successfully under dynamic conditions. The squirrel might be viewed as a system
which demonstrates a limited level of awareness, and employs novel thought in
an integrative manner so it might be considered a lower-level intelligence
(when all of those capabilities are combined).
My point with the squirrel analogy is this: if we were to attempt to create and program an Intelligent squirrel (using the more expansive scope of the definition for Intelligence) to think through all of this, we might be missing the point, or engaging in a certain amount of overkill.
We could instead take any portion of the capability the squirrel possesses and
use that to solve various types of problems or perform tasks. In other words,
in deconstructing even a lower order of Intelligence, we can extract some
Artificial Thinking capability that might prove useful – capability that likely
far outstrips what we’re currently able to do.
And even if we were to view all those capabilities combined
in the case of the squirrel’s survival, he simply doesn’t require a general
purpose Intelligence the way we’ve been defining it in AI, but he does require
a certain level or type of Thought (novel action resulting from dynamic inputs).
This is not to downplay the complexity of the squirrel’s mission, which is in fact fairly daunting. Any successful squirrel must regularly evade predators, find food, find shelter, and navigate a dynamic, round-the-clock environment. The navigation problem alone is challenging enough, and we use
such problems all the time to test the efficacy of AI programs. However, while
the squirrel needs to be able to jump from one branch to another without
falling, it doesn’t require a complex understanding of Physics to do that
successfully. A squirrel utilizes sensory information to determine speed,
distance and other factors and makes a decision as to whether he should or
shouldn’t jump. If we can achieve any sort of novel decision making, even in
areas much less complex than the squirrel experiences, we will have made
serious progress.
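That kind of jump decision can be sketched as a simple fusion of sensory estimates checked against fixed thresholds. The coefficients and margins below are invented purely for illustration; they are not drawn from any biological model, and the point is only that no physics engine is needed:

```python
# Hypothetical jump decision: fuse a few sensory readings into a yes/no
# choice using fixed, hardwired rules. All numbers are illustrative.
def should_jump(gap_m, approach_speed_ms, wind_ms, branch_sway_m):
    # Estimated reachable distance scales with approach speed (invented rule).
    reach = 0.8 * approach_speed_ms
    # Simple safety margins for wind and branch movement (also invented).
    margin = 0.2 + abs(wind_ms) * 0.05 + branch_sway_m
    return reach >= gap_m + margin

print(should_jump(gap_m=1.0, approach_speed_ms=2.0, wind_ms=1.0, branch_sway_m=0.1))  # True
print(should_jump(gap_m=1.5, approach_speed_ms=2.0, wind_ms=4.0, branch_sway_m=0.3))  # False
```

The decision is novel in the sense that it responds to dynamic inputs, yet the ruleset itself never changes, which is exactly the level of Thought the squirrel scenario requires.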
Let’s return to the concept of Thought again for a moment.
What we’ve just described as Artificial Thought in essence represents a
combination of Data Fusion (sensory data) and logical problem solving. The
rulesets can be hardwired directly into the thinking machine, without it ever
having to learn or improve upon them, although some limited form of learning
may be a possibility. Learning, natural or machine, is not an absolute requirement for Artificial Thought, whereas it might be for AI. Nature, for example, endows its various creatures with a minimal number of rulesets (just what’s needed and not much more) and only very basic learning abilities. This primitive
level of Thought and corresponding lack of self-awareness is perfectly
acceptable for a multitude of tasks (and keep in mind that a lack of
self-awareness does not imply lack of awareness). This scenario, or even pieces
of it, still more or less surpasses what Artificial Intelligence can achieve
today, yet it represents a much more realistic target if we reconsider how we
might go about achieving it. If we were to combine a number of diverse Thought
processes we might be said to be building a lower order of Intelligence, but
not yet perhaps the general purpose, human inspired intelligence most
associated with AI. And that is perfectly ok as it still represents real
progress.
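One way to picture a hardwired ruleset paired with only a rudimentary learning ability is a fixed stimulus-to-action table whose choices are biased by past success. Everything in this sketch is an invented toy; the rules never change, and the only thing that "learns" is a success counter:

```python
# Illustrative only: a hardwired ruleset plus a very basic form of learning.
# The stimulus categories and actions are invented names.
RULES = {
    ("food_scent", "no_predator"): ["forage", "cache"],
    ("food_scent", "predator"):    ["freeze", "flee"],
    ("no_scent",   "predator"):    ["flee"],
    ("no_scent",   "no_predator"): ["explore"],
}

class TargetedThinker:
    def __init__(self):
        self.successes = {}  # the only mutable state: which actions worked

    def act(self, scent, threat):
        # The ruleset itself is fixed; only the preference among options shifts.
        options = RULES[(scent, threat)]
        return max(options, key=lambda a: self.successes.get(a, 0))

    def reinforce(self, action):
        self.successes[action] = self.successes.get(action, 0) + 1

t = TargetedThinker()
t.reinforce("cache")
print(t.act("food_scent", "no_predator"))  # → cache
```

This is roughly the balance Nature strikes: just enough rules, and just enough learning, with no self-awareness anywhere in the loop.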
There’s another consideration and area of confusion as well when it comes to the struggle to create Artificial Intelligence; it has to do with the obsession with creating architectures inspired by what we believe to be the structures underlying human intelligence (e.g. the human brain). Neural networks as a metaphor are perhaps the best example of this, but not the only one. Sometimes I think this is like designing a mission to Mars based on an understanding of how a bottle rocket works; while there are bound to be some similarities, our understanding of human physiology as it relates to Intelligence is still relatively primitive. More important, perhaps, is the realization that there is a massive set of applications available for machines that think
but aren’t necessarily intelligent. This means that even if we did understand
how to recreate human or even a general intelligence, the overhead for doing so
might not really be necessary or at least not yet. There’s a lot we could do
using mere Thought in a more selective sense.
And why is selective or Targeted Thought of any value? Well, if we look at IT, for example, and the amount of effort directed towards explicitly defining behaviors in code, the value should become readily apparent: Thought, even targeted and selective Thought, could make the operation of
any type of machine or system vastly more efficient. And just because the machines in question cannot appreciate the Thoughts they’re having doesn’t mean they can’t build new behaviors or learn from an initial set of foundational rules (either individually or collectively). We don’t have to worry about
recreating nature per se, we simply have to keep in mind the pragmatic
motivations behind the value proposition that nature has illustrated so
convincingly to us. If we do that, and work towards simpler goals in an
evolutionary fashion, we can grow Artificial Thought into a powerful part of
most industries. This approach also builds upon areas where success has already
occurred and has the potential to accelerate those successes, but it also tempers expectations regarding what can or even what should be done.
Artificial Thought (as opposed to Artificial Intelligence) can become much more
focused and specialized in terms of its architectural objectives.
It’s time to come full-circle and consider again why there
is such confusion and disappointment in the field of AI. It starts with the
definition for Artificial Intelligence:
Artificial intelligence (AI) is the
intelligence exhibited by machines. In computer science, an ideal
"intelligent" machine is a flexible rational agent that perceives its
environment and takes actions that maximize its chance of success at an
arbitrary goal. Colloquially, the term "artificial intelligence" is
applied when a machine mimics "cognitive" functions that humans
associate with other human minds, such as "learning" and
"problem solving." Here’s another definition:
A branch of computer science
dealing with the simulation of intelligent behavior in computers; the
capability of a machine to imitate intelligent human behavior.
When I refer to Thought versus Intelligence, I’m not
implying that Thought can or should resemble human thought, rather what I’m
positing is that Thought is any cognitive ‘processing’ that isn’t explicitly
programmed up front (e.g. it is semi-random in nature based upon inputs fed
into it). We’re not talking about simulating human intelligence or behavior, and this type of thought is limited in terms of intuitive or contextual capability; it is certainly not creative Thought. The distinctions I’m making
are important. In our industry (IT), we’re perhaps too enthusiastic in
pronouncing this or the other technology as being “Intelligent” in some regard.
The reality is that none of them truly are, which is also why after pursuing
Artificial Intelligence for more than 50 years, few are willing to say anyone
has actually achieved it. But that’s not to say we’ve achieved nothing – we
have in fact built a wide variety of foundational technologies which are coming
close to filling effective roles as Cognitive Aids. These technologies don’t
simulate human thought, but rather expand or supplement it with Artificial
Thought, which we can choose to apply intelligently or otherwise. The type of thinking we should be focused on is discrete, specialized, and targeted.
I suppose this dialogue risks replacing one vague and hard-to-realize concept with another, but there does need to be a
way to classify intermediary cognitive capability that goes beyond standard
computing but falls short of human cognition. We don’t need to speculate too
much on the quality of thought in various animals that clearly have the ability
to think on some level and we might extend the same courtesy to machines or
systems. We don’t have to consider either as Intelligent to appreciate some
value in what they do – and many successful organisms obviously don’t think at
all – but the ones that do can become role models so to speak for the near-term
goals associated with artificial cognition. Understanding or recreating biological neural processes isn’t necessary here either; the models we’re aiming for are pragmatic and logical, with animals simply providing a useful analogy, if nothing more (which means we’d use them as models in a way somewhat different than we might for Robotics).
Descartes once said, “I think, therefore I am.” Someday, I’m
sure there will be a machine that becomes Intelligent in this context, by
becoming self-aware. It’s time to recognize that the path towards machine
intelligence ought to follow a more rigorous evolution of less lofty goals. In
my next article in this series, I’m going to provide a framework for
classifying types of Artificial Thought and discuss how that can be allied with
architectural objectives as well as current or near-term technologies and
applications.
Copyright 2016, Stephen Lahanas