This past week, I was making the long commute between Dayton and Columbus, Ohio, and trying to amuse myself as best I could – in this case by listening to a college course on the fundamentals of Particle Physics. One might not think there is an obvious connection between Particle Physics and Artificial Intelligence, but it turns out there is at least one. The connection, in my mind at least, was the framework used in Physics to help organize the field of subatomic particles – it’s known as the Standard Model of Particle Physics.
In the Physics world of the early 20th century, as in the field of Artificial Intelligence now, a tremendous amount of information was pouring in from various sources, with very little ability to place it all into a unified context. We discovered particles, defined new mathematics and reinvented philosophy through Relativity and Uncertainty, yet without a guiding framework it all must have seemed terribly chaotic and random to those working in the field. Out of that chaos a framework emerged, though; half invented, half discovered – one that helped to focus the technology, philosophy and application of the science in question. I thought to myself: this is exactly what we’re missing in Artificial Intelligence. But as I posited in my last post, the framework that’s needed might also require a philosophical adjustment – one that frames Intelligence as a combination of evolutionary capabilities which might best be described as Artificial Thought. In that article, I tried to make the philosophical case that something like this might make sense; in this post, I’m going to get a bit more specific and examine the more pragmatic aspects of a Framework for Artificial Thought.
Let’s start at the beginning – what does this or any such “Framework” buy us? In the case of The Standard Model of Particle Physics, the framework provided the following benefits:
- The ability to place a number of potentially divergent concepts and discoveries within a unified and coherent context
- The ability to explain the nature of something in a manner consistent with empirical data
- The ability to support a variety of predictions, such as the discovery of specific types of new particles, the most recent and famous of these being the Higgs Boson
For Artificial Thought, the benefits might be a little different, but perhaps not as much as one would think. The high-level value proposition associated with a framework for Artificial Thought might include the following benefits:
- The ability to align a diverse set of AI theories, techniques and technologies within a coherent, unified context
- The ability to define a clear evolutionary path within that context whereby component capabilities can be combined to achieve ever greater orders of Thought and eventually Intelligence
- The ability to better chart and predict success in achieving Thought or AI milestones
I suppose the biggest difference between Particle Physics and Artificial Thought is that we’re bypassing the need to discover the functions of natural intelligence at the biological level. In other words, there are no CERN-like labs available to discover Thought in progress the way we discover subatomic particles. This could make our efforts harder, but perhaps only if our goal were to recreate natural intelligence, as opposed to generating capabilities which are logically similar even if not actually organic in nature. This brings us back to the central premise of the previous article: recreating the most complex and comprehensive capability is a hell of a tough goal, and we should instead worry mostly about the intermediate steps rather than the end game (and, I might add, without getting lost in the growing tangle of hundreds of immediate, lower-level details, approaches and opportunities).
Let’s take a look at what a Framework for Artificial Thought might look like…
The framework resembles an IT architecture because, in a sense, that’s what it is. Imagine a situation a few years from now where we have a galaxy of AI-related capabilities: how might they work together, and to what end? This view gives us a hint as to what that might look like. It does more than that, though; it also shows how we move from lower-level Thought to higher-level Thought, and it begins to illustrate the potential for orders of Artificial Intelligence through the integration of Thought capabilities (both within and across Tiers). The Tiers themselves mimic to some extent the natural intelligence we’ve referred to (both human and otherwise) by illustrating how basic capabilities might evolve into something more.
Tier 1 is Awareness, and I think it’s safe to say this is the area where traditional AI has made the most progress thus far, and that only stands to reason. As we discussed before, Awareness in this context is nothing at all like Self Awareness. We can imagine a rover crawling along the rocky, barren landscape of Mars, avoiding obstacles by becoming aware of them through its sensory apparatus, and that fits this tier just fine. Is the rover Intelligent? Not really, yet some of the rovers we’ve built or are building can potentially operate on their own without explicit direction or intervention from human operators. This is a good starting place…
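To make Tier 1 a bit more concrete, here is a minimal sketch of what such an awareness loop might reduce to. This is purely illustrative Python; the sensor layout, the readings and the safe-distance threshold are all hypothetical values of my own choosing. The point is simply that the system senses and reacts; nothing is learned, remembered or planned.

```python
# Tier 1 - Awareness: sense the environment and react, nothing more.
# All names and values here are hypothetical, for illustration only.

def avoid_obstacles(range_readings, safe_distance=2.0):
    """Return a steering command based solely on current sensor input."""
    # range_readings: distances (meters) at left, center, right bearings
    left, center, right = range_readings
    if center < safe_distance:
        # Obstacle ahead: turn toward the side with more clearance
        return "turn_left" if left > right else "turn_right"
    return "forward"

# Example: the rover "becomes aware" of a rock 1.2 m ahead
print(avoid_obstacles((3.5, 1.2, 0.8)))  # -> turn_left
```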
Tier 2 might witness our rover becoming more sophisticated, perhaps learning from its environment and building upon its experiences, yet still basically reacting to that environment. As you might have noticed from the diagram, it’s clear that Artificial Thought will occur both individually and collectively – something perhaps not fully anticipated by the founders of AI back in the 50’s and 60’s. This has critical implications for the applications of Artificial Thought; in the case of the rover, for example, the implication might be that it distributes some of its higher functions elsewhere. If maintaining higher Thought from Earth presents difficulties due to the many minutes of signal travel time for instructions, then perhaps there might be cognitive capability in another part of the lander or on an orbiting satellite. The key idea here, though, is that the rover doesn’t need to carry all of the cognitive capability itself, which would become even more important if, for some reason, the rover were instead some type of Martian UAV that needed to operate for long periods of time with minimal fuel. The bottom line is that Collective Thought is still Thought, regardless of how it might be distributed or otherwise combined. This is one area where our current view of individualistic, human-mimicked Thought really diverges from where we’re headed.
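To illustrate the distribution idea, here is a rough sketch of how that division of labor might be arranged. Again this is illustrative Python; the Orbiter, the link latency and the planning routine are hypothetical stand-ins rather than anything drawn from a real mission design. The rover keeps its Tier 1 reflexes local and defers slower, higher-level planning to a remote node only when the link permits.

```python
# Tier 2 - Collective / distributed Thought: reflexes stay on the rover,
# heavier cognition can live elsewhere (orbiter, lander, Earth).
# The classes and latency figures are hypothetical, for illustration only.

class Orbiter:
    """A remote node that can afford slower, more expensive planning."""
    def plan_route(self, terrain_map):
        # Placeholder for an expensive path-planning computation
        return ["waypoint_a", "waypoint_b", "waypoint_c"]

class Rover:
    def __init__(self, orbiter=None, link_latency_s=0.5):
        self.orbiter = orbiter
        self.link_latency_s = link_latency_s

    def react(self, obstacle_ahead):
        # Tier 1 reflex: always handled locally, never deferred
        return "stop" if obstacle_ahead else "forward"

    def next_route(self, terrain_map):
        # Tier 2 deliberation: use the remote planner if the link is usable,
        # otherwise fall back to a simple local behavior
        if self.orbiter and self.link_latency_s < 60:
            return self.orbiter.plan_route(terrain_map)
        return ["hold_position"]

rover = Rover(orbiter=Orbiter(), link_latency_s=0.5)
print(rover.react(obstacle_ahead=True))   # -> stop (local reflex)
print(rover.next_route(terrain_map={}))   # -> route planned by the orbiter
```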
With Tier 2 capability, the question begins to arise: if we combined all of these functions and integrated them somehow, would we in fact achieve a level of Intelligence? For the sake of argument, let’s say yes, it would. If we combine all of the capabilities that fit within Tier 1 and Tier 2 Thought, we might say we have achieved a “level 0” order of intelligence. I don’t think Tier 1 by itself would justify that assignment, yet when we look at what Tier 1 represents, it does seem to mimic much of what might be required for a lower order of intelligent life to survive.
With Tier 3, things get more interesting. This is where Watson and some of the other more ambitious AI projects have been focused; success has been limited, but progress has been made. The distinction between reactive and proactive, between simple and complex, is a big leap, and one that not all Artificial Thought has to make. It’s important to keep in mind here that the Tiers are not actually separate from one another; rather, the higher tiers build upon the lower. This model is evolutionary on several levels, both in terms of building capabilities up and in our ability to mimic, metaphorically, the progression of Thought from its beginnings to somewhere at least close to how we view it. Beyond those considerations, it also represents a real-time integration architecture, with lower-level information feeding higher-level capabilities. If enough integration (across Tier 3 capabilities) occurs, then we might say that we’ve reached a “level 1” intelligence.
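As a rough way of picturing that layering, the sketch below (hypothetical Python, with each tier function standing in for whatever real capabilities might occupy that tier) shows lower-tier output feeding the tier above it, so that “integration” amounts to composing the stages in sequence.

```python
# Layered integration: each tier consumes the output of the one below it.
# The tier functions are hypothetical placeholders for real capabilities.

def tier1_awareness(raw_sensor_data):
    # Turn raw readings into observations about the environment
    return {"obstacle_ahead": raw_sensor_data["range_m"] < 2.0}

def tier2_learning(observations, memory):
    # Accumulate experience and produce a reactive assessment
    memory.append(observations)
    return {"obstacle_rate": sum(o["obstacle_ahead"] for o in memory) / len(memory)}

def tier3_reasoning(assessment):
    # Proactive step: choose a goal rather than just a reaction
    return "replan_route" if assessment["obstacle_rate"] > 0.3 else "continue_mission"

memory = []
for reading in ({"range_m": 4.0}, {"range_m": 5.0}, {"range_m": 1.1}):
    decision = tier3_reasoning(tier2_learning(tier1_awareness(reading), memory))
    print(decision)  # -> continue_mission, continue_mission, replan_route
```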
Of course, I haven’t defined what level 0 or 1 orders of intelligence represent, but the taxonomy might look something like this:
- Level 0 – An order of intelligence mimicking primitive life
- Level 1 – An order of intelligence mimicking intermediate forms of life, but not humans
- Level 2 – An order of intelligence that truly mimics human intelligence
- Level 3 – An order of intelligence beyond human intelligence
Each of these orders or levels of Intelligence involves a multitude of complex Thinking behaviors (I’ve abstracted the view to a great extent for this dialog). The framework I’ve outlined above isn’t focused on level 2 or 3 Intelligence; we can leave that to science fiction for now. But the next ten years could see some remarkable breakthroughs at the lower levels. “It thinks, therefore it works” might be a good corollary to Descartes’ original premise.
Copyright 2016, Stephen Lahanas