
Monday, January 9, 2017

Politics will Never Be The Same: Revelations & Mysteries from the Intelligence Report on Russian Hacking

A declassified version of the combined Intelligence Report on Russian hacking of the 2016 election (Background to “Assessing Russian Activities and Intentions in Recent US Elections”: The Analytic Process and Cyber Incident Attribution) was released on Friday, just after the classified version of the briefing was presented to President-elect Trump. This report appears to be the final deliverable associated with President Obama’s request, made several weeks after the election, to consolidate findings related to Russian hacking during the election. It follows the release about a week ago of a more detailed report on the larger hacking campaign, the Grizzly Steppe analysis. I reviewed the Grizzly Steppe findings last week and had hoped there would be a follow-up report; thankfully we’ve finally gotten it.
First, I’d like to commend the Obama administration and the Intelligence Community for doing what is, overall, a fairly decent job of tackling a complex and somewhat explosive issue. Perhaps the most immediate result of Friday’s release was the distinct change in tone from the President-elect on the subject: while he is still downplaying the significance of the Russian interference with the 2016 election process, he is no longer denying that it occurred.
The final report itself includes a number of interesting revelations, but it also still contains a few too many mysteries for my taste; I’ll cover both of these perspectives in the remainder of this post. It is also important to acknowledge that more tangible evidence must exist within the classified version of the report, though whether any of those elements may be declassified later remains to be seen.
The Revelations Include:
  • An acknowledgement that both major political parties were attacked, but that stolen information was leaked only from the Democratic Party hacks.
  • That there seems to be some level of disagreement among the FBI, CIA, and NSA on conclusions regarding the intent of the attacks, although there is a general consensus.
  • The Intelligence Community has taken the time to give at least a general sense of how its analytic process works, including the all-important issue of how attack attribution is assigned or determined.
  • There was an acknowledgement that various election-related organizations were compromised, including boards of election, although the report emphasizes that no vote-tallying machines were involved. I’ll return to this point a bit later.
  • There is a strong focus on the multi-faceted and coordinated nature of the activities, which extended beyond hacking to include manipulation of public perception through 21st century variations of propaganda techniques utilizing a variety of technologies, including Social Media and cable networks such as RT (Russia Today).
  • The entire campaign was personally mandated and approved by Vladimir Putin, with the expectation on his part that Trump would likely be more favorable to Russian policy objectives than Clinton. When it looked as though Clinton would win, the Russians launched a messaging campaign questioning whether the election had been rigged (they were also preparing a Twitter campaign, #DemocracyRIP, for the night of the election).
  • The Russian General Staff Main Intelligence Directorate (GRU) used a persona called ‘Guccifer 2.0’ to leak DNC emails onto the Internet.
  • RT (Russia Today Television) also actively coordinated with WikiLeaks on the release of Democratic Party emails.
  • The 2016 election activities represent a significant escalation of Russian intelligence activities against the US, reaching levels not seen since the Cold War and perhaps surpassing them. This new level of interference may be expected to become the new normal, especially given the success enjoyed in the US and other similar outcomes such as Brexit, as Russia seeks to destabilize Western liberal democracies.
The Mysteries That Remain
While there were some interesting revelations, many questions remain unanswered, including but not limited to the following:
  • There is still no clear explanation of how the US government is, or is not, safeguarding election-related systems and processes across the country. I discussed this at some length in my post on voting integrity, but the bottom line from the latest report is that there doesn’t seem to be any coherent national strategy for dealing with this. While the Intelligence Agencies acknowledge in the report that some voting organizations were compromised, they also explain that the voting systems themselves weren’t. That raises the question: how do they really know that? How many voting systems were audited either during or after the election? Nobody knows, and it wasn’t disclosed in the report.
  • Related to the above point, take a look at this excerpt from an article on the attempted recount in Michigan: “According to the newspaper, officials “couldn’t reconcile vote totals for 610 of 1,680 precincts” during last month’s countywide canvass of Election Day returns, adding that most are in Clinton stronghold Detroit, “where the number of ballots in precinct poll books did not match those of voting machine printout reports in 59 percent of precincts, 392 of 662.” Trump won Michigan by only 10,000 votes.
  • Friday’s Intelligence Report references the Department of Homeland Security in relation to the compromises or attacks on election boards, but doesn’t explain what part of DHS collected that information or who, if anyone, is actually responsible for safeguarding election systems. There is no discussion of the audit processes that may or may not be in place or what those audits are based upon.
  • There is no discussion as to what an appropriate response could or ought to be; in other words, it’s an analytic assessment without any accompanying policy recommendations. This, in my opinion, is a big gap in what should have been included in the final report. Declassifying this information and initiating a dialog about what happened is of course valuable, but what are we to make of it? How does the United States intend to respond to these types of events in the future if North Korea, Iran, Russia, China, or any other nation decides it wants to manipulate our political or economic processes to its advantage? We surely can’t expect to just document it; there needs to be a discussion of appropriate levels of response, a policy discussion, and this is needed primarily so we don’t make radical decisions in haste.
In the title of this post, I claimed that politics will never be the same after the 2016 election, and I firmly believe that. I think this election has changed how politics will operate for decades to come, both here and abroad. I also don’t think the US has fully assimilated the impact of what has really happened yet. There will likely be countless papers, books, and courses developed to explore the subject in greater depth over the coming years, but I also fear that the US has somehow lost the initiative in the midst of all of this. The idea that other nations or international actors have used technologies we developed against us in this fashion is quite disturbing; all the more so given that this seems to be just the start of an emerging trend.
Here are the other election-related posts I’ve written since the election:
  1. How Technology Defined the 2016 Election
  2. Technology & the 2016 Election Part 2: Voting Integrity
  3. Technology & Election 2016 part 3 – The Failure of Data Science?
  4. Technology & The 2016 Election part 4: A New Age for Political Campaigning
  5. Technology & The 2016 Election part 5: Voter Beware
  6. The 5 Principles of Cyber Warfare
  7. What We Just Learned about Grizzly Steppe
  8. Cyber Security Predictions for 2017
Copyright 2017, Stephen Lahanas

Wednesday, January 4, 2017

Redefining Knowledge Management

Earlier this week I asked the question: “Whatever happened to Knowledge Management?” The bottom-line answer is that this once-emerging field within IT has seemingly disappeared perhaps because it was never properly defined in the first place. This follow-up post is dedicated to examining the philosophical question more closely and trying to draw clearer distinctions between Information, Data, and Knowledge Management. This examination will in turn support another follow-up post in which I’ll attempt to characterize what a more succinct Knowledge Management framework might look like.



Before I start, I’ll try to give some rationale as to why this ought to matter. Just because Knowledge Management wasn’t properly defined before and has largely dropped off the map within IT doesn’t mean that it isn’t needed. There most definitely needs to be something that transcends the practice of Data Management, but before we go too deep here, let’s take another stab at defining some of the terms we’re referring to…

Information Management – This has been a catch-all term for quite some time now. Information Management could be viewed as synonymous with Information Technology and thus represents an aggregate of many unrelated IT technologies. There is no unifying expectation behind this term, nor is it necessarily even directed at data per se. The best way to describe Information Management might be as the top level of the taxonomy: the superset into which all other IT-related practices should fit.

Data Management (DM) – As I alluded to in the previous post, Data Management is much more focused on the operational aspects of a variety of directly and indirectly related technologies. The DMBOK (the Data Management Body of Knowledge) has taken pains to classify this more precisely through a fairly wide-ranging taxonomy, covering everything from Data Architecture & Governance to Metadata Management. Perhaps the one common denominator across all of these areas is the expectation that the capability is either resident upon or associated with systems that house and manage data, both structured and unstructured. The DMBOK also addresses the lifecycle aspect of those capabilities, which is why it includes Data Architecture, Development, and Governance.

If we look at these two definitions again, despite the likelihood of some overlap with Business Intelligence or Analytics, we might consider them bookends enclosing Knowledge Management, with Information Management as the top-level umbrella for all IT capability and Data Management as the foundation from which KM capability is derived.

So that brings us back to the question: what makes Knowledge Management different from the lower-level Data Management capability categories? This of course requires us to consider the difference between information or data and knowledge. An analogy might help illustrate the question. One of my favorite history reads is Shelby Foote’s history of the Civil War. When writing this three-volume masterpiece, Foote spent years examining the complete archives of official records, North and South: hundreds or perhaps thousands of volumes. He synthesized all of that into a somewhat concise timeline (concise being a relative term, in that each of the volumes is around 700 pages long) and was able both to educate us and to tell an extremely complex story in what appears to be a very logical fashion. In this analogy, the historical archives represent data, collected from various sources during the war, and the resulting book used an analytic process to transform a large quantity of source information into a more compact set of Civil War knowledge. By that analogy, I’m suggesting that KM is both a process and a product, one that rests upon a data foundation. Knowledge Management as a field or practice must necessarily be concerned with both the transformative creation of knowledge (from source data or information) and its effective dissemination, application, or exploitation.

Let’s now examine the key characteristics that Knowledge Management ought to consist of:
1. The ability to aggregate source data and add value to it. As mentioned previously, this is a very near approximation to the practices of BI, Analytics, and perhaps even Data Science. An open question here might be whether we’d consider a Data Warehouse or Data Lake a knowledge source or a data source; I tend to think of them as the latter.

2. The ability to extend individual analysis to collective analysis and assimilation. This is an important consideration, stemming from the observation that knowledge in general has always been considered more of a collective than an individual enterprise. While individuals may attain, create, or otherwise possess knowledge, knowledge is not bound by those individuals; indeed, if it is kept secret, to some extent it no longer qualifies as knowledge in the traditional sense of the term.

3. The ability to collect information from many sources and not just aggregate it, but integrate it in meaningful ways. Data Integration is also included in the DMBOK, but it may be construed to have a more operational focus in that context. For example, the focus in data management integration may be more about interoperability, whereas with KM the focus is directed more toward answering complex questions (see the sketch following this list).

4. The ability to derive new meaning from existing source information. This can occur with Data Science and perhaps Analytics as well, and I tend to think that those practices fit better within the realm of KM than DM. Again, the distinction from DM here may be the expectations associated with the analysis – the idea being that each organization eventually develops its own unique perspective – both of itself and of its industry. That perspective is based on both experiential (a posteriori) and defined knowledge (a priori) synthesized from available sources and assimilated based on filters or needs unique to the organization.

5. The ability to harness Artificial Thought or Intelligence to add further value and perspective to source information. This is where Knowledge Management merges with other topics I’ve been discussing recently; it is also the area of greatest potential for KM. This combination necessarily implies that AI or AT belongs as part of KM, perhaps the most important part, as this is where we might expect the greatest value-add to occur. One sticky question this combination raises, though, is whether machine Thought or Intelligence can add enough value to turn data into Knowledge. My feeling is that the answer is a qualified yes: qualified because the expectations driving that transformation are still pretty much derived from humans.
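To make the integration distinction in point 3 a bit more concrete, here is a minimal, purely illustrative Python sketch; the data sources, field names, and the question being asked are all hypothetical examples rather than anything from a specific system. The DM-style function simply exposes the sources cleanly (interoperability), while the KM-style function integrates them to answer a directed question.

```python
# Hypothetical source data; in practice these would come from separate systems.
sales = [
    {"customer_id": 1, "amount": 1200.0},
    {"customer_id": 2, "amount": 300.0},
    {"customer_id": 1, "amount": 450.0},
]
support_tickets = [
    {"customer_id": 1, "severity": "high"},
    {"customer_id": 2, "severity": "low"},
]
customers = {1: "Acme Corp", 2: "Globex"}

# Data Management concern: expose the sources cleanly so other systems can use them.
def get_sources():
    return {"sales": sales, "support_tickets": support_tickets, "customers": customers}

# KM-style concern: integrate the sources to answer a directed question,
# e.g. "which high-revenue customers are also reporting high-severity issues?"
def at_risk_high_value_customers(min_revenue=1000.0):
    revenue = {}
    for row in sales:
        revenue[row["customer_id"]] = revenue.get(row["customer_id"], 0.0) + row["amount"]
    high_severity = {t["customer_id"] for t in support_tickets if t["severity"] == "high"}
    return [customers[cid] for cid, total in revenue.items()
            if total >= min_revenue and cid in high_severity]

print(at_risk_high_value_customers())  # -> ['Acme Corp']
```

The point of the sketch is only the difference in expectation: the first function moves data around, the second produces a small piece of directed knowledge from it.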

Back in the old days of the early 2000s, there was a flurry of debate over the relative value and application of OLAP versus OLTP within the Analytics realm. That debate has largely subsided, but it provided us with a partial preview of the differences between DM and KM in general. If we were to distill those differences into a single thought, it might be this: Knowledge Management extends us beyond ordinary operational concerns and begins to imply some level of organizational awareness. As such, it clearly builds upon the lower-tiered architectures to do more; just what can be done has hardly been tested yet. Also in the early 2000s, there were early efforts in the military realm to create Common Operating Pictures or Data Fusion Centers. These have since evolved somewhat and have become particularly important in the context of Cyber Security. Here too, Knowledge Management is a more fitting description of what we’re trying to achieve, in this case the synthesis of potentially billions of data sources to discover hidden patterns. Some might think this is actually a discussion of Big Data, but it really isn’t. Big Data belongs squarely within the realm of Data Management, as it is merely another data management platform with little if any expectation as to how that data might be transformed into something else (despite all the hype to the contrary).
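For readers who didn’t live through the OLAP/OLTP debate, here is a minimal illustration of the distinction using Python’s built-in sqlite3 module; the table, schema, and values are hypothetical. OLTP workloads revolve around many small transactional reads and writes, while OLAP workloads ask aggregate, analytical questions over the accumulated history, which is closer to the BI/Analytics territory that borders Knowledge Management.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")

# OLTP-style workload: many small, individual transactions (operational focus).
conn.execute("INSERT INTO orders (region, amount) VALUES (?, ?)", ("east", 120.0))
conn.execute("INSERT INTO orders (region, amount) VALUES (?, ?)", ("west", 75.5))
conn.execute("UPDATE orders SET amount = ? WHERE id = ?", (130.0, 1))
conn.commit()

# OLAP-style workload: aggregate analysis across the accumulated history.
for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM orders GROUP BY region"):
    print(region, total)
```

The structural difference between the two halves of this sketch is the same difference in expectation discussed above: operational record-keeping on one side, question-answering over the whole body of data on the other.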

We’ll wrap up this post in the series with an updated definition for Knowledge Management:

Knowledge Management (KM) is a field of practice within Information Management that builds upon a foundation provided by Data Management capabilities. KM adds value to source information in both a collective and subjective manner, helping to create unique organizational perspectives and insights through assimilation of source data into directed or targeted types of knowledge.

In my next post in this series, I will explore the architectural boundaries for the next generation of Knowledge Management capabilities.



Copyright 2017, Stephen Lahanas

Monday, January 2, 2017

Cyber Security Predictions for 2017

2016 was a big year in the annals of Cyber Security, and 2017 promises to eclipse it. While the drama of the election has largely subsided, the after-effects of the DNC attack and related Russian hacks are still building steam. President-elect Trump hinted this weekend that he has special information which he will share later this week; in all honesty, that may grow the controversy rather than silence it. The election hacking was by no means the whole Cyber Security story for 2016, but it did highlight that the stakes for Cyber Security are slowly but steadily escalating. So what can we expect for this year?
I’m going to make ten predictions for what may happen in 2017 in the field of Cyber Security. These prognostications are not made with any unusual knowledge but rather through examination of previous trends and logical extrapolation of where those trends are likely to lead us.
  • Prediction 1 – The situation relating to fake news will shortly lead to various types of Internet self-censorship. This may involve techniques such as badges for legitimate news outlets, which will in turn trigger innovation on the part of attackers attempting to mimic them.
  • Prediction 2 – Cyber-attacks will play a significant role in one or more world conflicts. We are getting close to the point where a Cyber attack alone might tip the balance in some of these conflicts.
  • Prediction 3 – President-elect Trump will eventually get on the same page with the US Intelligence Community in regards to Cyber Security, but it will take a number of months and several high profile incidents to turn him around.
  • Prediction 4 – The cost of Cyber Crime will reach an all-time high; 2017 will mark the first year that Cyber Crime takes out one or more major financial institutions (causing damage to operations or reputation so severe that it forces closure).
  • Prediction 5 – Two-factor authentication will be more or less forced into becoming the primary way of logging into most online services.
  • Prediction 6 – Related to the above, password management as we know it (or have known it) will begin changing drastically. Applications will be provided that assign better passwords, and password management apps (vaults) will become much more common. Passwords have always been the weakest link in security.
  • Prediction 7 – 2017 will be the worst year yet for the hacking of personal data. The techniques for obtaining such data have grown ever more effective, yet most organizations still don’t know what sensitive data they possess. It’s a formula for disaster.
  • Prediction 8 – 2017 will likely see a greater degree of integration between Cyber and traditional military and intelligence forces and not just in the US. One area of particular concern will be Cyber vulnerabilities within various traditional warfighting technologies.
  • Prediction 9 – While there will be continued discussion regarding the after-effects of the Russian election hacks, there will be little if any effort this year to safeguard American voting systems or processes.
  • Prediction 10 – 2017 will likely become the year of our first Cyber Demonstrations. So, what is a Cyber Demonstration? Essentially, it is a demonstration of power: some sort of disruption that is likely accompanied by a political message. We haven’t had too many of these outside the context of WikiLeaks or the election. However, I have a feeling that this type of activity may become more common relatively soon.
I don’t think any of these predictions are particularly surprising – but then again surprises are hard to predict, so we’ll have to check back at the end of the year and see what we missed…

Copyright 2017, Stephen Lahanas

Sunday, January 1, 2017

Whatever Happened to Knowledge Management?

Information Technology is a dynamic field, one often driven by buzzwords and fleeting trends. Sometimes these trends continue for decades; other times they fade rather quickly. One trend that seems to have experienced the latter fate is Knowledge Management (KM). I recall first hearing about it in the early 2000s, and at that time it seemed to encompass several classes of subordinate technologies, including but not limited to the following:
  • Document Management
  • Content Management
  • Search technology (various)
  • Business Intelligence (A.K.A. Decision Support or Analytics)
  • Metadata Management
  • Learning Management

It also tended to include more specialized tools such as knowledge bases or FAQ generators, and it potentially seemed to include more integrative, content-focused technologies such as Wikis. For a time, KM seemed poised to include a wide range of Semantic technologies as well. However, in roughly the last two years, the term Knowledge Management has seemingly dropped off the map. I was wondering to myself why this may have happened and whether anything in particular had replaced it.
To be sure, there are a couple of related trends that have dominated the IT landscape in the past two or three years, the most significant of these being Data Science & Big Data. However, neither of these seems to fulfill the role KM was being groomed for over the previous decade. In fact, in some ways these more recent trends have become much less specific in regard to their expectations or scope (which has actually become a problem for both of them). This situation may actually help to explain what happened to Knowledge Management: perhaps the original scope was too expansive? But was it just a scope issue?
I want to return to the core or implied premise associated with Knowledge Management: that there ought to be some sort of enterprise-wide ability to unify all of these knowledge-related functions, processes, and resources. The problem with this premise is related to scope, in that there isn’t one product or group of technologies associated with it, but rather a fairly large set of technologies, some of them not closely related at all other than in a philosophical sense; in other words, they could be construed as part of a larger knowledge ecosystem. So we seem to be missing an industry impetus, but we also seem to be lacking any sort of agreed-upon knowledge process or framework that would help tie all of these diverse technologies together. This latter problem takes us deep into the heart of the larger philosophical question which KM seemed to be begging, namely: what is the difference between information and knowledge? That isn’t an easy question to answer, and in some sense it parallels my recent discussion on Artificial Intelligence versus Artificial Thought. In fact, AI could even be considered part of KM, depending on how you look at it.
So back to the tough question: what makes data or information become knowledge? Is it dependent on adding value to the data or information through specific types of processes, or is it merely in the integration and analysis of such source data that the source transcends itself to become knowledge? Or is knowledge only something we can consider in a collective sense, with the sum total of all data assets representing knowledge potential of some sort? These types of questions may have been raised during the years that KM was actively discussed, but I think they were never properly answered.
You might have noticed that my initial list of related technologies seemed to leave out Data Management and Information Management. Either of those could potentially be considered part of a knowledge management framework; the reason I left them out is that for those fields there is a much better operational understanding of how such technologies work. In fact, one of these areas, Data Management, overlaps quite a bit with some of what I’ve attributed to Knowledge Management (one need only look at the DMBOK to see this illustrated). Thus Data Management as a trend has continued (for decades now), primarily concerned with the operational maintenance of a number of unrelated technologies without the implied necessity to integrate it all into something transcendent across the enterprise.
What if we did wish to settle the deeper question, though, regarding the differing expectations between operational management of source data and knowledge exploitation? It is now 2017; are we in a position to define an integrative knowledge framework, and if so, what would its philosophical foundation consist of? Moreover, would coming up with this type of framework help redeem the dimming trends of Big Data and Data Science? I think it’s worth trying to answer the question and also worth taking a shot at defining the missing knowledge framework. I will tackle both parts of this problem in two upcoming posts…
Copyright 2016, Stephen Lahanas

Saturday, December 31, 2016

What We Just Learned about Grizzly Steppe

The Obama administration announced yesterday that sanctions were being placed on Russia in retaliation for the 2016 election hacking scandal. Shortly after that announcement, a Joint Analysis Report (JAR) was released providing a description of the nature of the Cyber attacks. It's still not clear if this report (released to scribd.com) is the complete intelligence report that the President had requested some weeks back, or perhaps one of several. What is clear, however, is that the level of detail is perhaps more granular than expected, but the scope seems narrower than it could have been.
Architectural representation of the Election Hacks from FBI Report: JAR-16-20296

So what did we learn from the document? Here are a few highlights:
  • We have a relatively straightforward diagrammatic view of how the attacks occurred (I've placed an example of this in the post image)
  • We've been given a glimpse into the nature of the Russian Intelligence Service (RIS), but a limited one. Approximately two dozen names are listed as being associated with the RIS, but it's not clear whether all of these are indeed separate groups (and no explanation is given about any of them). There are some very Bond-like spy names in the group, like CrouchingYeti, Fancy Bear and Gray Cloud, but that in itself isn't very illuminating.
  • We are shown some detail regarding the identity of the exploit. Unfortunately, this is not provided in a context that might be well understood outside of the Intelligence Community or a small cadre of Cyber security experts. The exploit information is supposed to clinch the identification of the groups in question, and maybe it does; however, it certainly seems as though part of the story is missing.
  • Fully half of the document is dedicated to describing various Cyber risk factors and mitigating actions in some detail. While this is good information, it is terribly generic, and it seems as though it has been used to inflate the size of the report somewhat, perhaps at the expense of the main point for releasing it.
While I don't wish to sound too critical here, I think it might be worthwhile for the folks working on this analysis to consider creating another draft. First, I'd like to address why I think that's necessary, and then I'll delve into what ought to be revised or added in the next version of the report.
The reason why we need to get this right should be obvious, but I'll state it again anyway. The report represents the foundation both for the claims that the attack occurred and for the sanctions that will follow. This may or may not represent a form of Cyber-warfare (both the attack and the response; I've outlined that topic in more depth here). In any case, it is a serious matter, and the sanctions probably represent the most severe actions we've taken against Russia since the end of the Cold War. Thus the foundation needs to be as strong as possible. Obviously, there are national security issues at play with this topic; however, in some situations more information can be better than less. The information missing from the current version of the report includes the following:
  • Detail on the other organizations that were hit in the attack; there is an implication of a much wider attack, but no specifics.
  • An explanation of the context - the goals of the attack and how the stolen information was utilized. Also, there needs to be an explanation of the process of exploit identification for those who aren't already familiar with it.
  • A discussion of how the US can help safeguard election processes and systems. This is somewhat covered by the best-practice portion of the report, but that portion seems to say that all mitigation for thwarting future attacks is entirely up to each potential target, which isn't altogether satisfying. We should be having a stronger dialog on how critical processes can be protected by the groups we thought were there to perform that task. For example, who, if anyone, will take the lead on auditing voting systems in every state?
The current Grizzly Steppe report seems to have given us the bare minimum. We need more than that if we wish to learn from this experience and keep it from happening again. Let's give it another try...
Copyright 2016, Stephen Lahanas

Friday, December 23, 2016

A Framework for Evolutionary Artificial Thought

This past week, I was making the long commute between Dayton and Columbus, Ohio and trying to amuse myself the best I could – in this case by listening to a college course on the fundamentals of Particle Physics. One might not think there is an obvious connection between Particle Physics and Artificial Intelligence, but it turns out there is at least one. The connection, in my mind at least, was the framework used in Physics to help organize the field of subatomic particles – it’s known as the Standard Model of Particle Physics.
In the Physics world of the early 20th century, as in the field of Artificial Intelligence now, there was a tremendous amount of information obtained from various sources and very little ability to place it all into a unified context. We discovered particles, defined new mathematics, and reinvented philosophy through Relativity and Uncertainty, yet without a guiding framework it all must have seemed terribly chaotic and random to those working in the field. Out of that chaos a framework emerged, though: half invented, half discovered, one that helped focus the technology, philosophy, and application of the science in question. I thought to myself, this is exactly what we’re missing with Artificial Intelligence. But as I posited in my last post, the framework that’s needed might also require a philosophical adjustment, one that frames Intelligence in the context of a combination of evolutionary capabilities which might best be described as Artificial Thought. In that article, I tried to make the philosophical case for why something like that might make sense; in this post, I’m going to get a bit more specific and examine the more pragmatic aspects of a Framework for Artificial Thought.
Let’s start at the beginning – what does this or any such “Framework” buy us? In the case of The Standard Model of Particle Physics, the framework provided the following benefits:
  1. The ability to place a number of potentially divergent concepts and discoveries within a unified and coherent context
  2. The ability to explain the nature of something in a manner consistent with empirical data
  3. The ability to support a variety of predictions, for example the discovery of specific types of new particles, the most recent and famous of which is the Higgs Boson
For Artificial Thought, the benefits might be a little different, but perhaps not as much as one would think. The high-level value proposition associated with a framework for Artificial Thought might include the following benefits:
  1. The ability to align a diverse set of AI theories, techniques and technologies within a coherent, unified context
  2. The ability to define a clear evolutionary path within that context whereby component capabilities can be combined to achieve ever greater orders of Thought and eventually Intelligence
  3. The ability to better chart and predict success in achieving Thought or AI milestones
I suppose the biggest difference between Particle Physics and Artificial Thought is that we’re bypassing the need to discover the functions of natural intelligence at the biological level. In other words, there are no CERN-like labs available to discover Thought in progress the way we discover subatomic particles. This could make our efforts harder, but perhaps only if our goal were to recreate natural intelligence as opposed to generating capabilities which are logically similar even if not actually organic in nature. This brings us back to the central premise of the previous article: recreating the most complex and comprehensive capability is a hell of a tough goal, and we should instead worry mostly about intermediate steps rather than the end game (and, I might add, without getting lost in the growing tangle of hundreds of immediate, lower-level details, approaches, and opportunities).
Let’s take a look at what a Framework for Artificial Thought might look like…
The framework resembles an IT architecture because in a sense that’s what it is. Imagine a situation in a few years where we have a galaxy of AI-related capabilities: how might they work together and to what end? This view gives us a hint as to what that might look like. It does more than that, though; it also shows how we move from lower-level Thought to higher-level Thought, and it begins to illustrate the potential for orders of Artificial Intelligence through integration of Thought capabilities (both within and across Tiers). The Tiers themselves mimic to some extent the natural intelligence we’ve referred to (both human and otherwise) by illustrating how basic capabilities might evolve into something more.
Tier 1 is Awareness, and I think it’s safe to say this is the area where traditional AI has made the most progress thus far, which only stands to reason. As we discussed before, Awareness in this context is nothing at all like Self-Awareness. We can imagine a rover crawling along the rocky, barren landscape of Mars, avoiding obstacles by becoming aware of them through sensory apparatus, and that fits this tier just fine. Is the rover Intelligent? Not really, yet some of the rovers we’ve built or are building can potentially operate on their own without explicit direction or intervention from human operators. This is a good starting place…
Tier 2 might witness our rover becoming more sophisticated, perhaps learning from its environment and building upon its experiences, yet still basically reacting to its environment. As you might have noticed from the diagram, it’s clear that Artificial Thought will occur both individually and collectively, something perhaps not fully anticipated by the founders of AI back in the 50’s and 60’s. This has critical implications for the applications of Artificial Thought; for example, in the case of the rover, the implication might be that it distributes some of its higher function elsewhere. If maintaining higher Thought from Earth presents difficulties due to the many minutes of signal travel time for instructions, then perhaps there might be cognitive capability in another part of the lander or on an orbiting satellite. The key idea here is that the rover itself doesn’t need all of the cognitive capability, which would become even more important if for some reason the rover were instead some type of Martian UAV that needed to operate for long periods of time with minimal fuel. The bottom line, though, is that Collective Thought is still Thought, regardless of how it might be distributed or otherwise combined. This is one area where our current view of individualistic, human-mimicked Thought really diverges from where we’re headed.
With Tier 2 capability, the question begins to arise: if we combined all of these functions and integrated them somehow, would we in fact achieve a level of Intelligence? For the sake of argument, let’s say yes. If we combine all capabilities that fit within Tier 1 and Tier 2 Thought, we might say we have achieved a “level 0” order of intelligence. I don’t think Tier 1 by itself would justify that assignment, yet when we look at what Tier 1 represents, it does seem to mimic much of what might be required for a lower order of intelligent life to survive.
With Tier 3, things get more interesting. This is where Watson and some of the other more ambitious AI projects have been focused, with limited but real progress. The distinction between reactive and proactive, between simple and complex, is a big leap and one that not all Artificial Thought has to make. It’s important to keep in mind here also that the Tiers are not actually separate from one another; rather, the higher tiers build from the lower. This model is evolutionary on several levels, both in terms of building capabilities up and in our ability to mimic the progression of Thought metaphorically from its beginnings to somewhere at least close to how we view it. Beyond those considerations, it also represents a real-time integration architecture, with lower-level information feeding higher-level capabilities. If enough integration (across Tier 3 capabilities) occurs, then we might say that we’ve reached a “level 1” intelligence.
Of course, I haven’t defined what level 0 or 1 orders of intelligence represent, but the taxonomy might look something like this:
  • Level 0 – An order of intelligence mimicking primitive life
  • Level 1 – An order of intelligence mimicking intermediate forms of life, but not humans
  • Level 2 – An order of intelligence that truly mimics human intelligence
  • Level 3 – An order of intelligence beyond human intelligence
Each of these orders or levels of Intelligence involves a multitude of complex Thinking behaviors (I’ve abstracted the view to a great extent for this dialog). The framework I’ve outlined above isn’t focused on level 2 or 3 Intelligence; we can leave that to science fiction for now. But the next ten years could see some remarkable breakthroughs at the lower levels. “It thinks, therefore it works” might be a good corollary to Descartes’ original premise.
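As a purely illustrative thought experiment, here is a minimal Python sketch of how the tiered framework might be expressed as a software structure. The tier names follow the discussion above, while the capability names, the rover example, and the level-scoring rule are hypothetical placeholders of my own rather than anything defined by the framework itself.

```python
from dataclasses import dataclass, field
from enum import IntEnum

class Tier(IntEnum):
    AWARENESS = 1   # Tier 1: sensing and reacting to the environment
    LEARNING = 2    # Tier 2: building on experience, possibly distributed/collective
    REASONING = 3   # Tier 3: proactive, complex, goal-directed thought

@dataclass
class Capability:
    name: str
    tier: Tier

@dataclass
class ThoughtSystem:
    """A collection of Thought capabilities, possibly spread across several platforms."""
    capabilities: list = field(default_factory=list)

    def tiers_covered(self):
        return {c.tier for c in self.capabilities}

    def intelligence_level(self):
        """Hypothetical scoring rule mirroring the taxonomy above:
        Tier 1 + Tier 2 integration -> level 0; adding Tier 3 -> level 1."""
        covered = self.tiers_covered()
        if {Tier.AWARENESS, Tier.LEARNING, Tier.REASONING} <= covered:
            return 1
        if {Tier.AWARENESS, Tier.LEARNING} <= covered:
            return 0
        return None  # not yet an "order of intelligence"

# Example: a rover with onboard awareness, plus learning and reasoning hosted elsewhere,
# illustrating the point that Collective Thought can be distributed across platforms.
rover = ThoughtSystem([
    Capability("obstacle avoidance", Tier.AWARENESS),
    Capability("terrain learning (orbiter-hosted)", Tier.LEARNING),
])
print(rover.intelligence_level())  # -> 0

rover.capabilities.append(Capability("mission planning (Earth-hosted)", Tier.REASONING))
print(rover.intelligence_level())  # -> 1
```

The sketch only captures the structural idea, that capabilities accumulate across tiers and that "levels" of intelligence emerge from their integration, not from any single component.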

Copyright 2016, Stephen Lahanas