Saturday, December 31, 2016

What We Just Learned about Grizzly Steppe

The Obama administration announced yesterday that sanctions were being placed on Russia in retaliation for the 2016 Election Hacking scandal. Shortly after that announcement, a Joint Analysis Report (JAR) was released providing a description of the nature of the Cyber attacks. It's still not clear if this report (released to scribd.com) is the complete intelligence report that the President had requested some weeks back or perhaps one of several. What is clear, however, is that the level of detail is perhaps more granular than expected, but the scope seems to be narrower than it could have been.
Architectural representation of the Election Hacks from FBI Report: JAR-16-20296

So what did we learn from the document? Here are a few highlights:
  • We have a relatively straightforward diagrammatic view of how the attacks occurred (I've placed an example of this in the post image)
  • We've been given a glimpse into the nature of the Russian Intelligence Service (RIS), but a limited one. Approximately two dozen names are listed as being associated with the RIS, but it's not clear if all of these are indeed separate groups (and no explanation is given about any of it). There are some very Bond-like spy names in the group, like CrouchingYeti, Fancy Bear and Gray Cloud, but that in itself isn't very illuminating.
  • We are shown some detail regarding the identity of the exploit. Unfortunately, this is not provided in a context that might be well understood outside of the Intelligence Community or a small cadre of Cyber security experts. The exploit information is supposed to clinch the identification of the groups in question, and maybe it does, but it certainly seems as though part of the story is missing.
  • Fully half of the document is dedicated to describing various Cyber risk factors and mitigating actions in some detail. While this is good information, it is terribly generic and it seems as though it has been used to inflate the size of the report somewhat - perhaps at the expense of the main point for releasing it (a small example of one such mitigating action appears after this list).
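On that mitigation theme, reports of this kind are typically accompanied by machine-readable indicators such as suspect IP addresses. Below is a minimal sketch - not taken from the report itself - of how a defender might screen a web server log against such a list; the file names and log format are hypothetical.

```python
import re
from pathlib import Path

# Hypothetical inputs: a text file of indicator IP addresses (one per line,
# as typically distributed alongside reports such as JAR-16-20296) and a
# web server access log to screen against them.
IOC_FILE = Path("grizzly_steppe_ips.txt")
LOG_FILE = Path("access.log")

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def load_iocs(path: Path) -> set[str]:
    """Load indicator IPs, skipping blank lines and comments."""
    return {
        line.strip()
        for line in path.read_text().splitlines()
        if line.strip() and not line.startswith("#")
    }

def scan_log(path: Path, iocs: set[str]) -> list[str]:
    """Return every log line containing an IP that appears in the indicator set."""
    hits = []
    for line in path.read_text().splitlines():
        if any(ip in iocs for ip in IP_PATTERN.findall(line)):
            hits.append(line)
    return hits

if __name__ == "__main__":
    matches = scan_log(LOG_FILE, load_iocs(IOC_FILE))
    print(f"{len(matches)} log entries matched published indicators")
```

Real deployments would feed this kind of matching into a SIEM rather than a one-off script, but the basic mechanic is the same.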
While I don't wish to sound too critical here, I think it might be worthwhile for the folks working on this analysis to consider creating another draft. First, I'd like to address why I think that's necessary and then I'll delve into what ought to be revised or added in the next version of the report.
The reason why we need to get this right should be obvious, but I'll state it again anyway. The report represents the foundation both for the claims that the attack occurred and for the sanctions that will follow. This may or may not represent a form of Cyber-warfare (both the attack and the response - I've outlined that topic in more depth here). In any case, it is a serious matter and the sanctions probably represent the most severe actions we've taken against Russia since the end of the Cold War. Thus the foundation needs to be as strong as possible. Obviously, there are national security issues at play with this topic; however, in some situations, more information can be better than less. The information missing from the current version of the report includes the following:
  • Detail on the other organizations which were hit in the attack - there is an implication of a much wider attack, but no specifics.
  • An explanation of the context - the goals of the attack and how the stolen information was utilized. Also, there needs to be an explanation of the process of exploit identification for those who aren't already familiar with it.
  • A discussion of how the US can help safeguard election processes and systems. This is somewhat covered by the best practice portion of the report, but that also seems to be saying that all mitigation for thwarting future attacks is entirely up to each potential target, which isn't altogether satisfying. We should be having a stronger dialog on how critical processes can be protected by the groups we thought were there to perform that task. For example, who, if anyone, will take the lead on auditing voting systems in every state?
The current Grizzly Steppe report seems to have given us the bare minimum. We need more than that if we wish to learn from this experience and keep it from happening again. Let's give it another try...
Copyright 2016, Stephen Lahanas

Friday, December 23, 2016

A Framework for Evolutionary Artificial Thought

This past week, I was making the long commute between Dayton and Columbus, Ohio and trying to amuse myself the best I could – in this case by listening to a college course on the fundamentals of Particle Physics. One might not think there is an obvious connection between Particle Physics and Artificial Intelligence, but it turns out there is at least one. The connection, in my mind at least, was the framework used in Physics to help organize the field of subatomic particles – it’s known as the Standard Model of Particle Physics.
In the Physics world in the early 20th century, as in the field of Artificial Intelligence now, there was a tremendous amount of information obtained from various sources and very little ability to place it all into a unified context. We discovered particles, defined new mathematics and reinvented philosophy through Relativity and Uncertainty, yet without a guiding framework it all must have seemed terribly chaotic and random to those working in the field. Out of that chaos a framework emerged, though; half invented, half discovered – one that helped to focus the technology, philosophy and application of the science in question. I thought to myself, this is exactly what we’re missing with Artificial Intelligence. But as I posited in my last post, the framework that’s needed might also require a philosophical adjustment – one that illustrates Intelligence in the context of a combination of evolutionary capabilities which might best be described as Artificial Thought. In that article, I tried to make the philosophical case that something like that might make sense; in this post, I’m going to get a bit more specific and examine the more pragmatic aspects of a Framework for Artificial Thought.
Let’s start at the beginning – what does this or any such “Framework” buy us? In the case of the Standard Model of Particle Physics, the framework provided the following benefits:
  1. The ability to place a number of potentially divergent concepts and discoveries within a unified and coherent context
  2. The ability to explain the nature of something in a manner consistent with empirical data
  3. The ability to support a variety of predictions, such as the discovery of specific types of new particles, the most recent and famous of those being the Higgs Boson
For Artificial Thought, the benefits might be a little different, but perhaps not as much as one would think. The high-level value proposition associated with a framework for Artificial Thought might include the following benefits:
  1. The ability to align a diverse set of AI theories, techniques and technologies within a coherent, unified context
  2. The ability to define a clear evolutionary path within that context whereby component capabilities can be combined to achieve ever greater orders of Thought and eventually Intelligence
  3. The ability to better chart and predict success in achieving Thought or AI milestones
I suppose the biggest difference between Particle Physics and Artificial Thought is that we’re bypassing the need to discover the functions of natural intelligence at the biological level. In other words, there are no CERN-like labs available to discover Thought in progress the way we discover subatomic particles. This could make our efforts harder to achieve, but perhaps only if our goal were to recreate natural intelligence as opposed to generating capabilities which are logically similar if not actually organic in nature. This brings us back to the central premise of the previous article: recreating the most complex and comprehensive capability is a hell of a tough goal, and we should instead worry mostly about intermediate steps rather than the end game (and, I might add, without getting lost in the growing tangle of hundreds of immediate, lower-level details, approaches and opportunities).
Let’s take a look at what a Framework for Artificial Thought might look like…
The framework resembles an IT architecture because in a sense that’s what it is. Imagine a situation in a few years where we have a galaxy of AI-related capabilities: how might they work together, and to what end? This view gives us a hint as to what that might look like. It does more than that, though: it also shows how we move from lower-level Thought to higher-level Thought, and it begins to illustrate the potential for orders of Artificial Intelligence through integration of Thought capabilities (both within and across Tiers). The Tiers themselves mimic to some extent the natural intelligence we’ve referred to (both human and otherwise) by illustrating how basic capabilities might evolve into something more.
Tier 1 is Awareness, and I think it’s safe to say this is the area where traditional AI has made the most progress thus far, which only stands to reason. As we discussed before, Awareness in this context is nothing at all like Self-Awareness. We can imagine a rover crawling along the rocky, barren landscape of Mars, avoiding obstacles by becoming aware of them through sensory apparatus, and that fits this tier just fine. Is the rover Intelligent? Not really, yet some of the rovers we’ve built or are building can potentially operate on their own without explicit direction or intervention from human operators. This is a good starting place…
Tier 2 might witness our rover becoming more sophisticated, perhaps learning from its environment and building upon its experiences yet still basically reacting to that environment. As you might have noticed from the diagram, it’s clear that Artificial Thought will occur both individually and collectively – something perhaps not fully anticipated by the founders of AI back in the 50’s and 60’s. This has critical implications for the applications of Artificial Thought; for example, in the case of the rover, the implication might be that it distributes some of its higher function elsewhere. If maintaining higher Thought from Earth presents difficulties due to the roughly 8-minute travel time for instructions, then perhaps there might be cognitive capability in another part of the lander or on an orbiting satellite. The key idea here, though, is that the rover doesn’t need to carry all of the cognitive capability itself, which would become even more important if for some reason the rover were instead some type of Martian UAV and needed to operate for long periods of time with minimal fuel. The bottom line, though, is that Collective Thought is still Thought, regardless of how it might be distributed or otherwise combined. This is one area where our current view of individualistic, human-mimicked Thought really diverges from where we’re headed.
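To make the distribution idea a bit more tangible, here is a toy sketch (the class names, delays and behaviors are all invented for illustration) of a rover that keeps its Tier 1 reactions local but borrows Tier 2-style planning from a remote node, degrading gracefully when the link is too slow:

```python
import random

class RemotePlanner:
    """Stand-in for higher-tier Thought hosted off the rover (lander, orbiter or Earth)."""
    def plan(self, terrain_map):
        # Placeholder for an expensive deliberative planning routine.
        return ["forward", "left", "forward"]

class Rover:
    """Toy split: reactive Tier 1 behavior stays local, Tier 2-style planning is remote."""
    def __init__(self, planner, max_link_delay_s=480.0):  # ~8 light-minutes, per the post
        self.planner = planner
        self.max_link_delay_s = max_link_delay_s

    def sense_obstacle(self) -> bool:
        return random.random() < 0.2  # placeholder sensor reading

    def step(self, terrain_map, link_delay_s):
        if self.sense_obstacle():
            return "stop"                             # Tier 1: immediate local reaction
        if link_delay_s <= self.max_link_delay_s:
            return self.planner.plan(terrain_map)[0]  # borrow remote cognition when feasible
        return "hold_position"                        # degrade gracefully when cut off

rover = Rover(RemotePlanner())
print(rover.step(terrain_map={}, link_delay_s=500.0))  # link too slow -> "hold_position"
```

The point isn’t the code itself but the split: reflexes stay on the vehicle, while deliberation can live wherever it is cheapest to host.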
With Tier 2 capability, the question begins to arise: if we combined all of these functions and integrated them somehow, would we in fact achieve a level of Intelligence? For the sake of argument, let’s say yes, that it would. If we combine all capabilities that fit with Tier 1 and Tier 2 Thought, we might say we have achieved a “level 0” order of intelligence. I don’t think Tier 1 by itself would justify that assignment, yet when we look at what Tier 1 represents, it does seem to mimic much of what might be required for a lower order of intelligent life to survive.
With Tier 3, things get more interesting. This is where Watson and some of the other more ambitious AI projects have been focused, with limited success, but still some progress has been made. The distinction between reactive and proactive, between simple and complex, is a big leap and one that not all Artificial Thought has to make. It’s important to keep in mind here also that the Tiers are not actually separate from one another; rather, the higher tiers build from the lower. This model is evolutionary on several levels, both in terms of building capabilities up and in terms of our ability to mimic the progression of Thought metaphorically from its beginnings to somewhere at least close to how we view it. Beyond those considerations, it also represents a real-time integration architecture – with lower-level information feeding higher-level capabilities. If enough integration (across Tier 3 capabilities) occurs, then we might say that we’ve reached a “level 1” intelligence.
Of course, I haven’t defined what level 0 or 1 orders of intelligence represent, but the taxonomy might look something like this:
  • Level 0 – An order of intelligence mimicking primitive life
  • Level 1 – An order of intelligence mimicking intermediate forms of life, but not humans
  • Level 2 – An order of intelligence that truly mimics human intelligence
  • Level 3 – An order of intelligence beyond human intelligence
Each of these orders or levels of Intelligence involves a multitude of complex Thinking behaviors (I’ve abstracted the view to a great extent for this dialog). The framework I’ve outlined above isn’t focused on level 2 or 3 Intelligence; we can leave that to science fiction for now. But the next ten years could see some remarkable breakthroughs at the lower levels. “It thinks, therefore it works,” might be a good corollary to Descartes’ original premise.
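Purely as an illustration of how the framework could be operationalized (the names and thresholds below are my own shorthand, not a standard), the tiers and levels might be encoded something like this:

```python
from enum import IntEnum

class Tier(IntEnum):
    AWARENESS = 1   # Tier 1: sensing and reacting to the environment
    ADAPTIVE = 2    # Tier 2: learning from experience, individually or collectively
    PROACTIVE = 3   # Tier 3: proactive, complex problem solving

class IntelligenceLevel(IntEnum):
    NONE = -1
    LEVEL_0 = 0     # mimics primitive life
    LEVEL_1 = 1     # mimics intermediate forms of life, but not humans
    LEVEL_2 = 2     # truly mimics human intelligence
    LEVEL_3 = 3     # beyond human intelligence

def assess(integrated_tiers: set[Tier]) -> IntelligenceLevel:
    """Map integrated Thought tiers to an order of intelligence (illustrative thresholds only)."""
    if {Tier.AWARENESS, Tier.ADAPTIVE, Tier.PROACTIVE} <= integrated_tiers:
        return IntelligenceLevel.LEVEL_1
    if {Tier.AWARENESS, Tier.ADAPTIVE} <= integrated_tiers:
        return IntelligenceLevel.LEVEL_0
    return IntelligenceLevel.NONE

print(assess({Tier.AWARENESS, Tier.ADAPTIVE}))                  # LEVEL_0
print(assess({Tier.AWARENESS, Tier.ADAPTIVE, Tier.PROACTIVE}))  # LEVEL_1
```

Real assessments would obviously be far messier, but even a crude mapping like this makes the “combine tiers to climb levels” idea testable.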

Copyright 2016, Stephen Lahanas

Thursday, December 22, 2016

How Artificial Thought Can Save AI

Over the past several decades, I’ve seen an endless stream of predictions and articles regarding Artificial Intelligence, and it occurred to me recently that we may have missed an important point relating to this topic. One reason the expectations and the reality of AI have diverged so greatly may be our obsession with the notion that in creating it we ought to somehow be mimicking ourselves through some sort of human intelligence, without perhaps truly understanding what that represents. This seems to be simultaneously our greatest goal and our worst nightmare relating to AI. But then I got to wondering: what’s the difference between Artificial Intelligence and Artificial Thought, and if we viewed the question from the latter perspective, are we in fact actually making some real progress?



Before we can dive into that question it is worthwhile to try to define what we mean by Intelligence and Thought. Here’s a good definition of Intelligence (signed by 52 scientists in the field):

A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on," "making sense" of things, or "figuring out" what to do.

I like this definition, as opposed to some others I’ve seen, because I think it carries within it the larger scope that we tend to include in the notion of what Intelligence is or what an intelligent being / entity has to be able to accomplish. This type of definition serves us, for example, when considering what extraterrestrial intelligence might be. Thought, at first glance, might be considered a subset of Intelligence – it is the act of demonstrating one’s Intelligence per se but also the product of that demonstration. The definitions of Thought are somewhat less concise and often seem a bit recursive, though – take this one for example: “a single act or product of thinking; idea or notion.” Thought can be both a verb and a noun. Perhaps the reason that it is so difficult to nail down the definition of Thought is because we do tend to view it as a subset of the superset Intelligence, and describing a component of that process or capability without fully explaining or understanding it is challenging. One definition that I think fits our topic a little better comes from the Merriam-Webster dictionary: “reasoning power or a developed intention or plan.” This definition of Thought doesn’t try to explain it as much as it tends to highlight how or what it represents. Interestingly, this more compact definition also closely mirrors some of the first definitions of Artificial Intelligence, but we’ll return to that in a minute.

So, at the highest level, “Intelligence” might be considered a higher reasoning power, one that also tends to imply self-awareness, continuity of thought and a sort of assimilation of knowledge into a Self as time passes. Intelligence may be more than that as well, in that there is an integrative aspect to it that often isn’t included in the definitions – sometimes we view that integrative aspect as Self, but not always. If we return to the Ontology of the subject, we might be able to state that you can’t have Intelligence without the thought process or individual thoughts, but perhaps you can have thoughts without Intelligence per se. In other words, Intelligence, at least on the face of it, seems to represent a higher order than Thought, although as we know from the real world, there are also various orders or levels of Intelligence too.

This is more of a philosophical question than a technical one, but let me follow it a bit further. Let’s say that there can be Thought without Intelligence, and that the key difference between the two in the context of Human Intelligence (which seems to be what many have tried to emulate) may be self-awareness and the ability to integrate reality within a unique perspective and context. At its most fundamental, Thought can be disconnected from other Thought as well as from any experience or capability resembling self-awareness and / or complex integrative interpretation. By this definition, there are likely a wide variety of animals that have mental activity that might be described as Thought, but certainly not Intelligence in the sense we tend to attribute to humans. There can be lower levels of Intelligence, but there the difference from Thought has less to do with self and more to do with integrative interpretation. If we view any organism as a complex system or system of systems, then some level of integrative coordination is always occurring. This coordination is often automatic, even for humans, but sometimes it is deliberate. Deliberate integration may be classified as Thought, but it doesn’t necessarily require Self-Awareness.

All of this begs the question, what is Self-Awareness and why is it so important in the distinction between Thought and Intelligence?

Self-Awareness implies an understanding or expectation of identity. There can, of course, be Awareness without any indication that a Self exists. Take away Self from the equation of Intelligence and what you have left are a lot of the same capabilities: Learning (to a point), Memory (from an objective rather than a subjective context), and Thought, which can accomplish many of the same goals that intelligent Thought can, but not all of them. Take Self away and you can also potentially discard the requirement for complex integration (the level and type of integration can become much more selective). We might refer to this Selfless state as “Targeted Thought,” and this is a little more interesting in that it represents areas within which Thought can be developed, specialized and reinforced without higher-level expectations. The difference between that and Thought in the traditional sense (as a subset of Intelligence) is that there are likely to be clear boundaries that constrain the operation of Targeted Thought. A Targeted Thought “Boundary,” for example, might operate solely within the context of airline routes and all of the processes directly associated with flight routing. Within that boundary, reasoning power based on planning and guided toward specific intentions could take place to help solve questions of efficiency or profit. This type of Thought can still be considered “novel” thought as long as it isn’t fully determined in advance, but it is unlikely that something artificially constrained to a single purpose ought to be considered intelligent in the way we tend to view Intelligence.
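As a toy illustration of Thought operating entirely inside such a boundary (the airports, profit figures and brute-force approach below are invented purely for the example), a routing planner could search for the most profitable rotation without possessing any capability outside that boundary:

```python
from itertools import permutations

# Invented numbers: estimated profit (in $K) for flying each directed leg.
LEG_PROFIT = {
    ("ORD", "DEN"): 40, ("DEN", "ORD"): 35,
    ("ORD", "ATL"): 55, ("ATL", "ORD"): 50,
    ("DEN", "ATL"): 30, ("ATL", "DEN"): 25,
}

def best_rotation(hub, stops):
    """Try every ordering of the intermediate stops and keep the most profitable
    rotation that starts and ends at the hub (brute force is fine at this toy scale)."""
    best_route, best_profit = None, float("-inf")
    for order in permutations(stops):
        route = [hub, *order, hub]
        profit = sum(LEG_PROFIT.get(leg, 0) for leg in zip(route, route[1:]))
        if profit > best_profit:
            best_route, best_profit = route, profit
    return best_route, best_profit

print(best_rotation("ORD", ["DEN", "ATL"]))  # (['ORD', 'DEN', 'ATL', 'ORD'], 120)
```

Nothing in that search requires self-awareness or broader context; it is reasoning toward an intention, but only within the fence it was given.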

So, Artificial Thought in the abstract sense might be considered as the ability to derive novel outputs from similar inputs based on real-world situations and unique (yet mostly static) rulesets and perhaps various Targeted boundaries. This definition provides a framework in which Artificial cognition or Thought might obtain near-term success – a much narrower view to be sure and one already potentially aligned with nearly every practical AI effort yet undertaken. This definition is perhaps not too far off from what the initial definitions for Artificial Intelligence were, yet the types of predictions that we can make about what Artificial Thought can or can’t do will likely become much better defined within this narrower confine.

Let’s go from the abstract to the real world, Nature. In Nature, lower-level organisms likely have many potential applications for Thought, but they don’t build that type of Thought around language or complex symbols; rather, it arises through some sort of connection to various sensory capabilities and stimuli. It wouldn’t be expected for a squirrel to memorize a complex path between several dozen trees bearing acorns, yet in order for the squirrel to succeed and survive through the Winter, he must have a well-defined ruleset informed by recent and current experience that facilitates such navigation. The squirrel’s journey is a problem-solving exercise, one that he may or may not be able to learn from or remember but one that he has to be able to repeat successfully under dynamic conditions. The squirrel might be viewed as a system which demonstrates a limited level of awareness and employs novel thought in an integrative manner, so it might be considered a lower-level intelligence (when all of those capabilities are combined).

My point with the squirrel analogy is this: if we were to attempt to create and program an Intelligent squirrel (using the more expansive scope of the definition of Intelligence) to think through all of this, we might be missing the point or conducting a certain level of overkill. We could instead take any portion of the capability the squirrel possesses and use that to solve various types of problems or perform tasks. In other words, in deconstructing even a lower order of Intelligence, we can extract some Artificial Thinking capability that might prove useful – capability that likely far outstrips what we’re currently able to do.

And even if we were to view all those capabilities combined in the case of the squirrel’s survival, he simply doesn’t require a general-purpose Intelligence the way we’ve been defining it in AI, but he does require a certain level or type of Thought (novel action resulting from dynamic inputs). This is not to downplay the complexity of the squirrel’s mission, which is in fact fairly daunting. Any successful squirrel must regularly evade predators, find food, find shelter and navigate a dynamic environment around the clock. The navigation problem alone is challenging enough, and we use such problems all the time to test the efficacy of AI programs. However, while the squirrel needs to be able to jump from one branch to another without falling, it doesn’t require a complex understanding of Physics to do that successfully. A squirrel utilizes sensory information to determine speed, distance and other factors and makes a decision as to whether he should or shouldn’t jump. If we can achieve any sort of novel decision making, even in areas much less complex than the squirrel experiences, we will have made serious progress.
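A deliberately crude sketch of that kind of targeted, hardwired decision (the sensor names, fusion rule and thresholds are all invented): fuse a few sensory estimates, apply a fixed ruleset, decide.

```python
from dataclasses import dataclass

@dataclass
class SensoryEstimate:
    gap_m: float          # estimated distance to the target branch (meters)
    branch_sway_m: float  # how much the target branch is moving
    wind_mps: float       # headwind estimate (meters per second)

# Hardwired ruleset: no learning, no self-awareness, just a fixed decision boundary.
MAX_GAP_M = 2.5
MAX_SWAY_M = 0.3

def should_jump(est: SensoryEstimate) -> bool:
    """Fuse a few sensory estimates and apply the fixed ruleset to reach a decision."""
    effective_gap = est.gap_m + 0.1 * est.wind_mps  # crude fusion: headwind lengthens the jump
    return effective_gap <= MAX_GAP_M and est.branch_sway_m <= MAX_SWAY_M

print(should_jump(SensoryEstimate(gap_m=2.0, branch_sway_m=0.1, wind_mps=3.0)))  # True
print(should_jump(SensoryEstimate(gap_m=2.4, branch_sway_m=0.1, wind_mps=4.0)))  # False
```

There is no learning and no Self here, yet the output is still a decision driven by dynamic inputs rather than a pre-scripted branch.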

Let’s return to the concept of Thought again for a moment. What we’ve just described as Artificial Thought in essence represents a combination of Data Fusion (sensory data) and logical problem solving. The rulesets can be hardwired directly into the thinking machine, without it ever having to learn or improve upon them, although some limited form of learning may be a possibility. The requirement for natural or machine learning is not an absolute necessity for Artificial Thought, whereas it might be for AI. Nature, for example, endows its various creatures with a minimal number of rulesets – just what’s needed and not much more – and only very basic learning abilities. This primitive level of Thought and corresponding lack of self-awareness is perfectly acceptable for a multitude of tasks (and keep in mind that a lack of self-awareness does not imply lack of awareness). This scenario, or even pieces of it, still more or less surpasses what Artificial Intelligence can achieve today, yet it represents a much more realistic target if we reconsider how we might go about achieving it. If we were to combine a number of diverse Thought processes, we might be said to be building a lower order of Intelligence, but not yet perhaps the general-purpose, human-inspired intelligence most associated with AI. And that is perfectly ok, as it still represents real progress.

There’s another consideration and area of confusion as well when it comes to the struggle to create Artificial Intelligence; it has to do with the obsession with creating architectures inspired by what we believe to be the structures underlying human intelligence (e.g. the human brain). Neural networks as a metaphor are perhaps the best example of this, but not the only one. Sometimes I think this is like designing a mission to Mars based on an understanding of how a bottle rocket works; while there are bound to be some similarities, our understanding of human physiology as it relates to Intelligence is still relatively primitive. More important, perhaps, is the realization that there is a massive set of applications available for machines that think but aren’t necessarily intelligent. This means that even if we did understand how to recreate human or even a general intelligence, the overhead for doing so might not really be necessary, or at least not yet. There’s a lot we could do using mere Thought in a more selective sense.

And why is selective or Targeted Thought of any value? Well, if we look at IT, for example, and the amount of effort directed towards explicitly defining behaviors in code, the value should become readily apparent – Thought – even targeted and selective Thought – could make the operation of any type of machine or system infinitely more efficient. And just because the machines in question cannot appreciate the Thoughts they’re having doesn’t mean they can’t build new behaviors or learn from an initial set of foundational rules (either individually or collectively). We don’t have to worry about recreating nature per se; we simply have to keep in mind the pragmatic motivations behind the value proposition that nature has illustrated so convincingly to us. If we do that, and work towards simpler goals in an evolutionary fashion, we can grow Artificial Thought into a powerful part of most industries. This approach also builds upon areas where success has already occurred and has the potential to accelerate those successes, but it also tempers expectations in regard to what can or even what should be done. Artificial Thought (as opposed to Artificial Intelligence) can become much more focused and specialized in terms of its architectural objectives.

It’s time to come full-circle and consider again why there is such confusion and disappointment in the field of AI. It starts with the definition for Artificial Intelligence:

Artificial intelligence (AI) is the intelligence exhibited by machines. In computer science, an ideal "intelligent" machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at an arbitrary goal. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving." Here’s another definition:

A branch of computer science dealing with the simulation of intelligent behavior in computers; the capability of a machine to imitate intelligent human behavior.

When I refer to Thought versus Intelligence, I’m not implying that Thought can or should resemble human thought; rather, what I’m positing is that Thought is any cognitive ‘processing’ that isn’t explicitly programmed up front (i.e. it is semi-random in nature based upon inputs fed into it). We’re not talking about simulating human intelligence or behavior, and this type of thought is limited in terms of intuitive or contextual capability – it is certainly not creative Thought. The distinctions I’m making are important. In our industry (IT), we’re perhaps too enthusiastic in pronouncing this or that technology as being “Intelligent” in some regard. The reality is that none of them truly are, which is also why, after pursuing Artificial Intelligence for more than 50 years, few are willing to say anyone has actually achieved it. But that’s not to say we’ve achieved nothing – we have in fact built a wide variety of foundational technologies which are coming close to filling effective roles as Cognitive Aids. These technologies don’t simulate human thought, but rather expand or supplement it with Artificial Thought, which we can choose to apply intelligently or otherwise. The type of thinking that we should be focused on is discrete, focused and targeted.

I suppose this dialog risks the possibility of replacing one vague and hard-to-realize concept with another, but there does need to be a way to classify intermediary cognitive capability that goes beyond standard computing but falls short of human cognition. We don’t need to speculate too much on the quality of thought in various animals that clearly have the ability to think on some level, and we might extend the same courtesy to machines or systems. We don’t have to consider either as Intelligent to appreciate some value in what they do – and many successful organisms obviously don’t think at all – but the ones that do can become role models, so to speak, for the near-term goals associated with artificial cognition. Understanding or recreating biological neural processes isn’t necessary here either – the models we’re aiming for are pragmatic and logical, with animals simply providing a useful analogy if nothing more (which means we’d use them as models in a way somewhat different than we might for Robotics).

Descartes once said, “I think, therefore I am.” Someday, I’m sure, there will be a machine that becomes Intelligent in this context by becoming self-aware. Until then, it’s time to recognize that the path towards machine intelligence ought to follow a more rigorous evolution of less lofty goals. In my next article in this series, I’m going to provide a framework for classifying types of Artificial Thought and discuss how that can be aligned with architectural objectives as well as current or near-term technologies and applications.

Copyright 2016, Stephen Lahanas


Friday, December 16, 2016

The 5 Principles of Cyber Warfare

This week we got a partial glimpse into the types of action that the United States might consider to be acts of Cyber Warfare. I had written about this topic 2 weeks ago in regards to Voting Integrity in the face of Russian cyber attacks, but the story has escalated since then – culminating this week in direct accusations against the Russian government. The CIA and even President Obama have directly implicated Putin as being personally involved with the deliberate aim of swaying the 2016 election. In a year of big stories, this may have been the most far reaching in its implications. One of those implications, which has already been alluded to by many in Washington, is that this act may in fact represent a form of Cyber Warfare.

So, what exactly does Cyber Warfare mean and how does it differ – if at all – from Cyber Terrorism? That’s a tough question, one that I’ve not seen answered clearly before. Cyber Terrorism can come from nation states, such as China, North Korea, Iran and so forth, but one might expect that actions perpetrated by nation-states are less like terrorism per se and more like warfare. It is worthwhile at this point to step back into the not-too-distant past and bring up a similar question that also still applies here – what’s the difference between a “Cold” and a “Hot” war? The Cold War, as you might remember, involved a whole host of activities, from espionage to proxy wars. The Hot or real war between the superpowers never occurred, and it didn’t happen primarily because of the concept of Mutual Assured Destruction through use of our nuclear arsenals. In that case, the distinction between the terms also involved both the nature of the participants and the types of activities involved, which is similar to the current question.
None of this really helps though to clear up the confusion regarding what is or what isn’t Cyber Warfare. Here are a few reasons why:
  • Cyber Warfare can be both covert and overt – depending on the nature and intent of the attacks as well as on the determination as to whether they should be publicized in any way.
  • Cyber Warfare could be conducted by both Nation States and Terrorist organizations. The key distinction here though would be that we wouldn’t necessarily classify acts committed by smaller unknown groups or even individuals as Cyber Warfare. In those instances, the term Cyber Terrorism might be more applicable. However, it is also clear that in Cyber Warfare, as in traditional warfare, non-nation state organizations can and have conducted offensive operations.
  • Cyber Warfare can be a standalone or blended activity (e.g. coordinated with other traditional war-fighting activities). It’s conceivable that an entire conflict could be fought solely within the Cyber Domain. Cyber “Domain” here refers to the notion that Cyber represents one of several potential war-fighting domains such as Land, Sea, Air and Space. The US military formally acknowledged Cyber as such a domain with its creation of US Cyber Command several years ago. Of course, the reality of this statement is more complicated than it sounds, as Cyber also infiltrates all other warfare domains through the technology implied by it – it is a cross-cutting domain, and even if an attack were completely limited to Cyber actions, it is highly likely that physical capabilities (such war-fighting assets as ships, planes etc.) might be impacted.
  • Cyber Warfare can be directed at the Government or the Industrial Base or both. We can’t say, for example, that all attacks against businesses must be considered Terrorism per se – the intent is what’s important. If the intent of an attack is to cripple the country that’s been targeted, then a Cyber attack like that is no different in principle from the types of bombing raids we conducted against Germany in WW2 in order to cripple its industrial base. Today, though, the sectors that are perhaps more vulnerable might be Energy and Finance as opposed to Manufacturing. The results might be the same, though, if the goal is to hobble an economy or otherwise disrupt a nation state.
Now, we are ready to consider what the distinctions between Cyber Warfare and Cyber Terrorism really are. They would likely involve the following considerations:
  1. Cyber Warfare must necessarily consist of a sustained campaign of Cyber activities, designed to disrupt any mission-critical functions of an enemy at a national level. This doesn’t mean the activities have to occur in many places to effect a national impact; the campaign merely has to be designed to impact an opponent that way (and would also likely encompass more than one attack or incident).
  2. Cyber warfare must necessarily occur between substantial Cyber combatants. The nature of what constitutes a ‘substantial’ combatant lies in what resources they have to bring to bear in any given conflict. A well-established terrorist or rebel group may have the money and personnel to manage sustained attacks. However, smaller groups with few resources may only be able to sustain limited operations or a single attack. While there is always the possibility that an individual or a small group might be able to do harm at the national level, it is unlikely that they could sustain this over months or years, and it would be more akin to one-off terrorism than warfare in the context of sustained operations and likely outcomes.
  3. Cyber warfare, in general, involves more specific objectives in contrast to Terrorism which is often random in nature and may only be focused on making a statement rather than effecting some desired outcome.
By these definitions, I’d have to say that the Russian hacking of the DNC computers and related activities designed to impact the 2016 election falls under the category of Cyber Warfare rather than Terrorism. And this begs the question: why does all of this matter and why do we need more specific definitions? The bottom line is that if we don’t have a clear idea of what represents acts of Cyber warfare (either covert or overt), it’s highly likely we won’t be able to measure our response properly. Deciding how to respond is obviously a very big deal – as any such decisions could quickly escalate from the Cyber domain into all the others. Perhaps our government does have all of this worked out, and maybe it’s just too secret for any of us to know about. However, from our vantage point now it’s all a bit fuzzy. When the President says “we will retaliate in a manner and time of our own choosing” we basically don’t have a clue as to what that really means.
Rather than spend a lot of time speculating as to what our response might be, we can instead highlight some principles that may apply to any such situation. The following principles represent a potential framework that might be used to help deal with Cyber warfare as it continues to evolve.
  1. Proactive Awareness – In order to survive or win any Cyber conflict, the nation needs to know when in fact it is under attack. Some attacks are more obvious than others and, as the recent election shows, our response can be slow or too late to avoid impacts. Proactive Cyber Awareness is not about hacking into everyone’s cell phones, but rather it is about being able to identify unusual behavior in key systems and sectors across the country (or wherever our interests may be) – a toy sketch of that kind of baseline comparison follows this list. This means we need more selective and actionable intelligence than we seem to be getting now.
  2. Measured Response – This has been mentioned in the news, but as I noted, it’s not been explained by anyone (at least publicly) yet. For this to actually work, someone needs to define the measured responses up front rather than assessing each event as if it were the first time it had been considered. The landscape is fairly complicated, so this involves a lot of work and some automation. However, it shouldn’t be fully automatic any more than our current traditional war-fighting capabilities are – the human in the loop must always be present.
  3. Defined Escalation Approach – This is a process and it ought to be built atop the measured responses defined previously, the idea being that whenever or wherever Cyber activities begin crossing over to other areas there needs to be another level of safeguards built in to avoid any type of cascading escalation that could lead to something like a nuclear conflict.
  4. Maintain a Consistent Policy - In theory, our management of Cyber war shouldn’t be unique in each potential scenario – there ought to be a consistent expectation as to what will happen if enemies launch attacks against the US. This is a key point in the recent debate over Russia, as the situation has also become embroiled in US political differences, confusing the matter. While there will always need to be specific considerations given to certain situations, we should never give an indication to any opponent that Cyber attacks may be permitted without any response coming from the US. This would be an extremely dangerous precedent, and it helps to explain why the President and CIA made statements this week to the effect that election interference would not go unpunished. Better late than never; and like all of warfare, if we’re in the game we should build policy around what’s necessary to win - as opposed to settling for mere survival. There may be such a thing as a Cyber Maginot Line...
  5. Continuous Innovation – This may be the most important point, given the stark reality that it is easier and more cost effective to mount a Cyber attack than it is to defend against one. Despite the billions spent each year in the US across government and the private sectors, Cyber Security breaches and attacks have only become more prevalent and severe. More focus needs to be given to pushing the envelope on innovation to help reduce the current advantages enjoyed by our Cyber opponents. Today, much if not the majority of innovation has come from the attackers and we’ve been playing catch-up. As in every other realm of warfare, the side with the greatest technological advantage tends to win.
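For the Proactive Awareness principle above, here is a minimal, purely illustrative sketch (synthetic data, invented threshold) of what “identifying unusual behavior” can mean in practice: compare current activity against a rolling baseline and flag the outliers.

```python
from statistics import mean, stdev

def flag_anomalies(counts, window=7, z_threshold=3.0):
    """Flag indices where the count deviates sharply from its trailing-window baseline."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        sigma = sigma or 1.0  # avoid division by zero on a perfectly flat baseline
        if abs(counts[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Synthetic daily counts of, say, failed logins on a state election-office portal.
daily_failed_logins = [12, 15, 11, 14, 13, 12, 16, 14, 13, 95, 12, 14]
print(flag_anomalies(daily_failed_logins))  # [9] - the spike to 95
```

Real monitoring stacks do this with far richer features and models, but the principle is the same: you cannot mount a measured response to an attack you never noticed.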
It’s anyone’s guess as to whether the current Russian hacking crisis will boil over into something more, but one thing is certain, the age of Cyber Warfare has most definitely dawned.

Copyright 2016, Stephen Lahanas

Sunday, December 11, 2016

Technology & The 2016 Election part 5: Voter Beware

This is the last in my series of posts on how Technology influenced the 2016 election. As I write this, new articles keep streaming out in relation to Fake News, Russian hacking and transition appointments. While we may have hoped that things might cool down after the election had finished, it seems that presumption may have been premature. In the previous posts, I discussed specific technologies, trends and tactics, but what does it all mean for us, the voters? Few of us wish to be or are qualified to be pundits; we just want to do our civic duty with the least amount of drama and hassle possible – yet drama and hassle seem to be looming large in everyone's future. What, if anything, can we as voters do?
In some ways, such as how candidates are selected, things have changed little...
This is probably the toughest question of them all, and we know it's one that's not being answered as more and more people get turned off from the political process. I think there may be two tracks here in terms of the types of things we as voters can do to prepare for and ultimately (or hopefully) improve the process…
Track 1 – Voter Beware
This is a slight turn on the phrase "buyer beware," but the analogy seems a good one. In coming years, we ought to try to get more savvy in regards to how all politicians – from every side – are trying to manipulate us one way or the other. Politicians and ideas are products in many ways; they're advertised like them, we buy them to make ourselves better and when we're sick of them we discard them and trade up for newer models. Now, maybe I'm being a bit too cynical here, but there are some upsides to viewing this as Consumerism. Consumers, for one thing, often take more time investigating which brands to buy and actually comparison shop as opposed to sticking with the same thing out of loyalty year after year. Consumers even use unbiased guides to help them wade through the false claims of many products to get at the truth because ultimately, buying the right product can save you a lot of money.
The saving money part is actually an even better analogy to the political process given that the choices we make in elections probably affect our pocketbooks more than all of the comparison shopping we'll ever do. This year has given us some valid and interesting motivations for becoming a little more skeptical and judicious when we listen to the promises of politicians. This year we've seen how much pure propaganda has been used to confuse or manipulate us, and we've seen that raw sentiment (either negative or positive) might not always be the best basis for making a decision. Another way of putting that might be that if you're voting against something, chances are you really don't appreciate or understand what it is you're voting for. That's a bit like buying a Jeep instead of an F150 when you really wanted a Prius – it just doesn't make any sense, and if everyone does that it is quite likely that most people will end up unsatisfied to some degree. Here are a couple of other pragmatic suggestions:
  • Don’t get all of your news off Facebook
  • Look for unbiased sources and combine those with the ones you trust already and see what the comparisons really look like from different perspectives
  • Don’t let anyone think for you. Take the time to research yourself.
  • Don’t let anyone discourage you from voting. It’s not just a right or a privilege – it’s an obligation. If the majority of Americans hold their nose in disgust but choose not to participate, then they’ve lost the right to complain about the outcome (which of course they likely will).
  • Demand more of your candidate or party. Don’t let them sink further into ambiguous, abbreviated explanations. They will continue to try to win you over with Twitter but we all know that no one can hold an intelligent conversation using 140 characters or less and no one should treat the American people as if they’re too dumb to go beyond that.
Track 2 – What Could We Change?
If I had to pick one problem which looms above all others at least at the presidential level, it wouldn’t be the electoral college (although that’s not so great), rather it is the way we choose candidates in our party system. There are several huge problems here, including:
  • A closed system that prevents the vast majority of interested, qualified people from ever participating
  • The same system also prevents us from participating until after all the real choices are made – and then we're just stuck with it
  • A lock on the main two parties with almost no ability for 3rd parties to gain access. A perfect example of this is the uneven ballot-access rules that nearly every state imposes on candidates – 3rd party candidates typically need thousands more signatures, making it nearly impossible for them to even join a national race
I've highlighted these issues specifically because I think one of the most annoying aspects of our system seems to be the lack of actual choice we have as voters, which in turn leaves us with the feeling that it's the same cast of characters over and over again. And truthfully, that feeling is pretty accurate – for a nation of over 300 million people, we've got a tiny handful of what seem to be the same people running everything. Those of us who want change, and who see the same people doing the same things across administrations and across parties once they get elected, tend to get frustrated. That's perhaps one explanation for what happened this year.
I've got another suggestion though, one that could be fueled by technology but hasn't happened yet. Most states and districts allow for "write-in" candidates. If the traditional parties do not want to open up (and occasionally they do, but usually only for billionaires; otherwise it seems as though the Backroom approach is still going strong), then why not translate the practical experience gained from all sorts of Internet activism into a new form of political party – a Logical Party. "Logical" not because it makes any particular sense (although hopefully it would), but Logical in the sense that it would be "Virtual" and exist only as an online community. This Logical or Virtual party could then have an open, online selection process (and, to be fair, some places have started doing this; it's not entirely new) and it could field candidates both in the presidential race and in state races. Now that would be some real Populism.
And why not? It couldn’t be any worse than 2016, right?

Copyright 2016, Stephen Lahanas