
Wednesday, December 3, 2014

The Growing State of Cyber Insecurity

2014 will likely be marked as the year that the warnings of the past decade about Cyber threats were finally realized. Granted, not all of those warnings have come true yet, but this year will go down as the worst so far for costly Cyber breaches. That raises an important question: why are we becoming less secure as time passes, and why haven't the billions of dollars invested in Cyber Security worked?
This is a complex topic, so it will help to provide some high-level context. We'll start with some definitions:
  • Security Architecture - the practice of actively designing security into complex systems or environments.
  • Intrusion Detection - the backbone of most perimeter-focused security solutions; the focus is detection and prevention of breaches (a toy illustration of the signature matching at its core follows this list).
  • Threat / Vulnerability Management - the practice of tracking and adapting to specific threat vectors (attack signatures, exploits, etc.).
  • Security Controls - usually a standards-based system and process framework for assessing, securing and auditing security status.
  • Social Engineering - the practice of using non-technical persuasion or other techniques to gain information in order to access secure environments.
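To make the Intrusion Detection definition a little more concrete, here is a deliberately minimal sketch of signature-based matching - the simplest mechanism behind many perimeter tools. The signatures and payload are hypothetical; real products layer protocol decoding, stateful inspection and anomaly models on top of this idea.

```python
# Toy illustration of signature-based intrusion detection (hypothetical signatures).
SIGNATURES = {
    "sql_injection": "' OR 1=1",
    "path_traversal": "../../etc/passwd",
}

def match_signatures(payload: str):
    """Return the names of any known attack signatures found in a payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

print(match_signatures("GET /app?id=1' OR 1=1 --"))  # ['sql_injection']
```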
Now, let's ask the question again. Target, Chase, Sony Pictures - why is this year the year of massive security breaches? What went wrong?
There are five top reasons this is happening; I'll introduce them together and then explore each one in detail below.
  1. It is easier to Cyber Attack than to Cyber Defend and likely always will be.
  2. Cyber Security is not viewed from a holistic perspective in most organizations today - this includes many military organizations.
  3. There is no one magic bullet technique or technology that can secure an organization - yet we spend a lot of our time looking for one or thinking we have one.
  4. Just as we secure one aspect of the enterprise, three new ones pop up that aren't secure - and in many cases each of these offers attack routes back through the areas we thought were secure.
  5. Cyber Security represents an intersection between (human) behavior and information patterns. We haven't resolved either of these issues separately yet, and we definitely aren't close to dealing with how they intersect.
A representation of pattern identification in Cyber Attacks
So, who am I to discuss such matters? I'm not a recognized Cyber Security expert, that's true - I'm just an IT Architect. But I'm an Architect who has had the privilege of working on some fascinating Security-related projects over the years; my first ones were in 1998 and 1999. In 1998, I worked on a research project for the AF to help develop a next-generation Intrusion Detection system - we called it the Secure Adaptive Network Environment (SANE). As you can tell, it was perimeter and data center focused. The second project was much more ambitious: I was brought in as a security architect (from the AF perspective) for the first iteration of GCSS-AF, which was and still is a large data center consolidation and application hosting initiative (now much of it is Cloud-based). Both of these projects helped (for me anyway) to illustrate a number of the key problems that would be associated with Cyber Security for the coming decades (although back then we didn't call it Cyber Security yet). Some of those observations included:
  • The notion that the landscape was going to get ever more complex
  • The need for unified access control (directory services as well as application logins etc)
  • The need for various levels of network security (which was in fact already deployed in the DoD) as well as encryption across public networks
  • I saw how easy it was for dedicated enthusiasts to breach most systems they set their sights on (sat in on a few of the first 'hackathons')
  • I saw that static or reactive security was the standard operating model behind most perimeter-based approaches, and that it was never going to work
  • I saw that we in the business were spending way too much time focusing on the products that were supposed to make us secure rather than on understanding or controlling the holistic processes necessary for real security.
  • And then there was all that log data - which was only going to grow and grow until it became unmanageable.
  • It was obvious that Cyber Space would become another 'field of battle' alongside air, ground, water and space. There would be both state-sponsored and free-enterprise organized cyber cadres. By 2014, these groups have had nearly 20 years to mature - the future of Cyber Security was not the individual hacker like Neo (from The Matrix) but Cyber crime syndicates and armies.
Ten years after these initial security projects, things were developing pretty much the way I had anticipated. If anything, things may have developed more slowly than I expected - in terms of the number or severity of the breaches happening in 2008 / 2009 - but the trajectory was definitely on track. I thought the time was ripe for moving to the next stage of Cyber defense, but remarkably, I found quite a lot of resistance to the notion of taking a holistic view of Cyber Security, so I moved on to other, more productive arenas.
Example of a Cyber (Defense) Collaboration approach across organizations
Holistic Cyber Security is of course where things have to go - it is the answer to what's missing. Let's look at each of the five issues I identified above in more depth:
  1. It's easier to attack: Why should this be the case? Well, the tools that Hackers, Crackers or rogue Cyber syndicates or armies use are less expensive and less complex to use than the tools we use to defend assets. A hacker can get started with almost no investment, while each component of, let's say, a perimeter defense architecture may cost millions and take months to implement. Worse than that, though, is that attackers work as a collaborative community - which means they can collectively share information on how to defeat that new defensive technology, and eventually we end up playing a reactive role - fixing vulnerabilities only after they surface. This situation is unlikely to change under current defensive paradigms.
  2. Piecemeal Security: That's the opposite of holistic, isn't it? Think about this: every IT capability in a modern organization represents a potential threat to security - whether we're talking about a Cloud, a mobile app, an edge device that needs to be secured, data in motion, applications (web-based or otherwise), files and documents, email, portals, etc., etc., etc. Usually these things are not all managed by the same groups within an organization, and often many of them aren't considered part of the security landscape at all. Most of the focus for Cyber Security in today's enterprise still hovers around the perimeter and network. While this part of the picture is important, it is not the whole picture and never was; not in 1998, not in 2008 and certainly not now. On a recent 60 Minutes report, a famous security expert mentioned an even more telling aspect of this problem - even at the perimeter there is now so much information being generated that there is no way to discern which threats are real. We'll talk about that more in a minute.
  3. There is no magic bullet: This is a bad habit shared by other aspects of IT, but for Cyber Security this thinking is particularly problematic. In the late 90's and early 2000's the magic bullet was Intrusion Detection and Firewalls. Then there was PKI and a host of other encryption protocols and products, and of course anti-virus software has become more and more pervasive since the late 90's. Even the notion of security standards or controls has been viewed as a magic bullet, but the fact is that whether it is processes, standards or products, all of these elements represent 'part' of a larger picture. That larger picture needs to begin with deliberate Security Architecture on an enterprise scale.
  4. Cyber Security is Dynamic: Yet most security organizations and products aren't. We understood all that back in 1998, which is why we began building community contribution of exploits into Intrusion Detection products. Collaboration on the defensive side is there, but it still isn't as effective as the collaboration on the attacking side, mainly because the job of the defenders is many times more complex. Becoming dynamic is no small task - it requires a paradigm shift in thinking for most organizations, and thus far it is very rare to see it in practice.
  5. Cyber Security is Information & People: A proactive approach to security requires that defenders think like those who might attack them and predict or identify weaknesses. It requires the ability to discern or predict patterns in the ever-growing sets of data (just as was highlighted on 60 Minutes); a minimal sketch of that kind of pattern detection follows this list. This simply has not happened yet. Despite some progress with Security Controls and Vulnerability / Threat Management, we are still largely operating in a reactive mode. We don't have a good handle on stopping insider attacks or understanding threat behaviors.
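To illustrate what 'discerning patterns in the data' means at the smallest possible scale, here is a purely illustrative sketch - the log records, threshold and window below are hypothetical, and real analytics platforms correlate far richer signals - of one simple rule for spotting a brute-force login pattern in authentication logs:

```python
# Minimal sketch: flag repeated login failures followed by a success (hypothetical data).
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical log records: (timestamp, user, source_ip, success_flag)
events = [
    (datetime(2014, 12, 1, 2, 15), "jsmith", "203.0.113.7", False),
    (datetime(2014, 12, 1, 2, 16), "jsmith", "203.0.113.7", False),
    (datetime(2014, 12, 1, 2, 17), "jsmith", "203.0.113.7", False),
    (datetime(2014, 12, 1, 2, 18), "jsmith", "203.0.113.7", True),
    (datetime(2014, 12, 1, 9, 5),  "alee",   "198.51.100.22", True),
]

FAILURE_THRESHOLD = 3           # this many failures...
WINDOW = timedelta(minutes=10)  # ...within this window looks suspicious

def flag_suspicious(events):
    """Return (user, ip) pairs showing repeated failures followed by a success."""
    failures = defaultdict(list)
    alerts = []
    for ts, user, ip, ok in sorted(events):
        key = (user, ip)
        if not ok:
            failures[key].append(ts)
            # keep only failures inside the sliding window
            failures[key] = [t for t in failures[key] if ts - t <= WINDOW]
        elif len(failures[key]) >= FAILURE_THRESHOLD:
            alerts.append(key)  # likely brute-force pattern
    return alerts

print(flag_suspicious(events))  # [('jsmith', '203.0.113.7')]
```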
In some ways, we've been lucky so far that the Cyber attacks have been primarily focused on stealing information or financial data rather than attacking the systems dedicated to infrastructure. While many of those systems are somewhat more secure by design, they are not as secure as we might think (just as the breaches this year have called into question the efficacy of the security associated with PCI standards and finance-related systems). We are becoming more Cyber Insecure because we are not as adaptive as our opponents and because we still refuse to recognize the full scope of the challenge. In many cases, we are spending perhaps exactly as much as we need to - but we're not spending it the right way or in the right context. We're paying for piecemeal security, and unfortunately that's exactly what we're getting.

Copyright 2014, Stephen Lahanas

Tuesday, September 23, 2014

The Art in Artificial Intelligence - Part 2

Part of the reason we decided to explore this topic on Technovation Talks was the claim made earlier this Summer that an AI had finally passed the Turing Test. So what's the Turing Test? It is a loosely described metric by which any sort of true machine intelligence might be assessed or otherwise verified. Here's the basic premise of the test - if an AI can engage in normal conversation with multiple human participants without the humans realizing that they are conversing with a machine (obviously it would be a remote conversation of some sort), then the machine can be considered intelligent.

Alan Turing - BTW: a movie about him will be hitting theaters soon...
According to Turing's predictions in 1950, we should have already achieved this level of machine intelligence (by the end of the last century). Yet if you look at the story about this Summer's supposed triumph (which might be considered the first time it has in fact been achieved), there are nothing but problems and doubts:

  • First off, the answers are screwy, and it's clear that the computer misinterpreted much of what it heard.
  • Then they presented the AI as if it were an adolescent from war-torn Ukraine.
  • And they used the lowest possible threshold to gauge success - this threshold, drawn from part of Turing's paper on the subject, suggested that success be declared if on average at least 30% of the humans judging the AI were fooled into thinking it was human. So the AI, named Eugene, scored 33% - but only because the judges lowered the bar, thinking he was a semi-illiterate teen.
More important than all of this, of course, is the central question of whether the metric or test is actually an accurate way to assess machine intelligence at all. In a way, every system that has ever tried to compete in one of these tests has been purpose-built to pass the test. But would that make it intelligent (if it were actually to pass)? The technology necessary for a machine to "think" through a conversation the way a human does simply does not exist - nor are we even close to understanding what that model would even look like. The systems trying to pass the Turing Test are simply conversational "hacks"; in other words, they include built-in tricks like responding to a question with a question or trying to work off of keyword cues. What's missing, of course, is any continuity of thought - any consciousness - and even the most simplistic conversation requires that. None of these systems can think and none of them can really learn.
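To show (purely for illustration) what such a conversational "hack" amounts to, here is a minimal, hypothetical sketch of the two tricks just mentioned - keyword cues and deflecting a question with a question. Nothing in it thinks or learns:

```python
import random

# Canned deflections and keyword cues - the "tricks" described above.
DEFLECTIONS = ["Why do you ask?", "What makes you say that?", "Tell me more."]
KEYWORD_REPLIES = {
    "family": "Family is very important to me.",
    "school": "I do not like homework very much.",
}

def reply(user_line: str) -> str:
    """Produce a plausible-sounding reply with no understanding at all."""
    lowered = user_line.lower()
    for keyword, canned in KEYWORD_REPLIES.items():
        if keyword in lowered:
            return canned                      # keyword cue
    if user_line.strip().endswith("?"):
        return random.choice(DEFLECTIONS)      # answer a question with a question
    return "Interesting. Please go on."        # generic filler

print(reply("Do you have a family?"))  # keyword cue wins: "Family is very important to me."
```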

Now, conversation hacking may well become sophisticated enough in coming years that many of these systems pass the Turing Test threshold of 30% on a regular basis. But that test as it is now defined will never provide us with an accurate assessment of whether a machine has in fact achieved some innate level of intelligence. There is no way to determine through the conversation whether the system has "added value" to the topic rather than simply replied phrase by phrase in rather one-sided dialectics. It will be difficult to assess or acknowledge any growth or change. There is no expectation in a simple conversation to determine whether you are in fact conversing with a self-aware entity.


In the movie Her, this guy falls in love with his operating system (and it didn't come from the Apple store!)
The first thing we need to do before we tackle how we might achieve AI is to determine what the appropriate assessment or validation for human-like intelligence really needs to be. We are going to suggest one and explain the rationale for it...

The Technovation AI Test -
AI Test Prerequisites / Expectations
  • The Test is not meant to assess acquired knowledge per se, it is meant to assess cognitive ability. In other words, it is not about preparation or repetition of learned information, but is concerned with potential and / or application of any particular knowledge set.
  • The Test does not have to occur in one sitting, but can take place over any duration (within reason).
  • The Test isn't merely concerned with correct answers or maturity in a point of time, but can also assess the ability to grow over time based upon responses to various aspects of the test (or other stimuli encountered within the time-frame of the test).
  • The Test is not merely a linguistic exercise - the machine must not merely demonstrate the ability to communicate like a human, it must also demonstrate that it can learn.
  • Foremost above all else though, the machine must demonstrate the one trait most closely associated with human intelligence (as opposed to raw computing power) - it must demonstrate intuition. In this context, Intuition represents shorthand problem-solving (which we will discuss in much more depth in a future post).
  • One last aspect of the test that must be included is a review of the code to ensure that "conversational snippets" are not pre-programmed. This implies that the majority of dialog is generated in real time by the machine. That would not prevent the machine from reviewing logs of previously generated dialog (in some database), but that review could not lead to verbatim quoting - rather, it must paraphrase or otherwise restate previous points.
The AI Test 
In a series of panel interviews, the AI must convince the judges or reviewers that it should be hired to perform a complex human role. The type of job and foundational knowledge can cover any number of topics but must be sufficiently complex to avoid "lowering the bar" (say, any job that requires a degree). Also, the interview style must be open (similar to essay tests in written assessments) - the answers must not just be correct, they must demonstrate value-added insight from the intelligence conveying them. And the answers may be entirely subjective... (even better, as long as the machine can rationalize them).
This test necessarily implies a very high threshold - perhaps in excess of a 90% rating across a very complex set of conversations. Why raise the bar this high? Simple - this is the one way we can force the development of a system that can both learn and apply that knowledge to problem solving, and do it on the fly. To have human-like intelligence, a machine must be able to understand the nuances of human communication and psychology - thus it must not only be able to interact, it must also be able to convince us.
Now that we have a more concrete target to aim for - how do we get there? In our next post, we'll delve into Learning - what works and what doesn't, and how human and machine intelligence differ today.



Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks

Wednesday, September 10, 2014

The "Art" in Artificial Intelligence - Part 1

Today, we are going to launch a problem-solving exercise on what might be the single most complex topic in Information Science - Artificial Intelligence. The goal here is not to provide any sort of comprehensive survey of current theory or practice; rather our journey begins with the stark realization of how little we've achieved in the field since the term was first coined 58 years ago. This is a problem statement and problem resolution exercise and an excellent case study in technology focused innovation. Let's begin...

We'll start at the beginning with some definitions and a review of key assumptions.

Artificial Intelligence, Defined 
The ability for a machine to consistently demonstrate core cognitive skills generally associated with human intelligence, including learning, problem solving, intuitive reasoning, and contextual memory retention and extraction. These skills generally imply the need to achieve some level of self-awareness.

What We Haven't Achieved, Yet
I said that this problem statement is focused on the lack of success in the AI field to date; let's try to quantify that first.

  • No computer can learn like a human.
  • No computer can speak like a human (this is deceptive, tools like Siri will provide responses back to you, but is that in fact anything like human speech? The processing that goes on within Siri is a relatively primitive form of pattern recognition as opposed to what even the least capable human mind can produce).
  • No computer can handle complexity in the same manner a human can (this warrants much more explanation and we'll come back to it).
  • No computer can problem-solve the same way humans can. (there are types of problem solving where computers are of course far superior to humans, yet even with all that power they still fail to solve relatively simple questions that humans can handle naturally).
  • No computer has achieved anything coming close to consciousness or self awareness (despite the endless slew of sci-fi stories where this is a common fixture). 

Anybody trying to solve the AI problem is a mad scientist, right? 
Now, there's another ethical or moral side to this topic which we won't jump into until the end - the question as to whether we should even try to endow a machine with these traits - but then again it is likely that someone will do this regardless of the ethical objections. Part of the human learning process seems to require learning through our mistakes - a trait we may eventually end up passing to artificial entities, someday. But back to the problem.

Challenging Assumptions
As with most problem spaces, the set of initial assumptions associated with it tends to drive all else until or unless those assumptions evolve in some fashion. For Artificial Intelligence, there have been a number of assumptions that have helped define what it's become to date and also help to explain its limited success - they include the following:
  • The notion that brute force computing power will eventually resolve many issues and push through various AI barriers. This is partially true, but then again we sent Apollo to the Moon with the computing power of a standard calculator (by today's standards); how much computing power do we really need to mimic human thought? Nature got us here through elegance, not waste.
  • The notion that we fully understand how the human mind creates consciousness or exercises cognitive capability. We don't, yet.
  • The very flawed notion that machine learning can or should have any connection to the current methods we use to teach each other. 
  • The lack of sensory input associated with most AI paradigms. AI is I/O dependent, and that usually means keyboard and mouse, although images, video and audio have now begun to play an important role. We'll get to this in more detail later.
  • The notion that simulation can perform like the real thing; building to the simulation ensures a thing always remains a simulation (thus never achieving actual reproduction). This has some interesting implications which will eventually get us into a discussion of genetic engineering. 
  • A lack of focus on natural language. This at first proved too difficult, and now natural language does factor into much of the AI research going on. However, natural language hasn't been looked at as the core logic for an AI system - but it should be - instead we tend to view it in terms of how an AI system (or other type of system for that matter) can interact with humans (or just capture human speech accurately).



Watson actually showed us what he/she/it was thinking - if only we could
have seen into the minds of the humans, then we could have seen whether they think the same way...
A Brief Chronology of AI

  • Ancient Greece - those happy go lucky philosophers in togas considered the notion of thinking machines. 
  • About 150 years ago or longer - Artificial Intelligence (although not explicitly identified as such) becomes a popular topic in Science Fiction.
  • 1930s - The golden age of Science Fiction novels includes more than a few stories about artificial brains and super robots.
  • 1955 - The Term "Artificial Intelligence" is invented by John McCarthy and within a year the first conferences on the topic are held.
  • 1968 - 2001: A Space Odyssey, written with Arthur C. Clarke, becomes a box office hit, and HAL, the emotionally unstable AI computer on the Jupiter mission, becomes a celebrity. Take your stress pill, Dave.
  • Early 80's - Lisp machines bring AI-oriented development into the commercial market.
  • 1983 - Matthew Broderick makes it big playing games with an AI military computer - the game is Global Thermonuclear War, but of course you can't win it, right?
  • Mid-1980's - Expert Systems become popular.
  • 1984 - Skynet becomes 'self aware' and Terminators start trying to kill John Connor.
  • 1987 - The pasty-faced Commander Data steals the show on Star Trek the Next Generation.
  • 1990's - An IBM computer wipes the floor with the world's chess masters.
  • 2001 - A.I., the project Stanley Kubrick developed for years, becomes Steven Spielberg's tribute to him, as Haley Joel Osment becomes one of the last humans to play a robot (as opposed to CGI animations with voice actors playing humans, robots and everything else).
  • 2011 - Stanford opens an online AI course to the general public; several hundred thousand sign up - few finish the course.
  • 2011 - Siri shows us that smart phones can indeed become somewhat intelligent.
  • 2011 - IBM's Watson wipes the floor with the pantheon of Jeopardy champions.


Don't imprint unless you're serious...

The expectations for AI have thus far radically outstripped the progress made. Yet in roughly the same 50 years we've taken the Star Trek communicator and tricorder fictions and made them reality in the form of smart phones (some of which you can now load with apps that, when used with sensors, can measure one's vital functions much like a tricorder).

A lot of smart people have tried for decades to make AI a reality, and across the globe billions or hundreds of billions of dollars have been spent on related research. What's going wrong here? It can only be one of two possibilities:

  1. Human like intelligence cannot be artificially created and maintained or
  2. We've approached the problem wrong
I contend that the second possibility is in fact what has happened. As we progress with the series, it will become clear that the "Art" I referred to in the title is the ability to pick the right path for problem resolution. In part two of this series, we will examine the question: how can machines learn?





Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks

The Next Generation of American Leadership - JSA


A worthy cause indeed, invest in the future - support the next generation of American Leadership, right here in Ohio...

Junior State of America - Ohio River Valley Scholarship fund

Saturday, September 6, 2014

How to Create an Enterprise Data Strategy

I work as an IT Architect. One of the more interesting things I get asked to do on occasion is to create Strategies in particular technology areas. This represents the "high-level" side of the typical IT Architecture duties one tends to run into (if you're interested in more IT Architecture related topics check my new blog here - The IT Architecture Journal). A very popular strategic focus across industry lately is Data Strategy. In this post, I will try to explain why it has gotten so popular and some of the fundamental aspects of actually producing one.

Data Strategy, Defined
Data Strategy is the collection of principles, decisions, and expectations, as well as specific goals and objectives, regarding how enterprise data and the related data systems and services will be managed or enhanced over a specified future time period. As an actual artifact, a Data Strategy is usually manifested as a document, but it is not limited to that format.

A Data Strategy is usually developed in conjunction with some IT Portfolio planning process. In optimal situations, portfolio decisions and follow-on data-related projects can be mapped directly back to goals and objectives in the Data Strategy.

Why Data Strategy is so popular
Organizations across the world have become more aware of the need for greater attention to data-related issues in recent years. Some of this has been driven by collaborative industry initiatives through groups like DAMA (the Data Management Association) and the resulting Data Management Body of Knowledge (DMBOK). Other drivers include the near-flood of new data technologies released over the past decade as well as the exponentially growing quantity of data out there.

So, what does having a Strategy actually give you?

What it often provides, if used properly, is both a set of shared expectations and a clear path for actualization of those expectations. The Data Strategy allows organizations to deliberately decide how best to exploit their data and to commit to the major investments which might be necessary to support that. This, when contrasted with an ad hoc and decentralized technology evolution scenario, presents a much easier picture to grasp. It also at least implies a situation that will be easier to predict or otherwise manage. It is that promise of manageability that makes creating a Data Strategy so attractive.

Elements of a Typical Data Strategy
The following mindmap illustrates some of the common elements that you'll find in many data strategies. One noteworthy item in this diagram is the idea of sub-strategies (which can be split off into separate documents / artifacts) ...



The Top 7 Considerations for Data Strategy
While there are many more things to keep in mind, I've tried to distill some of the most important considerations for this post...

  1. The strategy should take into account all data associated with the enterprise. This may sound obvious, but in fact it isn't. Many organizations explicitly separate management of dedicated data systems from other systems which may have data in them but aren't strictly DBMSs or reports, etc. For example, there may be state data in small data stores associated with a web-based application that supports an online form - the data structures supporting the completion of the form may be different from the ones which collect the completed form data. However, all data, in all applications, regardless of where it is located or how it is used, must be considered.
  2. There generally needs to be an attempt to define an organizational 'lingua franca' - a common semantic understanding of data. There are many ways this might be achieved, but it is important that this be included within the strategic plan.
  3. The Strategy cannot be entirely generic, even if one of the most vital objectives is some type of industry-driven standardization. Wholly generic plans are usually less than helpful.
  4. The Data Strategy must be presented within a larger context. What this means is that there needs to be an expectation that the Strategy will indeed be the precursor to other activities which ought to be able to map back to it for traceability purposes. 
  5. The Data Strategy needs to have sufficient detail to be meaningful. If it is too high-level it becomes merely an elaborate Mission Statement. The expectation behind any Strategy or plan is that it be actionable. 
  6. The Data Strategy ought to be need or capability based - not product or Hype focused. 
  7. There ought to be a way to measure success 'built into' the Data Strategy. This can come in the form of basic service level expectations or business outcomes, or both (a small illustrative sketch follows this list).
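As a purely illustrative sketch - the objectives, measures and targets below are invented, not a template - measurable expectations of the kind item 7 describes can be captured as simple structured data alongside the strategy document, which makes later traceability and reporting straightforward:

```python
# Hypothetical examples of measurable objectives embedded in a Data Strategy.
data_strategy_objectives = [
    {
        "objective": "Establish a common business glossary",
        "measure": "percent of critical data elements with agreed definitions",
        "target": 80,            # service-level style expectation
        "review": "quarterly",
    },
    {
        "objective": "Reduce duplicate customer records",
        "measure": "duplicate rate across master customer data (%)",
        "target": 2,             # business-outcome style expectation
        "review": "annually",
    },
]

for item in data_strategy_objectives:
    print(f"{item['objective']}: target {item['target']} ({item['review']} review)")
```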

What goes into one Data Strategy versus another can be radically different from group to group. If you are a Social Media company, your needs will be quite different from those of the US Coast Guard, for example - but both will likely need their own Data Strategy.


Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks

How IT Hype Inhibits Technology Innovation

There are many unique characteristics connected to the IT industry; some of them are more associated with pure-play IT vendors, but others seem to pervade almost all organizations that utilize IT in any significant manner. One of the strangest of these characteristics is the obsession with buzzword Hype. For most people this is symbolized by the development and dissemination of Gartner's famous Hype Cycle diagrams. However, the Hype obsession existed before the Hype Cycle ever did, and the diagram doesn't explain the phenomenon in a satisfactory manner.

So, what's wrong with Hype? Isn't it just a harmless offshoot of public relations, or perhaps some benign manifestation of mass psychology? Perhaps some of our readers are old enough to remember the endless silly fads of the 1970's, which included Pet Rocks and Mood Rings. Did buying these worthless trinkets in any way negatively impact our later lives? Probably not. What these harmless trends did perhaps achieve, though, was a sort of conditioning that may have predisposed the vast majority of us to assign more attention to trends than they might otherwise merit.


Pet Rocks were definitely low tech - but represented a hype-generated trend nonetheless

Now, without diving into the implications of psychological theory in regard to conditioning and behavior, it does seem as though much of what we see or talk about is driven by various Hype cycles. This occurs in entertainment, in business, in food & dining - in nearly every aspect of popular culture - and remarkably it also affects science and technology. What has represented Buzz in the Scientific community over recent years? Well, how about Stem Cells, the God Particle and Fractals (or Fibonacci numbers), to name a few.

Getting back to Information Tech - how are we influenced by fads, trends and Buzzword Hype? Well, let's attempt a definition here first...

Buzzword Hype - This represents a unique form of public relations wherein somewhat complex concepts are crammed into a single buzzword (although Buzzwords can technically include several words in them - Master Data Management has three, Big Data has two). While this phenomenon is not limited to IT - it is the most prevalent form of Hype in the technology arena.

So, if an acronym is a mnemonic for a complex term (MDM for Master Data Management), the term itself is a mnemonic for the complex concept that everyone already understands, right? Wait a minute, perhaps we've discovered the first problem; how many people actually understand these terms? Furthermore, how many people are actually concerned with learning these terms?

Let's home in on one of the biggest Buzzword Hype examples of the last few years - Big Data (we've touched upon this topic before in Technovation Talks). How many people actually have a comprehensive knowledge of what it represents, or even have the same expectations or knowledge about it? Is Big Data just the Hadoop distributed, fault-tolerant file system, is it the lack of SQL and relational structure, is it high capacity or throughput, or is it some combination of these elements and much, much more? Even more importantly, is Big Data even something that can be neatly defined as one standard solution? All good questions, none of which are typically addressed in the core Hype surrounding the buzzword.

The buzzword hype for Big Data seems to imply more, better, faster, bigger with very little consideration as to how that might happen or what the eventual impacts would be. The term itself becomes its own justification - if everyone is talking about doing it and beginning to do it - why shouldn't we, right? And by the way, what is it again?

Let's step back a moment and try to classify what the core problems associated with the Hype Cycle are:

  • These buzzwords actually drive business decisions, regardless of the level of education or understanding associated with them.
  • There is an undercurrent of peer pressure that tends to 'force' people into making those decisions - decisions they weren't ready to make (because they didn't have time to evaluate the situation properly).
  • The hype tends to drown out most other types of discussions associated either with the technology in question or the real trends of what's happening across most enterprises. And I characterize these as 'real' because they represent common challenges which aren't necessarily product-driven (thus not good candidates for hype).
  • Premature adoption based on hype cycles often has the opposite effect on a particular technology area - it stifles it as too much bad word-of-mouth feedback circulates, and what might otherwise be a promising field of practice languishes as a result (E-learning is the best example of this I can think of).

How does all of this impact or otherwise inhibit Innovation? Well, here are some things to think about:

  • Some Hype Trends are not in fact very innovative, yet if everyone is doing it - then folks trying to introduce truly innovative techniques or products may be drowned out or suppressed. 
  • Most Hype Cycles tend to pose the key buzzword focus area as a "silver bullet" solution - and as those of us who have practiced IT for a while can attest, there are few if any actual Silver Bullet solutions. Similar to the Heisenberg Principle (and we're not referring to Breaking Bad here), the introduction of a new element impacts the existing elements in unanticipated ways (well, this isn't an exact analogy to the Heisenberg Principle, but it's close enough). The whole of IT might be viewed as "Magic Happens Here" by outsiders, but inside we know there is a constant struggle to impose order over chaos - silver bullets are often disruptive. Disruption can be good, but not if you don't understand its nature before you go whole hog (so to speak).
  • Hype & Buzzwords tend to make people think situations are simpler than they really are, and in some senses actually discourage the necessary analysis that should be occurring when adopting new technology. Innovation cannot be sustained when and where it becomes too expensive to manage.   


Will we ever escape the grip of unreasoning Hype in IT? Will our lives be forever ruled by an unrelenting cascade of product-focused buzzwords? Who knows - IT is still in its infancy, and unlike most other professions we have the opportunity to reinvent ourselves on an almost continual basis - so anything is possible.



Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks

Monday, August 18, 2014

The Innovation Dilemma

Innovation is perhaps today’s ultimate buzzword and most over-hyped topic. People can’t seem to get enough of articles and dialog on how Innovation is the answer to any number of potential issues – yet in all the countless discussions occurring online and in print about this topic, how well do any of the folks discussing innovation really understand it? That’s part one of the dilemma. Part two of the dilemma is that for all the lip service about how Innovation needs to be fostered, are we actually in fact fostering it in any meaningful ways (or perhaps worse yet, might we in fact be hindering it through current trends)? We will examine both parts of this question in today’s post.

Let’s start by defining what Innovation actually represents in a meaningful context. We’ll begin by explaining what innovation is not; it is not:

  • A marketing slogan
  • A collection of admirable ideas awaiting exploitation
  • The province of rarefied genius or Silicon Valley risk takers
  • Accidental or otherwise random in nature 
  • And lastly – Innovation is not thought (e.g. Innovative Thinking) – thought without application perhaps qualifies as day-dreaming. Innovation is Actionable Thought embedded within the context of a larger problem solving activity. 

Innovation is a process, not an individual event. That process has a ‘macro’ or Global perspective as well as a Local perspective. In other words, the Global process of Innovation encompasses all of the Local processes – the smaller efforts impact the cumulative achievements at the Global level. There are also often synergistic inter-relationships between various local innovation “threads.”

An example of a complex Innovation process

Definitions (Innovation in Theory):
Innovation – This represents the deliberate (reproducible, consistent) process associated with solving specific problems. The process is evolutionary, incremental and focused on specific, well-defined goals. The key concept here is that Innovation is not an anomalous or ephemeral activity; it is most definitely not “magic happens here.”

Local Innovation – Any individual application of an innovation process within a closed community/entity. This does not imply that the community or entity is somehow cut off from the global community, merely that it has its own unique charter.

Global Innovation – Any number of local communities may be working to solve the same problems. These problems can be referred to as innovation threads. The level of collaboration or cooperation will vary between these communities, yet on the whole there is usually some information exchange that at times will allow individual local innovation to influence or otherwise contribute to global innovation progress (and conversely, progress acknowledged at the Global level will of course influence any number of Local efforts).

Innovation Threads – An Innovation thread is the collective effort towards resolving a unique problem. Obviously, there are cases where one group defines a similar problem somewhat differently, but in general the progress made in one variation of a particular thread may be applicable to another similar one.

Innovation in Practice

One of the best examples of what differentiates innovation in popular mythology from Innovation in practice is the case of the Wright Brothers. Their story is not one of a handful of good ideas punctuated by the glorious realization of their dreams of flight, but rather of many years of tireless work and a massive amount of invention that had to occur in order to achieve a specific goal associated with one very famous problem – “how to achieve powered flight.”

The Wright B Flyer

The Wright Brothers didn’t just look at a bird and shout “Eureka.” They redefined the science of aerodynamics, testing hundreds of airfoil designs. To do this they had to rework the mathematics of aerodynamics, build their own wind tunnel and more. In short, they had to solve hundreds of related problems in order to resolve the main problem that started their quest. Theirs was an example of local innovation – yet it was so profound that it completely redefined the Global scope of innovation for aerodynamics. And we still fly in planes based on their designs and principles today.

Another important characteristic of what the Wright Brothers did was its entirely practical focus. Everything they did was goal-focused. This differentiates it from many research and development programs that have arisen over the past 50 to 75 years, in that oftentimes research programs do not have specific, tangible goals in mind (in other words, they are not entirely pragmatic in nature).

Dilemma 2
Now that we might have a better idea of what Innovation actually represents, let’s consider whether we as a society are actually encouraging or discouraging it. To do that we need to consider a couple of related questions, including:

  1. Can innovation be taught, and if so how would that happen?
  2. What sort of incentives might help to spur innovation?
  3. What might represent disincentives for innovation?


We will answer these questions one at a time…

Can Innovation be Taught?
Yes, it can (and we will explore that topic in more detail in a future post). But is our current expectation of what constitutes innovation-fostering education accurate? Well, no. In our previous example of practical innovation (the Wright Brothers), the main idea was that the entire exercise was problem-focused. What they learned, invented and achieved was all focused on a central goal. The vast majority of education today is, in contrast, not goal-focused. Moreover, it tends to be highly standardized, and this trend is getting worse every year. There are some exceptions of course, but for the most part our educational systems today judge students based upon conformity of thought as evidenced through an ever-expanding list of assessment tests. This shift towards assessment testing has a chilling overall effect on curriculum, making it more and more abstract and less focused on systematic problem-solving. In the United States, we are now teaching almost entirely to the test.

The way most experts have framed education that might somehow foster innovation is by insisting that education ought to include more science and math (STEM). So, remarkably, all of the focus towards achieving more innovation has been directed at what is being taught as opposed to how it is being taught. There is an obvious flaw in this logic that is borne out in almost every field of practical application. This is a massive and complex topic, and we are of course just skimming the surface.

What sort of Incentives might encourage Innovation?
Well, this might include incentives both within education and in industry. Incentives within Education might include rewarding problem-solving skills in assessment or college admission and structuring curriculum to encourage the development of problem-solving skills. These types of skills are in some ways diametrically opposed to the type of assessment testing which is currently so popular. The idea, of course, is being able to question rather than mimic current thinking in order to develop the types of new perspectives needed to progress beyond current capabilities.

In industry, many assessments are more or less natural – in other words, solutions that effectively solve problems become popular and profitable. However, many such solutions can’t get funding to reach this point – so one area that can be improved is access to capital (both in the government and private sectors; and in the federal sector, R & D can become entirely problem-focused rather than random).

What things tend to discourage Innovation?

A misdirected educational system, as discussed already, represents a serious discouragement to fostering innovation, but it is not the only problem. Other issues include:

  • A misguided dilution of labor incentives. This sounds a little complex, but what it means is that since about the year 2000, a two-tier technology labor scenario has arisen in the United States. The introduction of temporary Visas, based upon the mistaken notion that there was a technology labor shortage, has in fact displaced several million technology workers here and resulted in the introduction of millions of IT workers who get paid roughly half of what the standard wage would otherwise be. Add this to off-shoring, and we have created an environment of uncertainty in an area where we should be fostering confidence in terms of securing a large, stable technology workforce.
  • Within individual organizations, despite the hype that seems to imply otherwise, risk-taking and divergent solution approaches are most often discouraged. For organizations to become innovative there generally needs to be some cultural transformation – this is very difficult to achieve. 
The goal with this post was to help refine the dialog about innovation a bit – get beyond the platitudes and start discussing how it can or should work. We will revisit this topic again in coming months…



Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks

Friday, August 8, 2014

Building Effective IT Strategy - part 3

In our last two posts on IT Strategy, we highlighted how Strategy can be structured, how it differs from Tactics (although we will explore that in more depth in this post) and how to employ a consistent process for developing strategy. The first step involved determining what the core strategic approach might be; the second focused on goal-setting. The third and most difficult part is assigning actions to goals...

So, using our Big Data case study, how would we begin to translate the higher-level goals into definitive actions, and what types of tactics might then be used to carry out those actions?


This illustration highlights where Big Data fits within a larger set of Strategic elements in an overall Transformation initiative. This type of representation helps to define relationships and dependencies and to quantify where work needs to happen once the higher-level goal-setting has been completed.

The types of actions that might be involved with actualizing a Big Data Strategy might include the following:

  • Creation of a team or center of excellence to manage the technology / project
  • Definition and Deployment of a proof of concept 
  • Acquisition of the raw data intended for use in the Big Data solution (so, if this were for an energy company, it might include SmartGrid sensor information).
  • Acquisition and / or development of the Big Data Platform
All of these possible actions of course imply a number of key decisions that must be made; the following are a few examples:
  • Determination of Big Data technology to use (triple store, key value etc.)
  • Determination / selection of a Big Data solution hardware platform
  • Determination of modeling or data profiling approach
  • Choice of BI platform for data visualization

All of this information is going to be necessary in order to complete detailed roadmaps and ensure accurate estimates for those who manage the IT portfolio process in any given organization. Actions can then begin to be translated into milestones with traceable costs. Those Action-Milestones are then mapped specifically to goals/objectives previously identified in the higher level strategy.
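As a minimal, hypothetical sketch of that traceability (the goal, actions, milestones and costs below are invented for illustration), the goal-to-action-to-milestone mapping can be represented as simple structured data, so that every costed milestone rolls back up to a strategic goal:

```python
# Hypothetical traceability from a strategic goal to actions to costed milestones.
roadmap = {
    "goal": "Deliver a user-driven BI capability on unstructured data",
    "actions": [
        {
            "action": "Stand up a Big Data proof of concept",
            "milestones": [
                {"name": "Platform selected", "quarter": "Q1", "cost_usd": 50_000},
                {"name": "POC demonstrated", "quarter": "Q2", "cost_usd": 120_000},
            ],
        },
        {
            "action": "Establish a center of excellence",
            "milestones": [
                {"name": "Team staffed", "quarter": "Q2", "cost_usd": 200_000},
            ],
        },
    ],
}

# Roll milestone costs up to the goal for portfolio-level estimates.
total = sum(m["cost_usd"] for a in roadmap["actions"] for m in a["milestones"])
print(f"{roadmap['goal']}: estimated cost ${total:,}")
```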

Now, how does action-to-goal alignment involve Tactics? In the case we've introduced, and in most others, the tactics are the core tools for decision-making. So, for all of the decisions listed above, individual analyses of alternatives might be conducted. For product decisions, run-offs / competitions / evaluations and source selection processes are applied. For design considerations, an architecture approach is applied. All of these activities can also fit within a lifecycle process - all of this represents Tactics. Why? Because we could use roughly the same lifecycle approaches for any type of technology - whether it is UAV development, Quantum Computing or building a SharePoint portal. Tactics are the interchangeable actualization toolset for all strategy.

The hardest part of aligning Strategy, sub-strategies and tactics is when you find yourself in a very large transformation effort (one perhaps dealing with hundreds of systems, dozens of technologies and perhaps thousands of people). There is no single solution, tool or approach for managing that - it represents the kind of computationally intractable problem mathematicians refer to as NP-hard. We will look at IT Transformation and intense complexity in an upcoming post.


Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks

Thursday, August 7, 2014

Building Effective IT Strategy - part 2

In yesterday's post, I introduced three elementary categories for IT Strategy:

  1. Product 
  2. Portfolio
  3. Transformation

I then pointed out that each of these follows a basic cycle of:

  1. Determination of Strategic Approach
  2. Goal-Setting
  3. Assigning actions to goals

So, now as promised, I will use this to take a look at a specific scenario. But before we begin, let me mention an important caveat to the three categories listed above. While all strategy tends to fit within these, not all strategy has to exist at the highest level. In other words, there are various levels of Strategy that are still abstract enough to remain differentiated from Tactics.

In this case study example, we are going to look at Big Data. Big Data is something that many organizations might consider important enough and large enough to develop a strategy around. However, just looking at the moniker "Big Data", one might instantly wonder - doesn't that belong as part of a larger "Data Strategy"? Yes. And wouldn't that also imply that the Data Strategy is part of a larger Strategy? Again the answer is yes - and in this case the whole thing lines up neatly like this:

  • Portfolio Strategy
    •  Data Strategy
      • Big Data Strategy

So, this raises the questions: how might we first differentiate the lower-level strategy from the higher-level strategy, and then, perhaps even more importantly, how do we ensure they stay in alignment?

Differentiation:
This tends to be managed something like this - you begin at the top level with the superstructure of where everything is supposed to fit as well as common capability / design principles and objectives. Then as you move down the Strategy levels, things progress from conceptual expectations to logical descriptions. The top level is the most open or flexible; the bottom level the closest to expectations regarding solution execution.

Reconciliation:
Side by side with the differentiation activity is integrated road-mapping - each level below fitting neatly into the one above. The other big component here is alignment of Strategy with Architecture which provides the other key reconciliation tool (if used properly).

So, how would a Big Data Strategy fit into an Enterprise Data Strategy? First, in most cases it obviously extends something that already exists. This then implies either the replacement of existing capabilities or the addition of new ones. Now let's jump back to the process we mentioned earlier - Step 1, determining the strategic approach (here, a portfolio strategy), is complete. How would we attack goal-setting for Big Data?

This, by the way, is where perhaps more than half of the organizations trying to adopt Big Data solutions are getting tripped up right now. A poor example of goal-setting would be "let's do a POC without a clear path of how to exploit this technology yet" (mainly because everyone else seems to be doing it). A better approach might be:

  1. Define the set of possible Use Cases associated with your organization (where Big Data might make an impact)
  2. Choose one or two that can be effectively demonstrated and measured - let's say one might be the rapid development of a user-driven BI solution based on unstructured (web/social media) data (a tiny illustrative sketch of this kind of prioritization follows the list).
  3. Develop a clear path as to how a) the initial capability can be rolled into the larger existing ecosystem to avoid silos or solution fragmentation, and b) new functionality can be added to the emerging Big Data solution, consistent with overarching organizational goals.
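A trivially simple way to make step 2 concrete - the candidate use cases and scores below are invented purely for illustration - is to rank the possibilities by something like impact and feasibility before committing to a proof of concept:

```python
# Hypothetical use-case shortlisting for a Big Data proof of concept.
candidates = [
    {"use_case": "User-driven BI on social media data",    "impact": 4, "feasibility": 5},
    {"use_case": "Predictive maintenance on sensor feeds", "impact": 5, "feasibility": 3},
    {"use_case": "Archive cold data to cheaper storage",   "impact": 2, "feasibility": 5},
]

# Rank by a simple impact x feasibility score and keep the top two for the POC.
shortlist = sorted(candidates, key=lambda c: c["impact"] * c["feasibility"], reverse=True)[:2]
for c in shortlist:
    print(c["use_case"], "->", c["impact"] * c["feasibility"])
```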

Step 3 is assigning actions to goals. We'll take a look at that in our next post...




Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks

Wednesday, August 6, 2014

Building Effective IT Strategy - part 1

Every great endeavor begins with a strategy - well, that may be true, but was it an idea, a single spoken command, a drawing on a napkin? How do we quantify precisely what Strategy represents?

In military history, Strategy is the highest level of planning - the combination of complex goal-setting and the definition of an over-arching approach designed to achieve said goals. In the American Civil War, Lincoln decided early on that the North must cut the Confederacy in two by taking the Mississippi River and stifle Southern commerce with a massive naval blockade. This strategy was even given a name - the Anaconda Plan. The rest of what happened in the war was mainly tactical in nature - so in the case of the military analogy, Tactics are the detailed actions necessary to fulfill elements of the larger Strategy.

The interesting aspect of Tactics is that they tend to be "reusable components." In other words, you develop tactics that can be used regardless of the Strategy that may employ them. This metaphor translates well from the military analogy over into real-world IT.

Lucky for us, the world of IT isn't much like war except in the sense that there is quite a lot of chaos and a need for planning to manage complex situations. Let's try then to define IT Strategy...

IT Strategy is the ongoing effort to guide organizational exploitation of technology over a multi-year period. It is ongoing because in IT (which is hopefully not the case for war) there is no definitive end-state goal. In other words, the end state is always moving to the right, reflecting the evolution that has already occurred as well as the oncoming waves of newer disruptive technologies.

That's the high level view, but IT has its own unique spin on Strategy which makes it possibly more divergent from the war analogy. In IT there are several distinct types of Strategy; these include:

  1. Product Strategy - Focused (perhaps analogous to a Theater strategy in war)
  2. Portfolio Strategy (otherwise referred to as Capability Strategy) - Comprehensive
  3. Integration / Transformation Strategy - This is not just focused on solution integration - it is the larger question of how to redefine and reconcile an entire portfolio 
Depending on the organization in question, it might require only one or perhaps all of these types of strategy at any given time. A software company (one that is solely focused on a single software product, let's say) would definitely want to use the first type of Strategy to help define its product roadmap, but might not need the other two.


So, step 1 is determining what type of Strategy is required. Step 2, regardless of the strategy category, is a goal-setting exercise. Step 3 is assigning actions to goals. In our next post, we will use a Case Study to look at Steps 2 and 3 in more detail.


Policy in the context of IT Strategy generally represents tactical guidance given on an organizational level - this tends to fall under either Portfolio or Transformation Strategy (or both)


Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks