
Tuesday, September 23, 2014

The "Art" in Artificial Intelligence - Part 2

Part of the reason we decided to explore this topic on Technovation Talks was the claim made earlier this Summer that an AI had finally passed the Turing Test. So what's the Turing Test? It is a simply described metric by which any sort of true machine intelligence might be assessed or otherwise verified. Here's the basic premise of the test: if an AI can engage in normal conversation with multiple human participants without the humans realizing that they are conversing with a machine (obviously it would be a remote conversation of some sort), then the machine can be considered intelligent.

Alan Turing - BTW: a movie about him will be hitting theaters soon...
According to Turing's prediction in 1950, we should have already achieved this level of machine intelligence by the end of the last century. Yet if you look at the story of this Summer's supposed triumph (which would be the first time the test has in fact been passed), there are nothing but problems and doubts:

  • First off, the answers were screwy, and it's clear that the computer misinterpreted much of what it heard. 
  • Then they presented the AI as if it were an adolescent from war-torn Ukraine.
  • And they used the lowest possible threshold to gauge success. That threshold, drawn from one passage of Turing's paper on the subject, suggested that success be declared if, on average, at least 30% of the human judges were fooled into thinking the AI was human. The AI, named Eugene, scored 33% - but only because the judges lowered the bar, believing they were chatting with a semi-illiterate teen. 
More important than all of this, of course, is the central question: is the metric or test actually an accurate way to assess machine intelligence at all? In a way, every system that has ever competed in one of these tests has been purpose-built to pass the test. But does that make it intelligent (if it were actually to pass)? The technology necessary for a machine to "think" through a conversation the way a human does simply does not exist - nor are we even close to understanding what that model would even look like. The systems trying to pass the Turing Test are simply conversational "hacks"; in other words, they rely on built-in tricks like responding to a question with a question or working off of keyword cues. What's missing, of course, is any continuity of thought - any consciousness - and even the most simplistic conversation requires that. None of these systems can think and none of them can really learn. 
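To make the "conversational hack" point concrete, here is a minimal sketch - our own illustration, not the code of Eugene or any real contestant - of the kind of keyword-cue trickery these systems lean on. Every keyword, canned line and deflection below is hypothetical; the point is simply that nothing in such a program models meaning, memory or learning.

```python
import random

# A minimal sketch of a keyword-cue "conversational hack": it pattern-matches
# on trigger words and otherwise deflects. Nothing here models understanding.
# All keywords and canned lines are hypothetical.
CANNED_RULES = [
    ("family",  ["Do you get along with your family?", "Family is complicated, no?"]),
    ("school",  ["School is boring, don't you think?", "What do you study?"]),
    ("weather", ["I never go outside, so I would not know.", "Is it cold where you are?"]),
]

DEFLECTIONS = [
    "Why do you ask?",                      # answer a question with a question
    "That is interesting - tell me more.",
    "I am only a teenager, what do I know about that?",
]

def reply(user_utterance: str) -> str:
    """Return a canned response triggered by a keyword, or deflect."""
    text = user_utterance.lower()
    for keyword, responses in CANNED_RULES:
        if keyword in text:
            return random.choice(responses)
    return random.choice(DEFLECTIONS)

if __name__ == "__main__":
    print(reply("What do you think about the weather today?"))
    print(reply("Can you explain how you reason about ethics?"))  # no keyword -> deflection
```

A judge who asks even one unexpected follow-up question exposes the trick immediately, because the program has no thread of thought to return to.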

Now, conversation hacking may well become sophisticated enough in coming years that many of these systems pass the Turing Test's 30% threshold on a regular basis. But that test, as it is now defined, will never provide an accurate assessment of whether a machine has in fact achieved some innate level of intelligence. There is no way to determine through the conversation whether the system has "added value" to the topic rather than simply replied phrase by phrase in rather one-sided dialectics. It is difficult to assess or acknowledge any growth or change. And a simple conversation carries no expectation of determining whether you are in fact conversing with a self-aware entity.


In the movie Her, this guy falls in love with his operating system (and it didn't come from the Apple store!)
The first thing we need to do before we tackle how we might achieve AI is to determine what the appropriate assessment or validation for human-like intelligence really needs to be. We are going to suggest one and explain the rationale for it...

The Technovation AI Test
AI Test Prerequisites / Expectations
  • The Test is not meant to assess acquired knowledge per se; it is meant to assess cognitive ability. In other words, it is not about preparation or repetition of learned information, but about potential and/or the application of any particular knowledge set.
  • The Test does not have to occur in one sitting, but can take place over any duration (within reason).
  • The Test isn't merely concerned with correct answers or maturity at a point in time; it can also assess the ability to grow over time based upon responses to various aspects of the test (or other stimuli encountered within the time-frame of the test).
  • The Test is not merely a linguistic exercise - the machine must not only demonstrate the ability to communicate like a human, it must also demonstrate that it can learn. 
  • Foremost above all else, though, the machine must demonstrate the one trait most closely associated with human intelligence (as opposed to raw computing power) - it must demonstrate intuition. In this context, intuition represents shorthand problem-solving (which we will discuss in much more depth in a future post). 
  • One last aspect of the test that must be included is a review of the code to ensure that "conversational snippets" are not pre-programmed. This implies that the majority of dialog is generated in real time by the machine. That would not prevent the machine from reviewing logs of previously generated dialog (in some database), but that review could not lead to verbatim quoting - rather, it must paraphrase or otherwise restate previous points (a minimal sketch of what such a verbatim-reuse check might look like follows this list). 
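Here is one way that review might be partially automated - a small sketch of our own, not part of any existing test harness. It compares word n-grams between a candidate response and the logged dialog; a high overlap suggests quoting rather than paraphrasing. The n-gram size and threshold are arbitrary assumptions.

```python
from typing import List, Set, Tuple

def ngrams(text: str, n: int = 5) -> Set[Tuple[str, ...]]:
    """Return the set of word n-grams in a piece of text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_verbatim(candidate: str, dialog_log: List[str],
                   n: int = 5, threshold: float = 0.5) -> bool:
    """Flag a candidate response whose n-grams largely appear in prior dialog."""
    candidate_grams = ngrams(candidate, n)
    if not candidate_grams:          # response shorter than n words: nothing to compare
        return False
    logged_grams = set()
    for prior_utterance in dialog_log:
        logged_grams |= ngrams(prior_utterance, n)
    overlap = len(candidate_grams & logged_grams) / len(candidate_grams)
    return overlap >= threshold

# Example: the second response recycles a logged sentence almost word for word.
log = ["I believe the key risk in that project was the lack of a shared data model."]
print(looks_verbatim("A shared vocabulary would have reduced that risk considerably.", log))  # False
print(looks_verbatim("I believe the key risk in that project was the lack of a shared data model.", log))  # True
```

A real reviewer would need something far more robust (paraphrase detection is itself a hard problem), but even this crude check makes the intent of the rule concrete.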
The AI Test 
In a series of panel interviews, the AI must convince the judges or reviewers that it should be hired to perform a complex human role. The type of job and foundational knowledge can cover any number of topics but must be sufficiently complex to avoid "lowering the bar" (in other words, any job that requires a degree). Also, the interview style must be open (similar to essay questions in written assessments) - the answers must not just be correct, they must demonstrate value-added insight from the intelligence conveying them. And the answers may be entirely subjective (even better, as long as the machine can rationalize them).
This test necessarily implies a very high threshold - perhaps in excess of a 90% rating across a very complex set of conversations. Why raise the bar this high? Simple - this is the one way we can force the development of a system that can both learn and apply that knowledge to problem solving, and do it on the fly. To have human-like intelligence, machines must be able to understand the nuances of human communication and psychology - thus they must not only be able to interact, they must be able to convince us as well.  
Now that we have a more concrete target to aim for, how do we get there? In our next post, we'll delve into Learning - what works, what doesn't, and how human and machine intelligence differ today.



Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks

Wednesday, September 10, 2014

The "Art" in Artificial Intelligence - Part 1

Today, we are going to launch a problem-solving exercise on what might be the single most complex topic in Information Science - Artificial Intelligence. The goal here is not to provide any sort of comprehensive survey of current theory or practice; rather, our journey begins with the stark realization of how little we've achieved in the field since the term was first coined 58 years ago. This is a problem statement and problem resolution exercise, and an excellent case study in technology-focused innovation. Let's begin...

We'll start at the beginning with some definitions and a review of key assumptions.

Artificial Intelligence, Defined 
The ability of a machine to consistently demonstrate core cognitive skills generally associated with human intelligence, including learning, problem solving, intuitive reasoning, and contextual memory retention and extraction. These skills generally imply the need to achieve some level of self-awareness.

What We Haven't Achieved, Yet
I said that this problem statement is focused on a lack of success in the AI field to date; let's try to quantify that first.

  • No computer can learn like a human.
  • No computer can speak like a human (this is deceptive - tools like Siri will provide responses back to you, but is that in fact anything like human speech? The processing that goes on within Siri is a relatively primitive form of pattern recognition compared to what even the least capable human mind can produce).
  • No computer can handle complexity in the same manner a human can (this warrants much more explanation and we'll come back to it).
  • No computer can problem-solve the same way humans can (there are types of problem solving where computers are of course far superior to humans, yet even with all that power they still fail to solve relatively simple problems that humans handle naturally).
  • No computer has achieved anything coming close to consciousness or self-awareness (despite the endless slew of sci-fi stories where this is a common fixture). 

Anybody trying to solve the AI problem is a mad scientist, right? 
Now, there's another ethical or moral side to this topic which we won't jump into until the end - the question of whether we should even try to endow a machine with these traits - but then again, it is likely that someone will do this regardless of the ethical objections. Part of the human learning process seems to be learning from our mistakes - a trait we may eventually end up passing on to artificial entities, someday. But back to the problem.

Challenging Assumptions
As with most problem spaces, the set of initial assumptions associated with it tends to drive all else until or unless those assumptions evolve in some fashion. For Artificial Intelligence, a number of assumptions have helped define what the field has become to date and also help to explain its limited success - they include the following:
  • The notion that brute force computing power will eventually resolve many issues and push through various AI barriers. This is partially true, but then again we sent Apollo to the Moon with the computing power of a standard calculator (by today's standards). How much computing power do we really need to mimic human thought? Nature got us here through elegance, not waste.
  • The notion that we fully understand how the human mind creates consciousness or exercises cognitive capability. We don't, yet.
  • The very flawed notion that machine learning can or should have any connection to the current methods we use to teach each other. 
  • The lack of sensory input associated with most AI paradigms. AI is I/O-dependent, and that usually means keyboard and mouse, although images, video, and audio have now begun to play an important role. We'll get to this in more detail later.
  • The notion that simulation can perform like the real thing; building to the simulation ensures a thing always remains a simulation (thus never achieving actual reproduction). This has some interesting implications which will eventually get us into a discussion of genetic engineering. 
  • A lack of focus on natural language. This at first proved too difficult, and natural language now does factor into much of the AI research going on. However, natural language hasn't been looked at as the core logic for an AI system - but it should be. Instead we tend to view it in terms of how an AI system (or any other type of system, for that matter) can interact with humans (or just capture human speech accurately). 



Watson actually showed us what he/she/it was thinking - if only we could have seen into the minds of the human contestants, we could have seen whether they think the same way...
A Brief Chronology of AI

  • Ancient Greece - those happy go lucky philosophers in togas considered the notion of thinking machines. 
  • About 150 years ago or longer - Artificial Intelligence (although not explicitly identified as such) becomes a popular topic in Science Fiction.
  • 1930s - The golden age of Science Fiction novels includes more than a few stories about artificial brains and super robots.
  • 1955 - The Term "Artificial Intelligence" is invented by John McCarthy and within a year the first conferences on the topic are held.
  • 1968 - 2001: A Space Odyssey (Stanley Kubrick's film, with a screenplay co-written by Arthur C. Clarke) becomes a box office hit, and HAL, the emotionally unstable AI computer on board the Jupiter-bound Discovery, becomes a celebrity. Take your stress pill, Dave.
  • Early 1980s - LISP-based AI workstations are introduced commercially (the language itself dates back to John McCarthy in 1958).
  • 1983 - Matthew Broderick makes it big playing games with an AI military computer in WarGames - the game is Global Thermonuclear War, and of course the only winning move is not to play, right? 
  • Mid-1980's - Expert Systems become popular.
  • 1984 - In The Terminator, Skynet becomes 'self-aware' and Terminators start trying to kill John Connor.
  • 1987 - The pasty-faced Commander Data steals the show on Star Trek: The Next Generation.
  • 1997 - IBM's Deep Blue defeats world chess champion Garry Kasparov.
  • 2001 - A.I. Artificial Intelligence, the project Stanley Kubrick developed for years, becomes Steven Spielberg's tribute to him, as Haley Joel Osment becomes one of the last humans to play a robot (as opposed to CGI animations with voice actors playing humans, robots and everything else).
  • 2011 - IBM's Watson wipes the floor with the pantheon of Jeopardy! champions.
  • 2011 - Stanford opens an online AI course to the general public; roughly 160,000 people sign up - few finish the course. 
  • 2011 - Siri shows us that smart phones can indeed become somewhat intelligent.


Don't imprint unless you're serious...

The expectations for AI have thus far radically outstripped the progress made. Yet in the same 50 or so years we've taken the Star Trek communicator and tricorder fictions and made them reality in the form of smart phones (some of which you can now load with apps that, when used with sensors, can measure one's vital functions much like a tricorder).

A lot of smart people have tried for decades to make AI a reality, and across the globe billions, perhaps hundreds of billions, of dollars have been spent on related research. What's going wrong here? There can only be two possibilities:

  1. Human-like intelligence cannot be artificially created and maintained, or
  2. We've approached the problem wrong.
I contend that the second possibility is in fact what has happened. As we progress through the series, it will become clear that the "Art" I referred to in the title is the ability to pick the right path for problem resolution. In part two of this series, we will examine the question: how can machines learn?





Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks

The Next Generation of American Leadership - JSA


A worthy cause indeed - invest in the future and support the next generation of American leadership, right here in Ohio...

Junior State of America - Ohio River Valley Scholarship fund

Saturday, September 6, 2014

How to Create an Enterprise Data Strategy

I work as an IT Architect. One of the more interesting things I get asked to do on occasion is to create strategies for particular technology areas. This represents the "high-level" side of the typical IT Architecture duties one tends to run into (if you're interested in more IT Architecture-related topics, check out my new blog here - The IT Architecture Journal). A very popular strategic focus across industry lately is Data Strategy. In this post, I will try to explain why it has gotten so popular and cover some of the fundamental aspects of actually producing one.

Data Strategy, Defined
Data Strategy is the collection of principles, decisions, and expectations, as well as specific goals and objectives, regarding how enterprise data and related data systems and services will be managed or enhanced over a specified future time period. As an actual artifact, a Data Strategy is usually manifested as a document, but it is not limited to that format.

A Data Strategy is usually conducted in conjunction with some IT Portfolio planning process. In optimal situations, portfolio decisions and follow-on data-related projects can be mapped directly back to goals and objectives in the Data Strategy.

Why Data Strategy is so popular
Organizations across the world have become more aware of the need for greater attention to data-related issues in recent years. Some of this has been driven by collaborative industry initiatives through groups like DAMA (the Data Management Association) and the resulting Data Management Body of Knowledge (DMBOK). Other drivers include the near-flood of new data technologies released over the past decade as well as the exponentially growing quantity of data out there.

So, what does having a Strategy actually give you?

What it often provides, if used properly, is both a set of shared expectations and a clear path for the actualization of those expectations. The Data Strategy allows organizations to deliberately decide how best to exploit their data and to commit to the major investments which might be necessary to support that. This, when contrasted with an ad hoc and decentralized technology evolution scenario, presents a much easier picture to grasp. It also at least implies a situation that will be easier to predict or otherwise manage. It is that promise of manageability that makes creating a Data Strategy so attractive.

Elements of a Typical Data Strategy
The following mindmap illustrates some of the common elements that you'll find in many data strategies. One noteworthy item in this diagram is the idea of sub-strategies (which can be split off into separate documents / artifacts) ...



The Top 7 Considerations for Data Strategy
While there are many more things to keep in mind, I've tried to distill some of the most important considerations for this post...

  1. The strategy should take into account all data associated with the enterprise. This may sound obvious, but in practice it isn't. Many organizations explicitly separate the management of dedicated data systems from other systems which may have data in them but aren't strictly DBMSs or reports, etc. For example, there may be state data in small data stores associated with a web-based application that supports an online form - the data structures supporting the completion of the form may be different from the ones which collect the completed form data. However, all data, in all applications, regardless of where it is located or how it is used, must be considered.  
  2. There generally needs to be an attempt to define an organizational 'lingua franca' - a common semantic understanding of data. There are many ways this might be achieved, but it is important that this be included within the strategic plan.
  3. The Strategy cannot be entirely generic, even if one of the most vital objectives is some type of industry-driven standardization. Wholly generic plans are usually less than helpful.
  4. The Data Strategy must be presented within a larger context. What this means is that there needs to be an expectation that the Strategy will indeed be the precursor to other activities which ought to be able to map back to it for traceability purposes. 
  5. The Data Strategy needs to have sufficient detail to be meaningful. If it is too high-level it becomes merely an elaborate Mission Statement. The expectation behind any Strategy or plan is that it be actionable. 
  6. The Data Strategy ought to be need- or capability-based - not product- or hype-focused. 
  7. There ought to be a way to measure success 'built into' the Data Strategy. This can come in the form of basic service level expectations, business outcomes, or both (a minimal sketch of how such measures might be captured follows this list). 
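To make considerations 4 and 7 a bit more tangible, here is a small, purely hypothetical sketch of how a strategy's goals, objectives, measures and portfolio traceability could be captured in machine-readable form alongside the document itself. The goal statements, targets and project names are invented for illustration; they are not drawn from any real strategy.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: Data Strategy goals, objectives, built-in measures and
# traceability to portfolio projects, expressed as simple data structures.

@dataclass
class Measure:
    name: str
    target: str                                   # a service level expectation or business outcome

@dataclass
class Objective:
    statement: str
    measures: List[Measure] = field(default_factory=list)
    mapped_projects: List[str] = field(default_factory=list)   # traceability to the IT portfolio

@dataclass
class Goal:
    statement: str
    objectives: List[Objective] = field(default_factory=list)

strategy_goals = [
    Goal(
        statement="Establish a shared semantic understanding of enterprise data",
        objectives=[
            Objective(
                statement="Publish and maintain an enterprise business glossary",
                measures=[Measure("Glossary coverage of core data domains",
                                  "at least 80% of domains covered within 18 months")],
                mapped_projects=["Metadata Repository Rollout"],
            )
        ],
    )
]

# Walk the structure: every measurable target traces back to a goal and a project.
for goal in strategy_goals:
    for objective in goal.objectives:
        for measure in objective.measures:
            print(f"{goal.statement} -> {objective.statement} -> {measure.target} "
                  f"(projects: {objective.mapped_projects})")
```

Even if you never formalize the strategy this way, thinking in these terms keeps the goals specific enough to be actionable and measurable.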

What goes into one Data Strategy versus another can differ radically from group to group. A social media company's needs will be quite different from the US Coast Guard's, for example - but both will likely need their own Data Strategy.


Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks

How IT Hype Inhibits Technology Innovation

There are many unique characteristics connected to the IT industry; some of them are mostly associated with pure-play IT vendors, but others seem to pervade almost all organizations which utilize IT in any significant manner. One of the strangest of these characteristics is the obsession with buzzword Hype. For most people this is symbolized by the development and dissemination of Gartner's famous Hype Cycle diagrams. However, the Hype obsession existed before the Hype Cycle ever did, and the diagrams alone don't explain the phenomenon in a satisfactory manner.

So, what's wrong with Hype? Isn't it just a harmless offshoot of public relations, or perhaps just some benign manifestation of mass psychology? Perhaps some of our readers here are old enough to remember the endless silly fads of the 1970s, which included Pet Rocks and Mood Rings. Did buying these worthless trinkets in any way negatively impact our later lives? Probably not. What these harmless trends did perhaps achieve, though, was a sort of conditioning that may have predisposed the vast majority of us to assign more attention to trends than they might otherwise merit.


Pet Rocks were definitely low tech - but represented a hype-generated trend nonetheless

Now, without diving into the implications of psychological theory regarding conditioning and behavior, it does seem as though much of what we see or talk about is driven by various Hype cycles. This occurs in entertainment, in business, in food & dining - in nearly every aspect of popular culture - and remarkably it also affects science and technology. What has represented Buzz in the scientific community in recent years? Well, how about Stem Cells, the God Particle and Fractals (or Fibonacci numbers), to name a few.

Getting back to Information Tech - how are we influenced by fads, trends and Buzzword Hype? Well, let's attempt a definition here first...

Buzzword Hype - This represents a unique form of public relations wherein somewhat complex concepts are crammed into a single buzzword (although buzzwords can technically include several words - Master Data Management has three, Big Data has two). While this phenomenon is not limited to IT, it is the most prevalent form of Hype in the technology arena.

So, if an acronym is a mnemonic for a complex term (MDM for Master Data Management), the term itself is a mnemonic for the complex concept that everyone already understands, right? Wait a minute, perhaps we've discovered the first problem; how many people actually understand these terms? Furthermore, how many people are actually concerned with learning these terms?

Let's home in on one of the biggest Buzzword Hype examples of the last few years - Big Data (we've touched upon this topic before in Technovation Talks). How many people actually have a comprehensive knowledge of what it represents, or even share the same expectations or knowledge about it? Is Big Data just Hadoop's distributed, fault-tolerant file system? Is it the lack of SQL and relational structure? Is it the high capacity or throughput, or is it some combination of these elements and much, much more? Even more importantly, is Big Data even something that can be neatly defined as one standard solution? All good questions, none of which are typically addressed in the core Hype surrounding the buzzword.

The buzzword hype for Big Data seems to imply more, better, faster, bigger with very little consideration as to how that might happen or what the eventual impacts would be. The term itself becomes its own justification - if everyone is talking about doing it and beginning to do it - why shouldn't we, right? And by the way, what is it again?

Let's step back a moment and try to classify what the core problems associated with the Hype Cycle are:

  • These buzzwords actually drive business decisions, regardless of the level of education or understanding associated with them.
  • There is an undercurrent of peer pressure that tends to 'force' people into making those decisions - decisions they aren't ready to make (because they haven't had time to evaluate the situation properly).
  • The hype tends to drown out most other types of discussions associated either with the technology in question or the real trends of what's happening across most enterprises. And I characterize these as 'real' because they represent common challenges which aren't necessarily product-driven (thus not good candidates for hype).
  • Premature adoption based on hype cycles often has the opposite effect on a particular technology area - it stifles it, as too much bad word-of-mouth feedback circulates and what might otherwise be a promising field of practice languishes as a result (E-learning is the best example of this I can think of).

How does all of this impact or otherwise inhibit Innovation? Well, here are some things to think about:

  • Some Hype Trends are not in fact very innovative, yet if everyone is doing it - then folks trying to introduce truly innovative techniques or products may be drowned out or suppressed. 
  • Most Hype Cycles tend to pose the key buzzword focus area as a "silver bullet" solution - and as those of us who have practiced IT for a while can attest, there are few if any actual silver bullet solutions. Similar to the Heisenberg Principle (and we're not referring to Breaking Bad here), the introduction of a new element impacts the existing elements in unanticipated ways (well, this isn't an exact analogy to the Heisenberg Principle, but it's close enough). The whole of IT might be viewed as "Magic Happens Here" by outsiders, but inside we know there is a constant struggle to impose order over chaos - silver bullets are often disruptive. Disruption can be good, but not if you don't understand its nature before you go whole hog (so to speak).
  • Hype & Buzzwords tend to make people think situations are simpler than they really are, and in some senses actually discourage the necessary analysis that should be occurring when adopting new technology. Innovation cannot be sustained when and where it becomes too expensive to manage.   


Will we ever escape the grip of unreasoning Hype in IT? Will our lives be forever ruled by an unrelenting cascade of product-focused buzzwords? Who knows - IT is still in its infancy, and unlike most other professions we have the opportunity to reinvent ourselves on an almost continual basis - so anything is possible.



Copyright 2014, Stephen Lahanas


#Semantech
#StephenLahanas
#TechnovationTalks