Cyber Security Predictions for 2017

2016 was a big year in the annals of Cyber Security, and 2017 promises to eclipse it.

Creating an Enterprise Data Strategy

An introduction to the process of developing comprehensive strategies for enterprise data management and exploitation.

A Framework for Evolutionary Artificial Thought

Let’s start at the beginning – what does this or any such “Framework” buy us?

The Innovation Dilemma

What things actually promote or discourage innovation? We'll examine a few in this post...

Digital Transformation, Defined

Digital Transformation is a hot topic in IT and a big money maker for consultants - but what does it really mean?

Saturday, January 28, 2017

E-learning, Twenty Years Later

Once upon a time I was perched at the intersection of two career choices: 1) a path towards teaching, and 2) a pragmatic exploitation of technology skills that were becoming ever more popular in the workplace. This personal nexus point occurred about 20 years ago, in 1996. Anyone looking at my profile can guess which choice I made, but the story is a little more interesting than merely that of a techie who chose economic pragmatism over an academic career. Little did I know that my interest in teaching would lead me directly into the crucible of invention and innovation that has since changed the way almost everyone on the planet learns.
There are times when one realizes that he or she is in the middle of something unique, something historical. I have had other experiences with which to measure that sense of recognition – for instance in 1987, when I was staying with friends in Argentina and witnessed an entire nation rise up to demonstrate support for its democratically elected government against an attempted military coup – millions were marching in every city in the nation. The coup failed. That event was all the more significant given that Argentina had suffered under decades of intermittent military rule until just a few years before. I tried to document it in my own way at the time but have since lost the photos I took and the notes I wrote down – what stayed with me was the sense of what true change looks like.
Fast forward a few years and I found myself studying for a Masters in English Composition and Rhetoric with a concentration in TESOL (Teaching English to Speakers of Other Languages). In the intervening years, I had taught English abroad as well as computer-related courses and worked in web development, among other things. I had both a practical and theoretical grounding in various education concepts & practices, covering everything from Instructional Design to Chomsky’s Transformational Grammar. All of that was interesting, but not earth-shaking per se. However, what seemed obvious at the time was that I was witnessing almost the exact moment that Education as we knew it had reached a crossroads. In 1995 and 1996, E-learning had more or less just been coined as a term and the initial preview of what was to come began to emerge. Now, things like distance learning and CBTs had already been around for a while (I had in fact helped produce video courses at a community college TV station some years before), but the real game changer seemed to be the ubiquitous, global web platform & standards, with the first generation of browsers like Netscape and Mosaic. With the web came a slew of other new technologies, such as Learning Management Systems (LMSs), collaborative meeting rooms and the beginnings of Social Networking.
In 1996, I faced a tough choice. While I felt very strongly about the importance of Education and considered it my most probable career path, I had some serious misgivings. As I came close to completing my Masters, I decided the best course would be to try to combine my academic interests with my love of technology, and thus I proposed to build my Master’s thesis around emerging practice & methodology for extending E-learning into nearly every facet of traditional education. At that point, the decision was no longer in my hands – the academic committee at the graduate school I was attending rejected the proposal, and in fact any thesis associated with the topic of E-learning, on the grounds that the topic and field were not mature enough and thus unworthy of serving as the basis for a thesis. I tried in vain to convince them otherwise, pointing both to the already significant scholarship dedicated to the topic (even in 1996) and to what seemed to me the obvious conclusion that even if the field were somewhat immature, that was all the more reason to explore and evolve it. One thing was clear – it was the future, whether the world was ready for it or not. Unsuccessful in my attempts to persuade them, I left the degree program (I earned another Masters, in Information Design, later) and left teaching as a career to join the IT workforce. What’s more, I began looking for opportunities to continue my original intent of helping to define what E-learning might become.
Over the next two years, I had three different projects where I worked as an IT trainer and courseware developer. This gave me further grounding in instructional technique as well as a lot more exposure to key standards and technologies associated with E-learning. For example, my courseware projects involved creating courses on web application development – at precisely the same time that new web standards were being released by the W3C, including CSS, XML, XHTML and early forms of JavaScript and the DOM. Then I got a contract at Cisco Systems and headed out to Silicon Valley. Mountain View and San Jose were shockingly different from Dayton, Ohio. The culture was vibrant, the Terminator was running for Governor and Silicon Valley was in the midst of the biggest boom anyone could recall – it was the age of start-ups, IPOs and stock options, and paper millionaires (folks whose stock was valued in the millions due to the massive speculative tech bubble) were literally everywhere. It was also one of the most innovative times in American history; there was an expectation that not only were things going to change for everyone, but that the change would be defined in this valley.
Cisco was right in the middle of that milieu. When I arrived, the superstar CEO John Chambers had just made the announcement that E-learning was the next Internet Killer App. I even saw him walking around the giant Cisco campus a few times – I recognized him from the many magazine covers he had graced in the months beforehand. Cisco wanted to make good on Mr. Chambers’s promise and had assembled a large team of very talented people across several groups to help redefine education for the 21st Century. They tackled it from multiple perspectives, including the Cisco Academies as well as the Field E-learning Connection, a project that I became involved with. This Cisco team became a focal point for a wider group of companies, universities and institutes that began defining what next-generation E-learning solutions would look like. Like all great changes though, this one was born in the midst of some fascinating controversies, and I found myself deeply involved in them.
Essentially, there were two world views within the E-learning camp (the camp being an initial set of mostly self-proclaimed experts and tech evangelists). While the group agreed on the shared premise that E-learning would become pervasive across all aspects of society (e.g. fulfilling the promise of becoming the Internet’s Killer App), they disagreed about what it would or should look like. The first group, who were more influential at the time, had a decidedly more academic perspective and felt that Learning should adhere to fairly rigid standards and instructional design expectations. The second group (and I soon found myself within that camp) believed that E-learning was to some extent an outgrowth of the technologies that made it possible, and thus offered new opportunities to approach education in general that might prove more effective than the traditional doctrines (some of which actually date to Roman times).
This contest manifested itself in a number of ways over time, but one of the more specific examples of the battles that raged within it was the notion of Learning Objects and the delivery systems that would be used to serve them – Learning Management Systems (LMSs). The core E-learning industry was rallying around the LMS as the primary commercial Learning solution, and the Learning Object standard that began driving the industry was something called SCORM (which had its origins in ISD, CBTs and the DoD). The basic idea behind a Learning Object was solid enough and seemed to conform to other emerging standards like XML – a Learning Object represented a modular, self-contained portion of a larger set of objects which could be configured as needed into courses or curricula. However, this was around 2000, and the architecture behind the standard moved learning content development towards greater complexity, which in turn led to a higher cost per hour of content created.
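To make the Learning Object concept a bit more concrete, here is a minimal sketch in Python (all names invented for illustration – this is not the actual SCORM packaging format, which is based on an XML manifest) of modular, self-contained objects being configured into courses:

```python
# Hypothetical illustration of the Learning Object idea: modular,
# self-contained units configured as needed into courses or curricula.
from dataclasses import dataclass

@dataclass(frozen=True)
class LearningObject:
    """A self-contained unit of learning content."""
    object_id: str
    title: str
    duration_minutes: int

# A pool of reusable objects...
catalog = {
    "lo-1": LearningObject("lo-1", "HTML Basics", 30),
    "lo-2": LearningObject("lo-2", "CSS Selectors", 45),
    "lo-3": LearningObject("lo-3", "Intro to XML", 40),
}

def build_course(title, object_ids, catalog):
    """Configure a course from any subset of the object pool."""
    return {"title": title, "modules": [catalog[i] for i in object_ids]}

# ...configured into two different courses from the same pool.
web_course = build_course("Web Authoring 101", ["lo-1", "lo-2"], catalog)
markup_course = build_course("Markup Languages", ["lo-1", "lo-3"], catalog)
```

The same pool serves both courses – that modularity was the appealing part of the standard; the packaging and sequencing machinery built around it was where the complexity (and cost) crept in.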
Those of us on the other side of the argument certainly saw the value of SCORM-based systems and content, but we had a wider view of what learning content was and how it could be delivered and managed. For about two and a half years I continued participating in the debate; in the middle of it, the Tech Wreck happened and I left Silicon Valley and headed back to the Miami Valley. During these years I continued to fight for my vision of what E-learning could look like, through articles and an online community called Learning Leaders. For me it wasn’t just about business or winning a debate; the philosophical contest was very personal – to me, this seemingly technical question held within it a much larger question regarding the nature of all education.
In the early 2000s, after the tech wreck, the E-learning market nearly disappeared, just barely holding on. Then during the mid-2000s things started to pick up again as mobile technology, portals and social networking became more prevalent. It was also around this time that learning delivery systems became more flexible and began to be adopted on an institutional scale – then quickly, almost overnight it seemed, E-learning was everywhere. Every college, every tech school and even K-12 education began hosting a myriad of Learning Technologies. Business models began changing, and yet the big question was still not being addressed. The question hinted at in the dichotomy between rigid and flexible Learning Objects could be characterized this way when viewed in a larger context: should Education be flexible and learner-focused, or rigid and expert-driven? This is a big question – one that came up quite a lot in the recent 2016 Election, although perhaps many didn’t see the connection. Essentially, anyone who was making comparisons between the Finnish Education system and standardized assessments (a recent trend in the US) was tapping into the very same controversy. It’s a fascinating question, one that spans both personal motivation as well as the mechanisms for learning delivery (e.g. technology).
Over the past seven years in particular, the free market seems to have been moving more towards one side of this conflict than the other. Despite there still being a very rigid focus on standardized assessments in traditional environments, corporations and consumers who have been given the opportunity to choose informal or dynamic learning options over traditional instructional-design-driven offerings have overwhelmingly moved to take advantage of informal learning. You can see this right now by clicking on the Learning menu item at the top of the screen, or perhaps you’ve had a chance to experience the Khan Academy or any number of similar sites. It’s been a long time coming, but we’re finally getting close to the point where personal learning solutions with access to unlimited content and the ability to dynamically define one’s own courses and education paths will become ubiquitous. I think the conclusion to this story could be characterized this way – in E-learning, the medium has become the message, in that the medium has given us an excuse and a freedom to view Education in an entirely different way. I feel privileged to still be around to see how this field has evolved, moving ever closer to its true potential, and even more privileged to have been involved in helping to define what it might or ought to look like. And that’s one of the really cool things about working in IT: believe it or not, anyone anywhere is given the opportunity to contribute those sorts of ideas and innovations and, as a result, change the world, one step at a time. I think I made the right choice…
Copyright 2017, Stephen Lahanas

Friday, January 27, 2017

A New Framework for Knowledge Management

Several weeks ago I wrote two posts on the topic of Knowledge Management (KM):
  1. Whatever happened to Knowledge Management?
  2. Redefining Knowledge Management
This post represents the third in this series and tackles what a new framework for Knowledge Management might look like.
At the highest level, the Framework that I’m proposing for KM acknowledges several new technologies that weren’t readily available back when the notion of KM was first promoted within IT. The Framework also clearly acknowledges that KM is part of a larger ecosystem of related but separate technologies that cooperate to achieve a variety of Knowledge-related goals. Those goals could be characterized in the following manner:
  • The capture of insights on multiple levels, e.g. Individual, Organizational and Community. In this context, Community represents a community of practice (probably global in nature) but could also be a market of some sort.
  • The ability both to define knowledge expectations and to discover hidden knowledge.
  • Support the assimilation of source information within “levels of context” (e.g. personal, organizational and community).
  • The ability to capture and reuse Knowledge Relationships & Learning Paths (I’ll describe these in more detail in a bit). This particular goal goes to the heart of one of the original value propositions behind KM in the old days – the idea that knowledge capital ought to be captured in a fashion that allows it to be reused such that if individual knowledge holders were to leave the organization it wouldn’t suffer a true “Brain Drain.”
I wanted to briefly explain what I meant by “Learning Paths” and “Knowledge Relationships.” Learning Paths are more or less Dynamic Curricula – in other words, self-defining paths that traverse specific learning topics in the context of both individual and organizational learning. Let’s say you have access to 1,000 learning resources and choose to build your own learning program using, say, 20 of them to learn (for the sake of argument) Node.js. The chosen topics and their sequence can become a learning path which could be reused by other individuals or the organization as a whole. A Knowledge Relationship, on the other hand, is a little more complicated because it can be manifested in more than one way. A Knowledge Relationship could be as simple as terms combined through metadata or listed within a shared taxonomy, or as complex as a SQL query or relationships defined using RDF. Knowledge Relationships are an area where the initial promise of Semantic technology has fallen a bit short of expectations, but one that will likely continue to improve in the coming years.
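As a rough sketch (all names hypothetical), the two concepts might be modeled like this in Python – a Learning Path as a reusable, ordered selection of resources, and a Knowledge Relationship in its simplest form as subject-predicate-object triples:

```python
from dataclasses import dataclass, field

@dataclass
class LearningPath:
    """An ordered, reusable selection drawn from a larger resource pool."""
    name: str
    steps: list = field(default_factory=list)  # ordered resource ids

    def add_step(self, resource_id):
        self.steps.append(resource_id)
        return self  # allow chaining

# An individual builds a path through ~20 of 1,000 available resources
# (three shown here); the sequence itself becomes the reusable asset.
node_path = LearningPath("Intro to Node.js")
node_path.add_step("nodejs-basics").add_step("npm-packages").add_step("express-intro")

# The organization captures and reuses the path as-is.
shared_path = LearningPath(node_path.name, list(node_path.steps))

# A Knowledge Relationship at its simplest: terms linked through
# metadata, modeled here as RDF-style (subject, predicate, object) triples.
triples = {
    ("Node.js", "isA", "JavaScript runtime"),
    ("Express", "runsOn", "Node.js"),
}
```

Capturing the path as data, rather than leaving it in one person’s head, is exactly the “Brain Drain” protection described above.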
The most immediate realization when considering both the proposed framework and its potential goals is that there isn’t now, nor is there likely to be, one tool that accomplishes all of what we consider to be part of a larger KM process. Maybe someday this will change, but for the foreseeable future, establishing and taking advantage of KM within an organization will require a mix of tools (many of which are likely already in place). Here is a conceptual view of the proposed framework:

There are several principles associated with this proposed KM Framework:
  1. The idea that learning drives knowledge assimilation
  2. The idea that analytics drive discovery and that discovery also drives knowledge assimilation
  3. The idea that AI or AT (Artificial Thought) can make both Learning & Discovery more effective
  4. The idea that AI or AT can empower Search capabilities in new ways and that Search drives both Analytics and knowledge discovery
  5. The idea that knowledge can be layered up from the individual to the community level, thus supporting a variety of collaborative knowledge capture and assimilation capabilities  
  6. The idea that underlying all these knowledge layers can reside a defined identity – this can take the shape of shared semantics, business rules and more
  7. The idea that the source information (content, databases etc.) can be acted upon simultaneously from several related processes to produce meaningful insights which in turn can be captured and built upon
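Principle 5 – knowledge layered up from the individual to the community level – can be sketched as a simple roll-up. This is an illustration of the idea only (all names invented), not a reference to any particular KM product:

```python
def assimilate(captured):
    """Roll individual insights up through the knowledge layers.

    `captured` maps each contributor to a set of insights; the higher
    layers are progressively wider aggregations of the layer below.
    """
    organizational = set().union(*captured.values()) if captured else set()
    # A real community layer would span organizations; here we simply
    # tag the organizational pool with its wider context.
    community = {("community", insight) for insight in organizational}
    return {
        "individual": captured,
        "organizational": organizational,
        "community": community,
    }

layers = assimilate({
    "alice": {"CSS grid quirks", "LMS integration pitfalls"},
    "bob": {"LMS integration pitfalls", "RDF tooling gaps"},
})
# Overlapping individual insights collapse into three distinct
# organizational insights, ready to be shared one level up.
```

The point of the sketch is the direction of flow: capture happens at the individual level, and each wider layer assimilates (and de-duplicates) what the layer below produced.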
In many ways, I think this type of framework is actually preferable to dependence on a single class of tools, in that it has a certain flexibility more or less built in. As we’ve witnessed over the past decade, there have been not one but many disruptive new technologies which can be applied to Knowledge Management, including but not limited to Mobile technology, AI and Big Data. There are likely several more trends waiting in the wings that could enhance or otherwise contribute to KM in the future. This type of framework can easily accommodate any such advances.
I will write at least one more follow-up post on this theme and in that article will explore what a real-world next generation KM process and architecture might look like within a typical enterprise and how it might be exploited in a variety of real-world scenarios.
Copyright 2017, Stephen Lahanas

Monday, January 9, 2017

Politics will Never Be The Same: Revelations & Mysteries from the Intelligence Report on Russian Hacking

A declassified version of the combined Intelligence Report on Russian Hacking of the 2016 Election was released on Friday, just after the classified version of the briefing (Background to “Assessing Russian Activities and Intentions in Recent US Elections”: The Analytic Process and Cyber Incident Attribution) was presented to President-Elect Trump. This report appears to be the final deliverable associated with President Obama’s request, made several weeks after the election, to consolidate findings related to Russian hacking during the election. It follows the release about a week ago of a more detailed or specific report related to the larger hacking campaign: the Grizzly Steppe analysis. I reviewed the Grizzly Steppe findings last week and had hoped there would be a follow-up report; thankfully we’ve finally gotten it.
First, I’d like to commend the Obama administration and the Intelligence Community for doing what is overall a fairly decent job of tackling a complex and somewhat explosive issue. Perhaps the most immediate result of the release on Friday was the distinct change in tone coming from our President-Elect on the subject – while he is still downplaying the significance of the Russian interference with the 2016 election process, he is no longer denying that it occurred.
The final report itself includes a number of interesting revelations, but also still contains a few too many mysteries for my tastes and I’ll cover both of these perspectives in the remainder of this post. It is also important to acknowledge that there obviously must be more tangible evidence within the classified version of the report, but whether any of those elements may be declassified later remains to be seen.
The Revelations Include:
  • An acknowledgement that both major political parties were attacked, but that stolen information was leaked only from the Democratic Party hacks.
  • That there seems to be some level of disagreement between the FBI, CIA and NSA on conclusions regarding the intent of the attacks, although there is a general consensus.
  • The Intelligence Community has taken the time to give at least a general sense of how their analytic process works, including the all-important issue of how attack attribution is assigned / determined.
  • There was an acknowledgement that various election-related organizations were compromised, including boards of election, although the report emphasizes that no vote-tallying machines were involved. I’ll return to this point a bit later.
  • There is a strong focus on the multi-faceted and coordinated nature of the activities, which extended beyond hacking to include manipulation of public perception through 21st century variations of propaganda techniques utilizing a variety of technologies, including Social Media and cable networks such as RT (Russia Today).
  • The entire campaign was personally mandated and approved by Vladimir Putin, with the expectation on his part that Trump would likely be more favorable to Russian policy objectives than Clinton. When it was looking like Clinton would win, the Russians launched a messaging campaign suggesting the election would be rigged if Clinton won (they were also preparing a Twitter campaign called #DemocracyRIP for the night of the election).
  • The Russian General Staff Main Intelligence Directorate (GRU), used a persona called ‘Guccifer 2.0’ to leak DNC emails onto the Internet.
  • RT (Russia Today Television) also actively coordinated with WikiLeaks on the release of Democratic Party emails.
  • The 2016 Election activities represent a significant escalation of Russian Intelligence activities against the US, reaching levels not seen since the Cold War and perhaps surpassing them. This level of interference may be expected to become the new normal, especially given the success enjoyed in the US and similar outcomes elsewhere, such as Brexit, as Russia seeks to destabilize Western liberal democracies.
The Mysteries That Remain
While there were some interesting revelations, many questions remain unanswered; including but not limited to the following:
  • There is still no clear explanation of how the US government is safeguarding – or not – the election-related systems and processes across the country. I discussed this at some length in my post on voting integrity, but the bottom line from the latest report is that there really doesn’t seem to be any coherent national strategy for dealing with this. In the report, while the Intelligence Agencies acknowledge that some voting organizations were compromised, they also explain that the voting systems themselves weren’t. But that raises the question: how do they really know that? How many voting systems were audited either during or after the election? Nobody knows, and it wasn’t disclosed in the report.
  • Related to the above point, take a look at this excerpt from an article on the attempted recount in Michigan: “According to the newspaper, officials “couldn’t reconcile vote totals for 610 of 1,680 precincts” during last month’s countywide canvass of Election Day returns, adding that most are in Clinton stronghold Detroit, “where the number of ballots in precinct poll books did not match those of voting machine printout reports in 59 percent of precincts, 392 of 662.” Trump won Michigan by only 10,000 votes.
  • Friday’s Intelligence Report references the Department of Homeland Security in relation to the compromises or attacks on election boards, but doesn’t explain what part of DHS collected that information or who if anyone is actually responsible for safeguarding election systems. There is no discussion of the audit processes that may or may not be in place and what those audits are based upon. 
  • There is no discussion as to what an appropriate response could or ought to be – in other words, it’s an analytic assessment without any policy implications. This, in my opinion, is a big gap in what should have been included in the final report. Declassifying this information and initiating a dialog about what happened is of course valuable, but what are we to make of it? How does the United States intend to respond to these types of events in the future if North Korea, Iran, Russia, China or any other nation decides that it wants to manipulate our political or economic processes to its advantage? We surely can’t expect to just document it; there needs to be a discussion of appropriate levels of response – a policy discussion – and this is needed primarily so we don’t make radical decisions in haste.
In the title of this post, I’ve claimed that Politics will never be the same after the 2016 election – I firmly believe that. I think that this election has changed how politics will operate for decades to come, both here and abroad. I also don’t think that the US has fully assimilated the impact of what has really happened here yet. There will likely be countless papers, books and courses developed to explore the subject in greater depth over the coming years, but I also fear that the US has somehow lost the initiative in the midst of all of this. The idea that other nations or international actors have used technologies we’ve developed against us in this fashion is quite disturbing; all the more so given that this seems to be just the start of an emerging trend.
Here are the other election-related posts I’ve written since the election:
  1. How Technology Defined the 2016 Election
  2. Technology & the 2016 Election Part 2: Voting Integrity
  3. Technology & Election 2016 part 3 – The Failure of Data Science?
  4. Technology & The 2016 Election part 4: A New Age for Political Campaigning
  5. Technology & The 2016 Election part 5: Voter Beware
  6. The 5 Principles of Cyber Warfare
  7. What We Just Learned about Grizzly Steppe
  8. Cyber Security Predictions for 2017
Copyright 2017, Stephen Lahanas

Wednesday, January 4, 2017

Redefining Knowledge Management

Earlier this week I asked the question, “Whatever happened to Knowledge Management?” The bottom-line answer to that question is that perhaps the reason this once-emerging field within IT has seemingly disappeared is that it was never properly defined in the first place. This follow-up post is dedicated to examining the philosophical question more closely and trying to draw some clearer distinctions between Information, Data and Knowledge Management. This examination will in turn support another follow-up post, where I’ll attempt to characterize what a more succinct Knowledge Management framework might look like.

Before I start, I’ll try to give some rationale as to why this ought to matter. Just because Knowledge Management wasn’t properly defined before and has largely dropped off the map within IT doesn’t mean that it isn’t needed. There most definitely needs to be something that transcends the practice of Data Management – but before we go too deep here, let’s take another stab at defining some of the terms we’re referring to…

Information Management – This has been a catch-all term for quite some time now. Information Management could be viewed synonymously with Information Technology, and thus represents an aggregate of many otherwise unrelated IT technologies. There is no unifying expectation behind this term, nor is it necessarily even directed at data per se. The best way to describe Information Management might be as the top level of the taxonomy – the superset into which all other IT-related practice should fit.

Data Management (DM) – As I alluded to in the previous post, Data Management is much more focused on the operational aspects of a variety of directly and indirectly related technologies. The DMBOK (the Data Management Body of Knowledge) has taken pains to classify this more precisely through a fairly wide-ranging taxonomy, covering everything from Data Architecture & Governance to Metadata Management. Perhaps the one common denominator across all of these areas is the expectation that all of this capability is either resident on or associated with systems that house and manage data. This includes both structured and unstructured data. The DMBOK also addresses the lifecycle aspect of those capabilities, which is why it includes Data Architecture, Development and Governance.

If we look at these two definitions again, despite the likelihood of some overlap with Business Intelligence or Analytics, we might consider these terms bookends enclosing Knowledge Management, with Information Management being the top-level umbrella for all IT capability and Data Management being the foundation upon which KM capability is derived.

So that brings us back to the question: what makes Knowledge Management different from the lower-level Data Management capability categories? This of course requires us to consider the difference between information / data and knowledge. An analogy might help to illustrate the question better. One of my favorite history reads is Shelby Foote’s The Civil War: A Narrative. When writing this three-volume masterpiece, Foote spent years examining the complete archives of official records, North & South – hundreds or perhaps thousands of volumes. He synthesized all of that into a somewhat concise timeline (concise being a relative term here, in that each of the volumes is around 700 pages long) and was able to both educate us and tell an extremely complex story in what appears to be a very logical fashion. In this analogy, the historical archives represent data, collected from various sources during the war, and the resulting book utilized an analytic process to transform a large quantity of source information into a more compact set of Civil War Knowledge. By that analogy, I’m suggesting that KM is both a process and a product – one that rests atop another foundation. Knowledge Management then must necessarily be concerned, as a field or practice, with both the transformative creation of Knowledge (from source data or information) as well as its effective dissemination, application or exploitation.

Let’s now examine the key characteristics that Knowledge Management ought to consist of:
1.      The ability to aggregate source data and add value to it. As mentioned previously, this is a very near approximation to the practices of BI, Analytics and perhaps even Data Science. An open question here might be whether we’d consider a Data Warehouse or Data Lake a knowledge source or a data source. I tend to think of them as the latter.

2.      The ability to extend individual analysis to collective analysis and assimilation. This is an important consideration, stemming from the observation that Knowledge in general has always been considered more of a collective than an individual enterprise. While individuals may attain, create or otherwise possess knowledge, knowledge is not bound by those individuals – and indeed, if it is kept secret, to some extent it no longer qualifies as knowledge in the traditional sense of the term.

3.      The ability to collect information from many sources and not just aggregate it, but integrate it in meaningful ways. Data Integration is also included in the DMBOK, but again it may be construed to have a more operational focus in that context. For example, the focus in data management integration may be more about interoperability whereas with KM the focus is directed more towards answering complex questions.

4.      The ability to derive new meaning from existing source information. This can occur with Data Science and perhaps Analytics as well, and I tend to think that those practices fit better within the realm of KM than DM. Again, the distinction from DM here may be the expectations associated with the analysis – the idea being that each organization eventually develops its own unique perspective – both of itself and of its industry. That perspective is based on both experiential (a posteriori) and defined knowledge (a priori) synthesized from available sources and assimilated based on filters or needs unique to the organization.

5.      The ability to harness Artificial Thought or Intelligence to add further value and perspective to source information. This is where Knowledge Management merges with other topics I’ve been discussing recently – it is also the area of greatest potential for KM. Necessarily, this combination implies that AI or AT belongs within KM – perhaps as its most important part, since this is where we might expect the greatest value-add to occur. One sticky question this combination raises, though, is whether machine Thought or Intelligence can add enough value to turn data into Knowledge; my feeling is that the answer is a qualified yes – qualified because the expectations driving that transformation are still pretty much derived from humans.
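To make the aggregation-versus-integration distinction from points 1 and 3 concrete, here is a minimal Python sketch. The record formats, field names, and sources are hypothetical, invented purely for illustration:

```python
from collections import defaultdict

# Two independent sources describing the same customers (hypothetical data).
crm_records = [
    {"customer_id": 1, "name": "Acme Corp", "segment": "enterprise"},
    {"customer_id": 2, "name": "Globex", "segment": "smb"},
]
billing_records = [
    {"customer_id": 1, "invoices": [1200, 800]},
    {"customer_id": 2, "invoices": [150]},
]

def aggregate(*sources):
    """Aggregation: collect records side by side, adding no new meaning."""
    return [rec for source in sources for rec in source]

def integrate(crm, billing):
    """Integration: join sources on a shared key and derive a new,
    question-answering attribute (revenue per customer segment)."""
    revenue_by_id = {r["customer_id"]: sum(r["invoices"]) for r in billing}
    by_segment = defaultdict(int)
    for rec in crm:
        by_segment[rec["segment"]] += revenue_by_id.get(rec["customer_id"], 0)
    return dict(by_segment)

print(len(aggregate(crm_records, billing_records)))  # 4 records, no synthesis
print(integrate(crm_records, billing_records))  # {'enterprise': 2000, 'smb': 150}
```

The aggregate step merely co-locates the data; the integrate step answers a question neither source could answer alone, which is the kind of value-add KM is concerned with.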

Back in the old days of the early 2000s, there was a flurry of debate back and forth about the relative value and application of OLAP versus OLTP within the Analytics realm. That debate has largely subsided, but it provided us with a partial preview of the differences between DM and KM in general. If we were to distill those differences into a single thought, it might be this – Knowledge Management extends us beyond ordinary operational concerns and begins to imply some level of organizational awareness. As such, it clearly builds upon the lower-tiered architectures to do more – just what can be done has hardly been tested yet. Also in the early 2000s, there were early efforts in the military realm to create Common Operating Pictures or Data Fusion Centers. These have since evolved somewhat and become particularly important in the context of Cyber Security. Here too, Knowledge Management is a more fitting description of what we’re trying to achieve – in this case, the synthesis of potentially billions of data sources to discover hidden patterns. Some might think this is actually a discussion of Big Data – but it really isn’t. Big Data belongs squarely within the realm of Data Management, as it is merely another data management platform with little if any expectation as to how that data might be transformed into something else (despite all of the hype to the contrary).

We’ll wrap up this post in the series with an updated definition for Knowledge Management:

Knowledge Management (KM) is a field of practice within Information Management that builds upon a foundation provided by Data Management capabilities. KM adds value to source information in both a collective and subjective manner, helping to create unique organizational perspectives and insights through assimilation of source data into directed or targeted types of knowledge.

In my next post in this series, I will explore the architectural boundaries for the next generation of Knowledge Management capabilities.

Copyright 2017, Stephen Lahanas

Monday, January 2, 2017

Cyber Security Predictions for 2017

2016 was a big year in the annals of Cyber Security, and 2017 promises to eclipse it. While the drama of the election has largely subsided, the after-effect of the DNC attack and related Russian hacks is still building steam. President-elect Trump hinted this weekend that he has special information which he will share later this week – which, in all honesty, may grow the controversy rather than silence it. The election hacking was by no means the whole Cyber Security story for 2016, but it did highlight that the stakes for Cyber Security are slowly but steadily escalating. So what can we expect this year?
I’m going to make ten predictions for what may happen in 2017 in the field of Cyber Security. These prognostications are not made with any unusual knowledge, but rather through examination of previous trends and logical extrapolation of where those trends are likely to lead us.
  • Prediction 1 – The situation relating to Fake news will shortly lead to various types of Internet self-censorship. This may involve techniques such as badges for legitimate news outlets which will then trigger innovation on the part of attackers in attempts to mimic them.
  • Prediction 2 – Cyber-attacks will play a significant role in one or more world conflicts. We are getting close to the point where a Cyber Attack alone might tip the balance in some of these conflicts.
  • Prediction 3 – President-elect Trump will eventually get on the same page with the US Intelligence Community in regards to Cyber Security, but it will take a number of months and several high profile incidents to turn him around.
  • Prediction 4 – The cost of Cyber Crime will reach an all-time high; 2017 will mark the first year that Cyber Crime takes out one or more major financial institutions (causing damage to operations or reputation so severe that it forces closure).
  • Prediction 5 – Two-factor authentication will be more or less forced into becoming the primary way of logging into most online services.
  • Prediction 6 – Related to the above, password management as we know it (or have known it) will begin changing drastically. Applications will be provided that assign better passwords, and password management apps (vaults) will become much more common. Passwords have always been the weakest link in security.
  • Prediction 7 – 2017 will be the worst year yet for the hacking of personal data. The techniques for obtaining such data have grown ever more effective, yet most organizations still don’t know what sensitive data they possess. It’s a formula for disaster.
  • Prediction 8 – 2017 will likely see a greater degree of integration between Cyber and traditional military and intelligence forces and not just in the US. One area of particular concern will be Cyber vulnerabilities within various traditional warfighting technologies.
  • Prediction 9 – While there will be continued discussion regarding the after-effects of the Russian election hacks, there will be little if any effort this year to safeguard American voting systems or processes.
  • Prediction 10 – 2017 will likely become the year of our first Cyber Demonstrations. So, what is a Cyber Demonstration? Essentially, it is a demonstration of power – some sort of disruption that is likely accompanied by a political message. We haven’t had too many of these outside the context of WikiLeaks or the election. However, I have a feeling that this type of activity may become more common relatively soon.
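A quick aside on Prediction 5: the most common form of two-factor authentication today is the time-based one-time password (TOTP) standardized in RFC 6238, the six-digit code your phone’s authenticator app rotates every 30 seconds. Here is a minimal sketch using only the Python standard library; the secret shown at the bottom is the RFC’s published test key, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1).

    secret_b32 -- the shared secret, Base32-encoded (as in QR provisioning)
    at         -- Unix timestamp to compute the code for (default: now)
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII key "12345678901234567890",
# timestamp 59 seconds -> 8-digit code "94287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))
```

The server and the authenticator app each compute the same code from the shared secret and the current clock; a login succeeds when the submitted code matches within a small time window. The point for Prediction 5 is that this adds a second, constantly changing factor without requiring any new hardware.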
I don’t think any of these predictions are particularly surprising – but then again surprises are hard to predict, so we’ll have to check back at the end of the year and see what we missed…

Copyright 2017, Stephen Lahanas