Wednesday, October 31, 2012

What is a Digital Customer Experience?

Digital Customer Experience is a term that I've heard discussed often over the past three years or so. It's different from how we used to think of User Experience or web Portals because the name itself implies multiple converged customer channels. What has happened over the past three years is that Mobile and Social Media have both become mainstream from an IT perspective, and have done so more or less simultaneously. Thus, it's no longer a question of the web on one's desktop - we've entered the Web of Things, some things smarter than others.

The challenge is how to manage this disruptive convergence while maintaining expectations for more traditional outlets and capabilities. Obviously, this question carries more weight for organizations with a strong customer-facing mission, such as Retail. However, the need for a Digital Customer Experience really spans most organizations.

The Web of Things is transforming our expectations for IT and Business


So let's take a stab at defining this. What is a Digital Customer (or User) Experience?
  1. It is the unified presentation of capabilities from an entity or organization to any potential audience of that entity or its capabilities.
  2. Unified presentation refers to the ability to coordinate offerings and message across a diverse set of provisioning platforms and dissemination channels.
  3. Digital Customer Experience implicitly encompasses not only coordination between offerings and messages but also some level of integration across provisioning platforms. (the integration challenge here is even more complicated than the one IT is still working to solve from older paradigms).
  4. A Digital Customer Experience is by nature, interactive. This doesn't mean that all elements of the experience are interactive, but some must be. (the most obvious candidates for interactivity right now are smartphones and social media but this can also include kiosks and other interactive displays and more)
  5. A Digital Customer Experience is also by nature, more user-driven and less organization-directed than previous paradigms.
So, if we accept even a portion of this extended definition, the implication is that Digital Customer Experience crosses every silo in IT: process, applications and services, data, hosting infrastructure and more. It also requires an even closer relationship with the business, and in many cases will lead to business leadership becoming "experience" developers themselves.
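As a concrete illustration of point 2, here is a minimal sketch (with entirely hypothetical channel names and offerings) of what "unified presentation" can mean in practice: a single, centrally managed offering rendered differently per channel rather than maintained separately for each.

```python
# Minimal sketch of "unified presentation": one offering definition,
# many channel-specific renderings. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Offering:
    sku: str
    headline: str
    detail: str

def render(offering: Offering, channel: str) -> str:
    """Adapt a single, centrally managed offering to a delivery channel."""
    if channel == "mobile":
        return f"{offering.headline} ({offering.sku})"      # terse card
    if channel == "web":
        return f"{offering.headline}\n{offering.detail}"     # full page
    if channel == "kiosk":
        return offering.headline.upper()                     # signage
    raise ValueError(f"unknown channel: {channel}")

promo = Offering("SKU-042", "Fall Sale: 20% off",
                 "Applies to all in-store and online orders through October 31.")
for ch in ("mobile", "web", "kiosk"):
    print(ch, "->", render(promo, ch))
```

The design point is simply that the message lives in one place and the channels adapt it, which is the coordination called out in the definition above.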



Copyright 2012  - Technovation Talks, Semantech Inc.

Tuesday, October 30, 2012

The Intelligent Healthcare Manifesto

The United States is poised for a revolutionary change in how Healthcare is perceived, how it is managed and how it serves us, both as patients and as a society. For many, the only solution to better Healthcare lies entirely in massive amounts of new spending; however, a reexamination of the processes underlying Healthcare suggests that money and technology alone can be wasted if not properly orchestrated. The true potential for the next generation of Healthcare provision in this country requires a paradigm shift in perceptions: a recognition of several fundamental truths that, when taken together, will redefine the practice of medicine in our lifetime.
Truth 1 - Technology has always played a pivotal role in medicine across the ages, yet today technology is much more than an enabler - it is a transformative force. Technology is becoming intelligent.

This is not the same as Artificial Intelligence (AI), in the sense that the technology thinks for itself or for us. Rather, the intelligence we're talking about here refers to the ability to combine and contrast information, infuse that data within the day-to-day practice of medicine, and view medicine as a continuum rather than a series of potentially unrelated events or incidents.

Truth 2 – Healthcare is a system of systems. The human body is an integrated mechanism rather than a set of disconnected parts and pieces to be managed in care “stovepipes.” Every element within any given hospital exists symbiotically with every other to form a single, working entity. And every hospital, doctor’s office and clinic forms a symbiotic relationship with every other throughout the nation to comprise our Healthcare infrastructure. Impacts to any portion of this delicate, tiered structure affect every other aspect of Healthcare provision.

Truth 3 – The number one challenge in medicine is now and has always been complexity. Complexity takes its form in the vast wealth of medical knowledge, techniques and tools available to us, as well as in the monumental amount of biological diversity present within each and every human body. Defeating complexity requires the ability to adapt rapidly to changing paradigms and situations – institutions too rooted in tradition often lose opportunities to excel and overcome previously insurmountable challenges.

Truth 4 – The time spent outside of the healthcare system is just as important, or perhaps even more important, than the time spent within it. Quality of care outcomes do not begin or end based upon treatment events – they are the result of a continuum of care that lasts a lifetime.

Truth 5 – When comprehensive Healthcare isn’t provided to all, all are placed at risk. This imbalance costs more than paying for everyone’s care would, as it throws the entire system into disarray and introduces an element of chaos that cannot be properly mitigated. All care may not be equal, but there are standards for adequate care that can be followed as a public health safeguard and that can and will improve our current situation.
The potential that is nearly upon us is staggering - we are close to being able to position all medical knowledge at any practitioner's fingertips and provide it in a way that allows for efficient contextualization, per patient and per scenario. Intelligent Healthcare is a responsive, interactive process - a dynamic collaboration between caregivers from across the globe and between individual caregivers and patients.




Intelligent Healthcare is Patient-Centric

Intelligent Healthcare recognizes that healthcare quality and the ability to de-conflict knowledge are intricately intertwined. As the nation stands ready to reexamine its Healthcare policies, we have an opportunity to extend that exploration beyond considerations of access, affordability and insurance coverage options to a core examination of the processes of Healthcare itself.

We don’t need more money, more equipment, more prescription drugs or more genetic testing per se; what we need first is the ability to correlate the capabilities and knowledge we’ve already attained, and a way to provide generic care options to all – doing that would provide us with a phenomenal increase in efficiency and quality of care in the very near future.


Copyright 2012  - Technovation Talks, Semantech Inc.

How to Create Enterprise Learning Environments

Much of the learning that people do during their lifetimes occurs in the context of informal, job-related discovery or mentoring situations. What we learn on the job determines how well we do in our careers regardless of what prior formal education we may have received. Yet the workplace is notorious for its lack of support for this informal learning environment. The enterprise sits atop a wealth of unstructured data and informal processes, but so far no one has been able to harness these to take their organizations to the next level of productivity and efficiency. This represents the next great challenge for Information Technology—and learning will be at the forefront of the solution.

Each day, in every organization, knowledge is lost—lost because most organizational knowledge is retained not in our systems or learning materials but in the minds of those who work in the organization. Human knowledge capital is an informal resource and by nature unstructured data. Today the only way to reach that information is through a translation layer that abstracts or generalizes such experience to produce a very small subset of learning. For every 1,000 such experiences we’re lucky to capture one, and of course not all knowledge captured in this manner is ever shared. The cost of going through this translation layer is extremely high; in fact, the cost of education of all types is greatly outpacing the rate of inflation in this country and most others.

Some years ago, close to the dawn of the Internet age, many researchers, engineers and educators recognized a variety of immediate educational opportunities presented by emerging technologies. It seemed as though the challenges of education cost and knowledge exploitation were about to be solved. The benefits seemed obvious: greater access to information, combination of learning materials in novel ways, and the ability to extend the classroom into every home, just to name a few. Yet as Internet technology has matured and new capabilities have expanded within it, the anticipated Learning Revolution has failed to materialize. The conceptual framework for exploiting this new infrastructure simply never developed; this missing framework involves both philosophical and pragmatic considerations. The framework in many ways already exists if we choose to acknowledge it, and it can be referred to as the Enterprise Learning Environment. This environment comprises the sum of any number of cultural, organizational or personal-level learning ecosystems or perspectives. So how do we exploit these learning environments that already exist?

The Enterprise Learning Environment is an ecosystem of related capabilities

First let’s explore the concept of what a Learning Environment is. A learning environment is any space, virtual or actual, that facilitates the discovery and assimilation of knowledge. This is a broad definition; it applies equally to a library, the Internet, television, a school or a corporate intranet portal. If the environment allows individual learners the ability to search for, or otherwise discover information and assimilate it within their own personal context, it constitutes a learning environment. Within that simple definition lies a revolution in our approach to learning. Once we acknowledge that learning can and does occur in a multiplicity of environments rather than merely within formal learning approaches, then we can begin to devise lower-cost solutions towards mining human capital and we can also turn every organization into a continuous learning medium. This does not mean that current formal learning strategies need to be replaced; it does however open the door to a much larger set of learning opportunities – and precisely the type of learning opportunities that most people tend to find useful in their daily lives or careers. 

So what is a Learning Opportunity? A learning opportunity is a potential learning experience for any given learner. Before the advent of recent technological breakthroughs it would have been impossible to consider capturing the knowledge capital of all, or even a significant percentage, of the members of any typical organization. If those members did not write papers or books, or did not have a team of course developers tap them as subject matter experts, their knowledge would walk out the door with them as soon as they left the organization. If we multiply the learning environments that exist within the larger framework of a connected Global Learning Environment, then the sum of potential learning experiences may eventually grow to the millions or hundreds of millions or more.

Now we must consider the philosophical impact of these suggestions, as this will be critical to developing technical solutions. The implication here is that there are different levels of learning experiences available. There are two primary categories: formal and informal. Within the formal category are the courses we tend to be familiar with, built using some instructional design approach. Formal learning tends to be characterized by its emphasis on outcome-based assessment. The cost of providing formal learning experiences is well documented, and the per-hour creation and delivery expense is relatively high. This leads to three core problems:
  •  Fewer people can afford to gain access to these learning experiences
  •  Fewer learning experiences are captured, providing incomplete pictures of most topics
  •  Organizational knowledge capital is seldom if ever harnessed to provide learning experiences
These problems are not just an issue for any one learner or organization; they are society-wide issues. As education becomes more expensive, we lose operational efficiencies, competitive advantages and productivity. This in fact may be the single most costly factor any organization faces today. Recent surveys have shown that a typical employee spends as much as 20 hours per week looking through various unstructured data sources to find the information necessary to do their job. Few employees are provided with educational opportunities in today’s workplace.

A Pragmatic Solution
So how does an organization begin to exploit the unique connected learning environments that have arisen within the context of its IT infrastructure? It merely involves viewing that infrastructure in a new light and applying a practical methodology. The methodology could be referred to as learner-centric continuous learning. Continuous learning rests on several basic assumptions:
  • That people don’t stop learning when school is finished
  • That improving skills and expanding knowledge makes people more effective workers
  • That an organization, a culture or a society can learn from its experiences just as an individual can; continuous learning lends itself to continuous improvement.
Learner-centricity is a concept that has been around for some time but in the context of how we might exploit learning environments, the key elements of this approach include:
  • The ability for learners to select and aggregate their own learning experiences and build their own learning paths.
  • The ability for learners to rate learning experiences and thus contribute to the evolution of the content.
  • The ability for learners to build learning experiences from separate content elements (which can ultimately be linked through recommendations and / or thematic or scenario-based relationships; this should not occur through application-level integration, or if it does, any such development should follow generic standards rather than employing or creating new standards specific to “learning” – this opens up development to a much wider range of potential content providers).
  • The ability to easily mesh these learning experiences within collaborative learning communities
  • The ability to quickly and easily capture one’s own knowledge-base as learning experiences that can be shared with others.
An Enterprise Learning Environment must support several key technical capabilities:
  • The ability to support universal discovery; this implies search access to all content types that may be aggregated together to form a learning experience. In order to facilitate this capability, an IT infrastructure would need some content management approach and, more than likely, the ability to archive and compress that content while still retaining text-level search functionality.
  • A converged approach towards content in general. Separating out other forms of data discovery from learning content makes little sense; workers require a wide variety of resources to solve problems and accomplish their tasks, providing them all under one framework makes it more likely that they’ll find what they need and will help organizations adopt a continuous learning process.
  • The ability to rapidly build learning content from simple templates covering multiple presentation and media formats.
  • The ability for the content to ‘connect’ the learner to the expert or experts who produced it through embedded collaborative technologies (in most cases this would take the form of links to relevant wikis, commons, newsgroups or other similar features).
All of these capabilities already exist in one form or another in most IT infrastructures and are supported by well-established technical standards. The difference in this approach is that systems considered separate stovepipes today are treated as a single view into the enterprise, and that learning as a process is integrated within it.
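To make the first two capabilities a bit more concrete, here is a minimal sketch of "universal discovery" over converged content. The sources, content types and matching logic are purely illustrative assumptions, not a reference implementation.

```python
# Minimal sketch of "universal discovery": one index over many content types,
# each result tagged with its source and type (all data here is illustrative).
from dataclasses import dataclass

@dataclass
class ContentItem:
    source: str      # e.g. "wiki", "email-archive", "course-catalog"
    kind: str        # e.g. "document", "discussion", "course"
    title: str
    body: str

INDEX = [
    ContentItem("wiki", "document", "VPN setup guide", "step by step vpn configuration"),
    ContentItem("email-archive", "discussion", "Re: router outage", "root cause was a bad vpn route"),
    ContentItem("course-catalog", "course", "Network Fundamentals", "routing, switching, vpn basics"),
]

def discover(query: str):
    """Return every item, regardless of type, whose text mentions the query."""
    q = query.lower()
    return [item for item in INDEX if q in item.title.lower() or q in item.body.lower()]

for hit in discover("vpn"):
    print(f"[{hit.source}/{hit.kind}] {hit.title}")
```

The point is not the toy matching logic but the converged view: formal courses, informal discussions and documents all surface through one query.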

The key with this approach is that learner-centric content development and discovery is not deterministic or outcome-based, but rather depends on providing a wider array of options and on the learners’ ability to find them, much like we use the Internet. What a learner finds in order to help accomplish their job, be it step-by-step instructions, diagrams, or an audio interview in which an expert describes some topic, is more likely to be assimilated because it is discovered and used in the context of real-world scenarios. In essence, what this amounts to is the convergence of all structured and unstructured data / content within a unified discovery framework. Within that converged environment, members of the organization can then input their knowledge using templates and simple taxonomies of organizational topics, which can be built into a portal structure, included as metadata or both. The idea of ordinary members of any organization contributing their knowledge as potential learning experiences is not very common and would likely represent the greatest hurdle to exploiting learning environments. Thus lessons-learned or knowledge-capture efforts would need to be encouraged and built into the routines of most employees. The added benefit of doing this is that knowledge is captured and shared on a continuous basis, without the need to scramble at the end of an employee’s tenure to obtain bits and pieces of a larger picture.

Perhaps the best way to understand how the Learning Environment might be exploited is to follow a typical use case for someone within an organization that has deployed a continuous learning paradigm. The user enters a central portal to gain access to all of their content, including documents, archived email, systems and collaboration. The data for all of these capabilities are stored centrally, allowing for optimized storage and retrieval across the entire user base. The user then enters the discovery section of the portal to search for information related to an issue they must resolve – in this case a network systems configuration troubleshooting situation. The unified search pulls results from newsgroups, knowledge-bases, documents and both formal and informal learning content, each designated as to source and data type. The user then selects the topics pertinent to the situation, aggregating and saving the results within a unique experience folder. The experience folder can then be connected to, or embedded in, hierarchies of other experience folders, which combined represent a learning path through a particular topic or set of related topics that needed to be addressed to resolve a real-life situation. The user can then add their own lessons learned and expertise through capture templates and make those, along with their learning paths, available to the entire organization for reuse.
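The "experience folder" and learning-path structure in that use case can be sketched as a simple nested data model; the class and field names below are hypothetical and only meant to show the idea.

```python
# Minimal sketch of the "experience folder" idea from the use case above:
# discovered items are saved into folders, folders nest into a learning path,
# and the user can attach their own lessons learned. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ExperienceFolder:
    topic: str
    items: list = field(default_factory=list)       # saved search results
    lessons: list = field(default_factory=list)     # user-contributed notes
    children: list = field(default_factory=list)    # nested folders

    def learning_path(self, depth: int = 0):
        """Walk the folder hierarchy as an ordered learning path."""
        yield depth, self.topic
        for child in self.children:
            yield from child.learning_path(depth + 1)

troubleshooting = ExperienceFolder("Network configuration troubleshooting")
troubleshooting.items.append("[wiki/document] VPN setup guide")
troubleshooting.lessons.append("Check the route table before reimaging the device.")
troubleshooting.children.append(ExperienceFolder("Router firmware quirks"))

for depth, topic in troubleshooting.learning_path():
    print("  " * depth + topic)
```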




Copyright 2012  - Technovation Talks, Semantech Inc.

Monday, October 29, 2012

The Trouble with "Big Data"


How can there be trouble with one of the two biggest trends in IT, you ask? Well, perhaps from a hype and marketing perspective there isn't any trouble yet. But from an expectations perspective, the trouble began nearly two years ago and has only gotten worse. And it starts with the name: it sounds simple, but is it?
Can you define "Big Data," and if so, would your definition match an industry-standard expectation?

What is Big Data, anyway? Well, this is where the trouble begins - it means something different to a fairly diverse set of interests. For some, Big Data implies use of a parallel processing paradigm (which BTW has been used for more than a decade in Data Warehousing as well), use of commodity hardware and a clever algorithm created by Google about a decade ago to help index the web. Much of this is now combined with the use of "Hadoop," although use of Hadoop doesn't always imply that companies will follow the same hardware path as some of the giants who pioneered the paradigm. In fact, more often than not the real market for commodity hardware is moving to the Cloud. But wait, aren't we talking about Big Data? What's the relationship between Big Data and the other biggest trend in IT today, Cloud Computing? Are they really two separate trends or variations of the same trend? The answer to that question is - who knows.

There are some other problems with Big Data; let's review them:
  1. It seems to encompass a wide range of emerging technologies, such as storage, parallel processing, cloud technology, high performance discovery, new DBMS paradigms and more. 
  2. The Use Cases for Big Data tend to blur into the same set of Use Cases for most enterprise data-related functions. This wasn't always the case - Google's original exploitation of its technology was fairly narrow and unique to its business model / mission. It still isn't entirely clear how smaller enterprises will harness the newer Big Data capabilities - that clarity is vital - especially in regard to how they integrate within the existing ecosystem.
  3. There is no universally accepted definition for what it represents, but just as important, there is no recommended solution approach or set of approaches or even a recommended solution methodology. The largest IT trade group dedicated to Data Management, DAMA, has barely scratched the surface as to how to integrate Big Data within the larger set of Data Management activities. Or should we assume that Big Data will somehow eventually swallow all of the rest of what we have been viewing as Data Management?
Let's step back in time for a moment. Back in early 1999, I attended a technology conference in Washington D.C. that was convened to assess emerging technology trends for the next decade and beyond. One of the most interesting discussions that occurred during the main panel revolved around a question on how much bandwidth or data would be utilized in coming years. Recall that in 1999, having a Terabyte of memory in a DBMS was a big deal and few if any had DSL-like speeds for Internet access. The majority of the panel did not see any explosive growth happening in the foreseeable future. I disagreed - I countered that the demand had already been pent up and that a torrent of Digital content and communication would explode as soon as hardware prices and bandwidth allowed. It's this exponential growth in data that the proponents of Big Data expound upon a lot these days (supposedly two-thirds of all data ever created was generated in the last two years).

Well, guess what - that exponential data growth was merely a drop in the bucket compared to what's coming. And if that is truly the case, then we have to ask ourselves what this really means. Sure, we needed more affordable hardware and more affordable software to handle volume; we needed better algorithms and architecture to handle performance. The thing is, though, we still haven't defined what this all means in relation to how we manage the enterprise. We still have, and continue to support, all sorts of legacy architectures and approaches - and now we've been handed a whole new set of challenges. But those challenges aren't just focused on bigger, cheaper, faster - we also have to deal with smarter, integrated and targeted. And we also have to become a bit more visionary when it comes to imagining what we can and should do with emerging worlds of data, and that may take us right back to another set of technologies that has been emerging over the past decade right alongside Big Data - Semantic Technology.
 
So, let's ask ourselves again:
  1. Is Big Data about handling larger volumes of data faster?
  2. Is Big Data about making Data Management more efficient / less expensive?
  3. Is Big Data about harnessing the Cloud and Storage?
  4. Is Big Data about expanding Data Discovery to cover ever-increasing sets of data?
  5. Is it all of the above and / or something else?
Perhaps it's time for some more, or better, definition. Without it, it will likely be difficult to understand what ROI you're shooting for or whether your organization is actually achieving it.


Copyright 2012, Semantech Inc. All Rights Reserved

Friday, October 26, 2012

Cyber Security & Threat Management

Threat Management is still a relatively new concept; there is no industry-standard definition for it. In fact, the few people who are talking about it right now tend to view it from at least two very different perspectives – one, a product-focused approach to unifying perimeter security tools, and two, a practice-focused management paradigm. As it evolves, Threat Management will eventually encompass both of those perspectives and will likely become the single most important element within any given Cyber Security solution.

The reason why it will become so critical is that Threat Management allows us for the first time to build upon a complex conceptual framework with a variety of analytical tools which will automate an ever-growing percentage of Cyber Security tasks. Without this framework it would remain difficult or nearly impossible to manage Cyber Security in a proactive and coordinated manner. For the purposes of this discussion let’s define Threat Management as:
“The conceptual and technical framework dedicated to discovering, defining and managing threats to operational security and mission assurance. Threat Management is software & hardware agnostic and can apply as an integrated IT practice in any functional domain. The goal of Threat Management is not merely to ensure that immediate (local domain) threats are mitigated but that threats are also managed in the context of communities of interdependent or inter-related entities. Threat Management depends upon top-down, bottom-up and lateral participation or guidance to build knowledge frameworks which can then be used to define security policy and solution mitigation.”
So, what is a “Threat” given this construct? A Threat is “any event, vulnerability or behavior (or combination thereof) that either poses a danger to the operational mission or if combined with other events, vulnerabilities or behavior could constitute a threat to the operational mission.”


The first step towards identifying threats is to define what threats actually represent

In that last sentence we begin to see the systems implications of what we’re talking about. The goals here are two-fold: one, block a threat before it is manifested; or two, stop a threat in motion that wasn’t blocked in time, in order to preserve operational capability. The other key consideration here is that we’re viewing this practice as evolutionary – it learns as it goes and learns from the community which uses it.

Threat Management and Semantic Technology
Much of what we’re describing with Threat Management already occurs in some fashion; however, it is not consistent from one enterprise to another, and in fact much of it is handled using manual processes with little ability to correlate or manage the various aspects of the problem in a unified way. To unify Threat Management we need a mechanism that allows us to characterize all aspects of threats and to correlate that characterization with information collected from the full spectrum of security-related software or hardware appliances.

Threat Management as we’re describing it here is wholly dependent on a Semantic Knowledge layer and data exchange architecture. This allows us to:
  • Provide non-proprietary data exchange approaches (for security-related data capture and analysis).
  • Characterize complex or aggregated data “patterns” using ontologies, RDF-based databases or related tools.
  • Provide a knowledge sharing framework for the community of defenders and security experts who analyze existing or predict future threats.
  • Build policies based upon Threat Activity and Threat Prediction – policies that can also be captured, manifested and distributed using Semantic technology.
  • Drive dynamic reconfiguration of H/W and S/W infrastructure in response to policy definition and distribution.
While there are Security vendors that have made incredible progress in being able to integrate some of these capabilities in the context of their proprietary tools, this approach ultimately will fail without the Semantic layer for one simple reason – the entire world is never going to standardize on one security tool. However, the Semantic Layer for Threat Management can extend to encompass any infrastructure or combination of security tools.
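As a rough illustration of what that Semantic layer could hold, the toy example below represents threat knowledge as subject-predicate-object triples (the way an RDF store would) and matches observed events against pattern definitions. The vocabulary and pattern data are invented for the sketch; a real implementation would use an actual ontology and triple store.

```python
# Toy stand-in for the semantic layer described above: threat knowledge kept as
# subject-predicate-object triples, queryable without vendor-specific tooling.
# All pattern and sensor data here is illustrative.
TRIPLES = {
    ("pattern:credential-stuffing", "hasIndicator", "event:failed-login-burst"),
    ("pattern:credential-stuffing", "hasIndicator", "event:new-geo-login"),
    ("pattern:credential-stuffing", "threatens", "mission:customer-portal"),
    ("event:failed-login-burst", "observedBy", "sensor:auth-gateway"),
}

def indicators_for(pattern: str) -> set:
    """All indicator events an attack pattern is defined by."""
    return {o for s, p, o in TRIPLES if s == pattern and p == "hasIndicator"}

def patterns_matching(observed: set) -> set:
    """Patterns whose indicators are fully covered by observed events."""
    patterns = {s for s, p, _ in TRIPLES if p == "hasIndicator"}
    return {pat for pat in patterns if indicators_for(pat) <= observed}

print(patterns_matching({"event:failed-login-burst", "event:new-geo-login"}))
```

Because the knowledge lives in a neutral triple form rather than inside any one product, the same pattern definitions can be shared across whatever mix of security tools an organization happens to run.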



Copyright 2012, Semantech Inc. All Rights Reserved

Thursday, October 25, 2012

PLM and the PMO

Program Lifecycle Management is the recognition that specialization is not the only or even the best answer to managing complexity. Oftentimes, an excessive focus on specialized areas of expertise merely adds to the level of complexity and confusion that typical PMOs face every day. The truth is that many if not most of the people who support PMOs need to be generalists to fully grasp the breadth of topics that they are expected to deal with. It is very difficult to get work done if a parade of experts is required to fulfill everyday tasks, and worse yet if that parade constantly changes as the industry rapidly evolves.

The key to PLM is understanding that the PMO runs on information. That information must be easily accessible, transportable and translatable, and must be available directly to the decision makers without going through layers of expert interpretation first. This doesn’t mean that other folks don’t add value to the information – there will always be a need for diverse skills in the PMO – but it does mean that the EVMS analyst is no longer the primary interpreter of financial data and that the requirements analyst is not the only person who can produce requirements reports. The reality is that no matter how many specializations are created, the core processes are still all related within specific contexts. Those contexts then allow us to provide a holistic view of what’s happening in the PMO and, more importantly, illustrate why it is happening.

The PMO is the entity charged with what Gartner describes as "Integrating Ecosystems."

So, What is a PMO?
A Program Management Office, or PMO, is an entity charged with management of one or more programs and a portfolio of systems, or perhaps specifically with the integration of those systems. The Enterprise PMO concept or title began appearing in print about five years ago, but despite the amount of time that's passed since then, the practice of Enterprise PMOs hasn't progressed much beyond the original PMO paradigms.

In other words, the true potential of the PMO has yet to be fully realized, but for a few exceptions. The obvious question is: why isn't this occurring more rapidly? Some might feel that the charter for an enterprise PMO is beyond the scope of what most PMOs are charged to accomplish. It might be considered dangerous or out of scope to try to plan for or manage relationships and interactions that occur around the PMO rather than within it.

The problem with this thinking though, is that nearly every IT focused PMO is now expected to integrate within the larger context of their enterprise. Even non-IT PMOs feel the pressure for increased oversight and accountability and all PMOs share one characteristic in common - complexity.

The complexity that must be managed in order to successfully execute a program is perhaps the single greatest challenge facing leadership today. The advantage with a PMO that is designed to be an enterprise PMO from the ground up is that complexity is tackled directly, with mitigation built into a set of fused processes.



Copyright 2012, Semantech Inc. All rights Reserved

Wednesday, October 24, 2012

Introducing Program Lifecycle Management

Lifecycle Management is more than a buzzword, it is the central organizing principle of all Information Technology (IT) effort. Understanding how to differentiate or coordinate Lifecycle Processes is perhaps the key challenge in IT today. Program Lifecycle Management (PLM) represents a deliberate attempt to reconcile and combine multiple Lifecycle Management tasks within a single, unified approach. 

The Problem
So why is PLM important, and why is it necessary? The problem that motivates PLM has been with us for decades and, despite many attempts, remains largely unresolved. IT projects are getting more complicated, not less – and this trend is accelerating, not decelerating. PLM directly addresses the root causes of this trend and has been developed to attack them in a comprehensive fashion. The root causes of this IT complexity syndrome include:
  • System / Service / Solution sophistication continues to increase as the pace of technological change accelerates (thereby driving new and more demanding expectations from end users and stakeholders).
  • Interoperability Expectations have increased exponentially (both within the enterprise and externally between enterprises and stakeholders).
  • Bandwidth and resource exploitation expectations have become much more complicated (this covers storage management, virtualization, security, wireless connectivity etc.)
  • The pent-up expectations for data exploitation are only now beginning to be realized (after 10 years of evolution, BI tools and data warehouse capabilities are finally cost effective and now integration with unstructured sources through ECM, collaboration or Web 2.0 has begun).
There are a number of different places where we could tackle this growing complexity, but one place stands out – at the beginning. The beginning of most things IT tends to be the office or group charged with making projects happen and funded to manage them. Whatever else we do elsewhere with the myriad of Lifecycle issues that exist in most enterprises, if we don’t tackle the management office first our efforts will likely be somewhat frustrated.

PLM is quite literally a Lifecycle of Lifecycles...

How Many PLMs are There?


Some of you might be familiar with the acronym “PLM” as representing Product Lifecycle Management. So how do these variations on PLM relate to one another? There are in fact five PLMs that are closely related:

  • Program Lifecycle Management
  • Portfolio Lifecycle Management
  • Project Lifecycle Management
  • Product Lifecycle Management
  • Process Lifecycle Management
These PLM variations can be viewed as a hierarchy within a single, unified enterprise context. More importantly, this unified context allows us to apply a common semantic foundation, which in turn allows us to coordinate all of the related data within a single PLM data repository. This is not a Master Data Management solution, although it does help greatly in establishing enterprise-wide MDM governance. Program Lifecycle Management supports active working processes and capabilities already familiar to practitioners of the five PLMs.
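A minimal sketch of what that hierarchy over a single PLM repository might look like, with invented program, portfolio, project, product and process entries, follows; it only illustrates the idea of one query surface across all five lifecycles.

```python
# Minimal sketch of the five PLMs as one hierarchy over a single repository,
# so program, portfolio, project, product and process items share one
# semantic foundation. Structure and names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class LifecycleItem:
    level: str                 # "program" | "portfolio" | "project" | "product" | "process"
    name: str
    phase: str                 # current lifecycle phase of this item
    children: list = field(default_factory=list)

repository = LifecycleItem("program", "Enterprise Modernization", "execution", [
    LifecycleItem("portfolio", "Customer Systems", "active", [
        LifecycleItem("project", "Portal Replatform", "design", [
            LifecycleItem("product", "Customer Portal v2", "development"),
            LifecycleItem("process", "Release Management", "operational"),
        ]),
    ]),
])

def report(item: LifecycleItem, depth: int = 0):
    """One query surface across all five lifecycles."""
    print("  " * depth + f"{item.level}: {item.name} [{item.phase}]")
    for child in item.children:
        report(child, depth + 1)

report(repository)
```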

There are a few organizations, IBM for example, who refer to something similar called Enterprise Lifecycle Management. However, the PLM described here is meant to serve a more specific purpose, namely the unification of Program Management Office (PMO) efforts. The PMO is ultimately responsible for all IT program success or failure, but there are some enterprise details or processes that fall somewhat outside its normal management scope. PLM as a unified practice has been designed to optimize a consolidation of a significant number of essentially related processes and capabilities. PLM does not, however, integrate all IT activities.


Copyright 2012, Semantech Inc. All rights Reserved 

Data Driven Cyber Security

Data without meaning has no value. Data that is interpreted too late to respond to a situation has only forensic value. For too many years, computer network security and information assurance practices have focused solely on forensic capabilities. Semantics is the science of applying meaning – to symbols, to language, to data and to events. If meaning can be mastered, it can then be portrayed effectively in analytical displays. The combination of Semantic definition of the Cyber landscape with innovative analytic engines provides us for the first time with the ability to link multiple communities together in a proactive unified Cyber response, in real-time.



Data is the glue that binds together our ability to perceive and mitigate Cyber Threats.

A Comprehensive Cyber Security Methodology requires Cyber Semantics & Analytic solution components - those components include the following core capabilities:
  • (Attack) Pattern Definition – The beginning of the Semantic foundation is the collection and / or predictive definition and provision of attack patterns.
  • Dynamic Threat Correlation – Attack elements are correlated against patterns in real time to help determine both the threat level and potential actions. This becomes a pattern-matching exercise – and, more importantly, one that occurs across multiple partner organizations (a minimal sketch of this matching follows the list).
  • Dynamic Incident / Event Collection – Provides the ability to collect and synthesize attack data as attacks are occurring (for use both in immediate remediation as well as later analysis and reconfiguration)
  • Cyber COP – COP stands for ‘Common Operating Picture.’ The ability to build this atop a Semantic foundation allows for dynamic and community views as well as comprehensive activity aggregation.
  • Cyber Enterprise Architecture (EA) – Enterprise Architecture is the blueprint for infrastructure environments as well as the software and analytics which are housed in those infrastructures. Our Cyber EA approach is built using the same focus on Semantics – allowing for coordination from the ground up.
  • Mission Intelligence or Reporting / Cyber Health Dashboards – One thing that has become abundantly clear over the past decade is that Cyber Security is a time-sensitive activity and that traditional security analytics are painfully slow. In order to get ahead of the curve, there must be automated alerts and warnings built into our Cyber oversight mechanisms. This Cyber Health Dashboard can exist within or separate from a Common Operating Picture. The Cyber Health Dashboard allows individual security managers to catch activity in real time and then coordinate within their larger communities through collaboration to reduce the impact of attacks.
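The sketch below illustrates the Dynamic Threat Correlation idea referenced above: observed events are scored against shared attack-pattern definitions and rolled up into a dashboard-style threat level. The patterns, events, weights and thresholds are all illustrative assumptions.

```python
# Minimal sketch of "Dynamic Threat Correlation": incoming incidents are matched
# against shared attack-pattern definitions and rolled up into a dashboard-style
# threat level. Patterns, events and thresholds are all illustrative.
ATTACK_PATTERNS = {
    "data-exfiltration": {"indicators": {"odd-hours-login", "bulk-download", "external-upload"}, "weight": 3},
    "port-scan-recon":   {"indicators": {"sequential-port-probe"}, "weight": 1},
}

def correlate(observed_events: set):
    """Score each pattern by how much of it the observed events cover."""
    findings = []
    for name, pattern in ATTACK_PATTERNS.items():
        hits = pattern["indicators"] & observed_events
        if hits:
            coverage = len(hits) / len(pattern["indicators"])
            findings.append((name, coverage, coverage * pattern["weight"]))
    return sorted(findings, key=lambda f: f[2], reverse=True)

observed = {"odd-hours-login", "bulk-download", "sequential-port-probe"}
for name, coverage, score in correlate(observed):
    level = "HIGH" if score >= 2 else "ELEVATED" if score >= 1 else "LOW"
    print(f"{name}: {coverage:.0%} of indicators seen -> {level}")
```

Because the pattern definitions are plain data rather than product configuration, the same correlation logic can run against input from any partner organization's sensors.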


Copyright 2012, Semantech Inc. All rights Reserved 

Tuesday, October 23, 2012

Communities of Innovation

In a previous post we mentioned a concept that we refer to as "Communities of Innovation." We'd like to explore that a little further today. We think this concept is important because it helps determine where, or whether, Innovation will occur - in other words, without the places in which Innovation tends to incubate, it may not occur at all. These communities are not a new concept per se; there have been communities of Innovation in most cultures across the span of history, from Medieval Florence to the Mayans.

The notion of "Community" has changed a bit over the past century and that change has accelerated at quantum speeds at the beginning of this century. "Community" now implies not merely regional connotations but also something that might be referred to as shared practice, shared concerns or shared interests. These collaborative communities span geographical boundaries and tend to center around unique groups of like individuals. The best example of this is witnessed in science; where initially several centuries ago, these communities began developing through personal correspondence and then those relationships became formalized through journals and "societies." Now of course, we have a mixture of relationships, societies and journals as well as physical centers of innovation and virtual collaborative communities which are significantly more dynamic thanks largely to recent advances in Internet technologies.

At one time, Dayton, Ohio (home of the Wright Brothers) was one of the world's preeminent Communities of Innovation.
So, given that background, let's try to define what a Community of Innovation is today:
A Community of Innovation is a network of formal and informal relationships based upon shared practice, vision or interests. It can be either physical or virtual in nature and more often than not in the 21st century is both. The key characteristics which all such communities exhibit are:
  • A long-term commitment to solving a specific set of problems / challenges.
  • A reliance on technology and innovation as the chief or primary mechanism for solving those problems.
  • An open-minded approach to viewing problems and potential solutions.
  • The desire to establish shared criteria, standards and methodologies for solving problems.
So today, when our leaders talk about investing in education or research and technology in order to foster American competitiveness and innovation, they are in fact discussing how such Communities of Innovation can be created or nurtured. The difficulty, of course, in viewing this within national boundaries is that as technology has evolved, the regional boundaries within these communities have become more fluid. There is still no substitute for meeting with people in person and collaborating within a narrow geographic region, but those are no longer prerequisites. This then requires us to become more creative in helping to define what Communities of Innovation will look like and how they will function in the future if we wish to have the ability to target innovation to specific national outcomes.


Copyright 2012, Semantech Inc. All rights Reserved 

What is Cyber Strategy?

Security has always been driven by Strategy, Operations and Tactics – Cyber Security is no different. Cyber Security encompasses “Cyberspace” not only from a traditional security management perspective but also through the emerging role of Cyber operations. Given the evolving nature of Cyber Security, the ability to address Cyber Strategy is the most logical place to begin providing comprehensive solutions.

A comprehensive approach to Cyber Security Strategy consists of the following core capabilities:
  • Application of an underlying, unifying Methodology – to any individual environment or across federated environments. The methodology must be designed to coordinate previously separate Cyber “stovepipes.”
  • Enterprise Cyber Lifecycle Management – The Methodology must include the ability to deploy and manage complex Security Lifecycle coordination both within and across organizations.
  • Enterprise Cyber Governance – The Methodology must also include program and project governance as well as the ability to govern dynamic technical solutions once deployed.

  • Cyber Doctrine – As the arsenal of Cyber capabilities continues to grow and the complexity of Cyber Security increases, many organizations will require guiding principles from which all other capabilities or actions will be derived. Doctrine differs from Strategy in that it is not dynamic in nature.

  • Cyber Operational Strategy – The day to day management of Cyber infrastructures is becoming ever more dynamic. The ability to coordinate cross domain operations requires clear definition of operational strategies.  Those strategies must also be dynamic and cannot be solely reactive in nature.

  • Cyber Operational Tactics – The key to ensuring a unified Cyber Security management solution is the ability to define dynamic Strategy & Tactics in context with one another – the tactics are more specific and subordinate to strategies which in turn are derived from doctrine. 

  • Predictive Strategy - This represents the ability to model and counter threats before they’re experienced.

Cyber Security requires Cyber Strategy...


Copyright 2012, Semantech Inc. All rights Reserved

The Cyber Security Challenge

Over the next several weeks we're going to introduce a number of concepts relating to Cyber Security. We'll begin today by explaining some of the things wrong with how security is often viewed today...

Network defense and management for the past two decades has focused primarily upon reactionary responses to security breaches or “exploits.” Determining whether an attack has occurred is a forensic rather than a proactive activity.

Continuation of a reactive defense paradigm allows our adversaries to enjoy a more or less permanent offensive advantage and leaves us vulnerable to novel attacks not previously experienced and accommodated within our current defensive structures. In other words, Situation Awareness without predictive and dynamic responsive capabilities will continue to leave us relatively unprepared for the scenarios we are likely to face in the near future. 




Cyber Security must be an integrated discipline in order to work...

Another facet of the problem relates to the nature of Network Defense and attack as a collaborative activity. Network attack is and already has been collaborative in nature for more than a decade; however most network defense implementations are still highly segmented. This also provides a significant advantage in information sharing and freedom of action to Cyber adversaries.

This becomes particularly important when we consider the relative complexity required to support federated defensive collaboration as opposed to the relative simplicity required to mount a coordinated, distributed attack. The natural advantage again resides with our adversaries. This advantage is both technical and economic in nature, which is why Cyber attack represents perhaps the lowest cost option for asymmetric operations (i.e. the relation of the cost of organizing an attack versus the potential cost of damage inflicted).

Over the past decade, Computer and Network defense has consisted of ever-increasing levels of perimeter controls and sensors as well as identification and sharing of specific exploit “signatures.” The exploits represent specific attacks at the OS, application or network level and their signatures are derived from incident histories. While this represented a major breakthrough when it was first introduced nearly a decade ago, the incident focused perspective of network defense may now be hurting us more than helping us prepare for current and future scenarios by obscuring a larger invisible threat.

An analogy helps to place the issue in context – “while an army has specific capabilities relating to its various weapon systems, training and logistics support elements; ultimately it is an intricate combination of all factors that eventually become synthesized into specific tactics and strategies.”

Incidents or exploits detected in network attacks are but individual elements within an arsenal of Cyber-weapons or capabilities and by themselves are not as meaningful as the manner in which they may be employed or orchestrated. Incidents are in fact part of larger “Event Patterns” which may in turn be part of Cyber tactics and strategies.
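As a simple illustration of incidents rolling up into larger "Event Patterns," the sketch below groups individually minor incidents from the same source that occur close together in time; the incident data and time window are invented for the example.

```python
# Minimal sketch of treating incidents as parts of a larger "event pattern":
# individually minor incidents from the same source, close together in time,
# are grouped into a single campaign for analysis. All data is illustrative.
from collections import defaultdict

incidents = [
    {"t": 100, "src": "203.0.113.7", "kind": "port-probe"},
    {"t": 160, "src": "203.0.113.7", "kind": "failed-login"},
    {"t": 190, "src": "203.0.113.7", "kind": "malformed-packet"},
    {"t": 500, "src": "198.51.100.2", "kind": "port-probe"},
]

def group_into_patterns(events, window=300):
    """Group incidents by source when they fall within one time window."""
    by_src = defaultdict(list)
    for e in sorted(events, key=lambda e: e["t"]):
        by_src[e["src"]].append(e)
    patterns = []
    for src, evts in by_src.items():
        current = [evts[0]]
        for e in evts[1:]:
            if e["t"] - current[-1]["t"] <= window:
                current.append(e)
            else:
                patterns.append((src, current))
                current = [e]
        patterns.append((src, current))
    return patterns

for src, evts in group_into_patterns(incidents):
    print(src, "->", [e["kind"] for e in evts])
```

None of the three probes from the first source is alarming on its own; viewed as one grouped pattern, they look like the opening moves of a coordinated attack.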


Copyright 2012, Semantech Inc. All rights Reserved 

Monday, October 22, 2012

Capability-Based SOA


Two of the most complex problems facing all SOA implementation projects tend to be:
  • The determination of what to do with existing capabilities.
  • The determination of how to define the specific nature of new or transformed capabilities.
Existing capabilities are usually systems but of course can include services or other code that could be readily included within some type of SOA development effort. We tend to refer to this as legacy capability, but the term legacy often has a negative connotation. Perhaps "realized" capabilities better describes the situation in that it reflects that these elements were at some point already successfully deployed and adopted.

The up front portion of most SOA engagements tends to be the most critical part...
Deciding upon the nature of the new capabilities is also extremely important. If one approaches the situation with the assumption that the development of standards-based services code will magically allow for a plug & play enterprise, they are likely to be fairly disappointed. Building service applications does not represent a solutions architecture - with the advent of SOA we must now be more cognizant than ever before of the various inter-dependencies and relationships between infrastructure, application logic and data architecture. This then implies that we need to consider and plan how all of these will fit together, or else we will leave our fate in the hands of unforgiving chaos.

The fact that all of this involves SOA doesn't change the fact that one must apply a number of basic system engineering techniques and principles to help manage these issues. The diagram below illustrates one possible approach; it breaks the process into three parts:

Part 1 - Capability Taxonomy
Understanding our domain starts here - this ensures that everyone is on the same page from day one; skipping this step (especially in a large enterprise) will place most transformations at risk from the get-go.

Part 2 - Capability Granularization
First of all, you need to decide what this is going to look like - are you going to manage like services together within some sort of module framework (i.e. statically determined compositions), or are you going to make more discrete services available - ones that can be combined by users at run-time or added to pages / portals? Or perhaps there will be a mixed approach - whatever the approach, there will need to be rules guiding how certain capabilities are going to be developed.

Part 3 - Capabilities to Systems Allocation
Then, systems analysis needs to occur to determine if existing capabilities (within systems etc.) can be parsed or transformed into the various new forms defined in the previous step. Only then will the eventual reuse and migration paths become clear.
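Pulling the three parts together, here is a minimal sketch of a capability taxonomy, a granularity decision per capability, and an allocation of capabilities to existing ("realized") systems; every name in it is hypothetical.

```python
# Minimal sketch of the three parts above: a small capability taxonomy,
# a granularity decision per capability, and an allocation of capabilities
# to existing ("realized") systems. All names are hypothetical.
CAPABILITY_TAXONOMY = {                  # Part 1: shared vocabulary of the domain
    "Customer Management": ["Account Lookup", "Address Change"],
    "Order Management":    ["Order Entry", "Order Status"],
}

GRANULARITY = {                          # Part 2: how each capability is exposed
    "Account Lookup": "discrete service (composable at run-time)",
    "Address Change": "discrete service (composable at run-time)",
    "Order Entry":    "module of statically composed services",
    "Order Status":   "discrete service (composable at run-time)",
}

REALIZED_SYSTEMS = {                     # Part 3: what legacy already provides
    "CRM-Legacy": {"Account Lookup", "Address Change"},
    "ERP-Core":   {"Order Entry"},
}

def allocation_report():
    """Show, per capability, its target form and its reuse/migration path."""
    for domain, capabilities in CAPABILITY_TAXONOMY.items():
        print(domain)
        for cap in capabilities:
            owners = [s for s, caps in REALIZED_SYSTEMS.items() if cap in caps]
            source = f"migrate from {', '.join(owners)}" if owners else "build new"
            print(f"  {cap}: {GRANULARITY[cap]} -- {source}")

allocation_report()
```

Even a toy allocation like this makes the reuse and migration paths explicit before any service code is written, which is the point of doing the up-front analysis.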


Copyright 2012, Semantech Inc. All rights Reserved 

Capability-Driven Acquisition

The term ‘Acquisition’ is almost a bit misleading. The primary image that comes to mind when one first hears the term is that acquisition represents the purchase of some tangible set of goods or services. Once upon a time, that may have been true, but today things aren’t so clear cut. Acquisition goes well beyond procurement, and in the federal realm it has become an incredibly complex set of rules, regulations and practices. Federal acquisition answers demands for specific oversight into the management and dissemination of tax dollars, but does it really work the way it was intended to? How can all of those rules, all of those folks watching, still fail to obtain what they intended to receive on our behalf? It happens all of the time…

The federal acquisition community is well aware of these issues and has been struggling for the past two decades with new approaches to solve the problem. A lot of positive developments have come about – pieces of the puzzle – yet the puzzle remains incomplete. The most significant advance in Federal Acquisition theory emerged about seven years ago: “Performance-Based Acquisition.” The best way to describe what that means is by reviewing its process elements or steps (taken from an executive summary):
  • Step 1 – Establish an integrated solutions team
  • Step 2 - Describe the problem that needs solving
  • Step 3 - Examine private-sector and public-sector solutions
  • Step 4 - Develop a performance work statement (PWS) or statement of objectives (SOO)
  • Step 5 – Decide how to measure and manage performance
  • Step 6 - Select the right contractor
  • Step 7 - Manage performance
So why did the federal government move in this direction, what was it doing before Performance-Based Contracting, and what do the seven steps mean?

Background
Government acquisition didn’t begin in the United States. Many ancient civilizations had highly complex bureaucracies: Egypt, China, the Roman Empire. All of these societies had governments charged with managing the public wealth, and they exercised that power to build roads, temples, ships and armies. And in every case, there was always the possibility for mismanagement and corruption. Some of this country’s most famous scandals involved misappropriation of public funds. Any time such an event has occurred it has led to reform initiatives, which, combined with the general evolution of procurement practices and technology, gradually built up to the acquisition laws and workforce we have in place now. But something else has been happening recently that has fundamentally changed the nature of acquisition – the advent of information technology.

Dealing with procurement of relatively unsophisticated materials or services on an ‘enterprise scale’ can be challenging enough in its own right; however, in the great majority of instances both those buying and those selling shared a significant advantage: the nature of what was being transacted was thoroughly understood. A hammer, an engine, a sack of potatoes all share a certain lack of ambiguity that is familiar and comforting. It is very easy to determine whether the hammer is faulty, whether an engine runs or whether the potato is fresh or rotten. We also have an easily identifiable cultural context for how much each of these should cost, so that when someone discovers that the government paid $150 for a hammer the discontinuity is obvious.

When the procurement world was largely concerned with this type of acquisition, the process of buying was focused mainly on materials management. The exception of course for the past 75 years or so has been research and development, which has always been considered a high-risk proposition. The focus on materials management translated into a contract-centric, bureaucratic approach to acquisition. There was an implied assumption that if all of the steps built into the process (designed to protect the government from fraud) were followed, all would be well – there was little need or incentive to actively oversee or otherwise examine procurements. This assumption began to break down during the Cold War as the United States began to increase the number and complexity of its R&D projects and began deploying more complex technologies. The early manifestations of the problem mostly involved the failure, delay or cost overruns of a variety of complex weapons systems. At that point, the failures were still the exception, not the rule.

Then, beginning in the mid-1970s, two things happened: first, those complex weapons systems became dependent on software, and second, the federal government became the largest systems development and integration entity in the world. In many ways, the commercial IT industry owes its very existence to the massive investments and fundamental breakthroughs provided by federal programs in the late sixties and throughout the 70s. The federal acquisition community was being hurtled from a post-Civil War paradigm to the space age in little more than a decade. Materials management is an objective discipline; it depends upon clearly defined, deterministic principles. Software, systems, integration and their related services, however, define their own realities in entirely new contexts – assessment of their value is nearly always an exercise in subjectivity.


The larger the Acquisition, generally the more complex and subjective it becomes...

Performance-Based Contracting
By the 1990s, it became apparent that the previous approaches to acquisition were no longer suited to the new environment. A series of reforms were undertaken by the Clinton administration to streamline processes, streamline the workforce and rethink the basic assumptions. Yet problems remained. And then, at the end of the decade, a new philosophy was posited, one designed to deal with the subjectivity of modern-day acquisition. When one examines the seven basic steps of Performance-Based Contracting, a number of realizations become apparent, including:
  • It is not materials focused
  • It represents a lifecycle process; in a sense it advocates that acquisition and development are part of the same lifecycle.
  • It forces the government to ask the question: what is it that we're really buying, and why? This may sound like an obvious question but, believe it or not, sometimes it is never properly answered or even asked.
  • It attempts to superimpose some level of objective measurement atop areas that are notorious for their subjectivity. The Performance Work Statement is the most obvious manifestation of that.
  • It acknowledges that from this point forward, the acquisition community must become active participants in the programs they fund – they are no longer limited to managing dollars, contracts or paperwork. Acquisition is now a vested partner in the process.
All of these developments associated with Performance-Based Contracting are positive and represent a move in the right direction. Unfortunately, though, we've only come halfway to where we need to be. For example, how do the other six steps ensure that we know we've selected the "right" contractor? Industry research may not be pertinent to the nature of the RFP you're presenting. Worse still, most source selection processes have become so top-heavy and burdensome that the only way an acquisition can deal with them is by arbitrarily limiting the number of participants, more or less ensuring that the same set of contractors competes for all of the business regardless of their previous performance (which is still not entirely clear due to problems with defining measures and political pressure).

Capability-Driven Acquisition
So what's the answer? How do we get from here to where we need to be? Like most things, the answer begins at a philosophical level, and all good philosophies begin with a question. Ours is this: if we are measuring performance, what exactly is it performance of? Are we measuring the ability to solve a problem (as perhaps hinted by step 2), or is the problem resolution designed to provide something else? Put another way, do we want to judge a contractor by their effort, by their knowledge at a point in time, or by what they produce for us? If they do produce something, will it be relevant or address what we actually requested?

That is quite a lot of questions, and quite a bit of subjectivity, if our primary focus is only performance, isn't it? The truth is that a performance-only paradigm will never work, precisely because measuring everything in that single dimension creates even greater uncertainty than exists today. There needs to be a stable foundation to build upon before we can tackle and manage the uncertainty and complexity associated with modern acquisition. That foundation is an understanding of the expectations surrounding anticipated capability. By focusing on capability we completely redirect the attention and efforts of the acquisition community and we give them hard targets for defining and managing performance.

Capability implies an outcome. Capability can apply equally to hardware, software, integration or services; all of these elements imply use that will lead to practical application. This means that all performance must necessarily be directed toward the delivery of capability. The capability becomes the primary criterion for acceptance – not knowledge, not effort, not pieces of work which cannot on their own accomplish a function, but only the resulting outcome-based test demonstrating a capability. So, for our earlier materials management example, the hammer would be stress-tested with however many blows are needed to show that it won't fall apart. For something more subjective, like software, the software must meet the exact capability and performance expectations specified in the requirements. If it must manage 247 data sources, totaling as many as 400 million records, with reports or queries returned in less than 2.5 seconds, then that capability must be demonstrated and validated.
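To make this concrete, here is a minimal sketch in Python of how a capability expectation with measurable acceptance thresholds might be recorded and checked against a demonstration. The class and field names are hypothetical illustrations, not a prescribed format; only the three thresholds come from the example above, and the demonstrated values are invented for the sake of the sketch.

```python
# A minimal sketch (hypothetical names and structure) of recording measurable
# acceptance thresholds for a capability and checking demonstrated results against them.
from dataclasses import dataclass

@dataclass
class CapabilityAcceptanceTest:
    """One measurable threshold a capability demonstration must meet."""
    name: str
    threshold: float
    higher_is_better: bool  # True: demonstrated value must be >= threshold

    def passes(self, demonstrated: float) -> bool:
        if self.higher_is_better:
            return demonstrated >= self.threshold
        return demonstrated <= self.threshold

# The thresholds from the example above: 247 data sources, 400M records, 2.5-second queries.
acceptance = [
    CapabilityAcceptanceTest("data sources managed", 247, higher_is_better=True),
    CapabilityAcceptanceTest("records managed", 400_000_000, higher_is_better=True),
    CapabilityAcceptanceTest("query response time (s)", 2.5, higher_is_better=False),
]

# Results reported from a capability demonstration (illustrative values only).
demonstrated = {
    "data sources managed": 251,
    "records managed": 412_000_000,
    "query response time (s)": 2.1,
}

capability_accepted = all(t.passes(demonstrated[t.name]) for t in acceptance)
print("Capability demonstrated:", capability_accepted)
```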

Now, this leads to some obvious questions. How does one know whether the program is working towards that effective capability or simply burning money? This is where performance measures are supposed to help, but again, if those measures only track project progress in its own context (i.e. milestones A and B are complete), what does that really tell us about capability? Does it solve our problem? What is a milestone, really: a marker for how many resources were expended, how much code was developed, or how much of the capability has been achieved? It has to be the latter to matter – and this is precisely the premise for most approaches to what is referred to as 'agile' development. For this to succeed, projects or programs cannot simply be divided across work breakdown units tracking progression across time and space. The project must determine up front how elements of capability can be subdivided and build the development structure around those elements so that each can be released and demonstrated in turn. Progress must be tangible to ensure project success and reduce the risk typically associated with complex endeavors.
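As one illustration of that idea, the sketch below (hypothetical names, simplified to a single flag per element) tracks progress as the share of capability elements actually demonstrated, rather than the share of milestones checked off.

```python
# A minimal sketch, under assumed names, of tracking progress as the fraction of
# capability elements demonstrated rather than milestones completed.
from dataclasses import dataclass

@dataclass
class CapabilityElement:
    name: str
    demonstrated: bool = False  # set True only after an outcome-based demonstration

elements = [
    CapabilityElement("ingest data sources"),
    CapabilityElement("query across sources", demonstrated=True),
    CapabilityElement("generate reports", demonstrated=True),
]

def capability_progress(items: list[CapabilityElement]) -> float:
    """Fraction of the promised capability that has been demonstrated so far."""
    return sum(e.demonstrated for e in items) / len(items)

print(f"Capability demonstrated: {capability_progress(elements):.0%}")
```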

So how does one define a capability, let alone subdivide it? Requirements, requirements, requirements. Requirements engineering is the most underestimated, undervalued activity in IT. Acquisition professionals have some idea of the importance of requirements engineering and management but still have not really grasped the most fundamental truth of acquisition – an acquisition is, and will only ever be, as effective as the requirements associated with it. And capability is expressed chiefly through requirements, both functional and technical. This implies that the most important stage of any project or acquisition process is the beginning. The requirements provide the basis for all performance measurement, for capability demonstration and for the entire scope of the acquisition.

If we were to develop seven steps for capability-driven acquisition, they might look like this (a rough sketch of how such a lifecycle might be represented follows the list):
  • Define a scenario in terms of the problem or challenge and the capability needed to resolve it
  • Validate the scenario
  • Develop an acquisition strategy and determine the required capabilities
  • Expand the scenario – build detailed requirements
  • Build a scenario-based RFP; let the contractors describe how they will solve the problem and how they will meet the desired capability test
  • Build performance measures and project oversight based upon the detailed requirements; continue to validate requirements and performance throughout the lifecycle
  • Continually demonstrate capability through the final iteration to achieve acceptance
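One way to picture these seven steps is as an ordered lifecycle in which an acquisition advances only when the current step has been validated. The sketch below is a hypothetical, simplified representation of that idea, not an official process model; all names are invented for illustration.

```python
# A minimal sketch (hypothetical structure) of the seven steps as an ordered lifecycle
# where an acquisition advances only once the current step has been validated.
from enum import Enum, auto

class AcquisitionStep(Enum):
    DEFINE_SCENARIO = auto()
    VALIDATE_SCENARIO = auto()
    DEVELOP_STRATEGY = auto()
    EXPAND_SCENARIO_REQUIREMENTS = auto()
    BUILD_SCENARIO_BASED_RFP = auto()
    BUILD_PERFORMANCE_MEASURES = auto()
    DEMONSTRATE_CAPABILITY = auto()

class CapabilityDrivenAcquisition:
    def __init__(self, name: str):
        self.name = name
        self.steps = list(AcquisitionStep)
        self.current = 0  # index of the step the acquisition is currently in

    def complete_current_step(self, validated: bool) -> None:
        """Advance only when the current step's outputs have been validated."""
        if not validated:
            raise ValueError(f"{self.steps[self.current].name} not validated; cannot advance")
        if self.current < len(self.steps) - 1:
            self.current += 1

acq = CapabilityDrivenAcquisition("case management modernization")
acq.complete_current_step(validated=True)
print("Now at step:", acq.steps[acq.current].name)
```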
Capability-Driven Acquisition removes much of the ambiguity surrounding the current approaches. It provides a paradigm designed specifically to cut through the layer of subjective confusion that plagues us today. Either the acquisition passes its demonstrations or it doesn't. A program that fails its capability demonstrations is ripe for cancellation or restructuring – it becomes obvious much earlier that it is headed for failure, so why wait until all of the money is spent? The application of Capability-Driven Acquisition will likely involve new tools, techniques and methodologies; however, if they logically flow from the core premise they should remain relatively straightforward. A first step might be the Capability Work Statement…


Copyright 2012, Semantech Inc. All rights Reserved 

Friday, October 19, 2012

Capability-Based Enterprise Architecture



Capability and performance: one logically precedes the other, and if either is missing they both become meaningless. There has been a tremendous emphasis in recent years on the ability to measure and track performance – this has led to many positive improvements in approaches to portfolio management, governance and accountability. However, when these efforts are discussed it feels as though we're only getting it half right; if we artificially separate the two most important aspects of any IT project, why should we be surprised if the outcomes still aren't predictable? IT projects, as compared with other business endeavors, tend to suffer from a much higher degree of unpredictability, and the risk of failure tends to increase in proportion to the relative complexity of the project. While there is no silver bullet to change this situation, improved outcomes are possible if we re-examine some of the core assumptions we tend to start with on any given IT project. There is also a set of actionable techniques to make this happen, something that could be referred to as Capability-Based Architecture. Why the emphasis on capability? Because it is our logical starting point…

Enterprise Architecture (EA) is a very powerful tool, one that has yet to realize its full potential. Too often it has been applied to parts of the overall IT oversight process but seldom, if ever, to all of it. That is precisely what needs to happen: enterprise architecture can and should become the central organizing mechanism for the management of IT capability, not simply a means to facilitate governance or design. What has been missing in this equation is some sort of methodological context that would allow EA to be used as a medium to both visualize all the associated concepts and track the pertinent data. The focus on capability management provides a suitable framework and blends very well with efforts already in place to manage performance. Let's take a closer look at several typical IT project assumptions and how employing a capability-based architecture methodology could change our perspective.

A DODAF view of how Capabilities can be represented by EA...
Assumption 1 – The RFP = Requirements
While projects differ (some are overwhelmingly dedicated to developing new, unfielded capabilities, others to the provision of services for existing capabilities), ultimately every IT project starts with a capability expectation. The problem with an RFP being requirements-focused is that, as requirements change, the core capabilities associated with them often change or become distorted – this is the dreaded 'expectation disconnect' that tends to wipe out many large-scale projects. Each and every RFP that goes out should contain a capability expectation description, and this should map directly into a capability contract. The main idea here is that specific requirements will roll up to the capabilities, but it is understood from the beginning that under no circumstances will the capability expectation change; requirements can remain flexible as long as they neither exceed nor fall short of the agreed-upon expectations.

To combat this assumption, each project can, as the very first step in an expanded EA process, model the set of solution capabilities it is seeking to achieve and then drill down to 'requirements'-level objects. At the early stages these requirements objects would likely be equivalent to business or functional requirements. Any capability has a matching set of functions which can be described using requirements objects; this can be visualized as well as verbalized to help ensure understanding of project scope and expectations. Depending on the extent of integration an organization has achieved with its EA process, all capability sets can be derived from business goals and objectives and mapped to projects and project elements. The visual roadmap can scale up through the enterprise or down to the lowest level of development.
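The sketch below illustrates the idea under assumed, simplified names: a capability object carries a fixed expectation and rolls up a changeable set of requirement objects. The identifiers and text are invented for illustration only.

```python
# A minimal sketch, with hypothetical names, of a capability object that rolls up
# requirements objects; the expectation stays fixed while requirements can evolve.
from dataclasses import dataclass, field

@dataclass
class RequirementObject:
    identifier: str
    text: str
    kind: str  # e.g. "business", "functional", "technical"

@dataclass
class Capability:
    name: str
    expectation: str                                   # fixed; agreed in the capability contract
    requirements: list[RequirementObject] = field(default_factory=list)

    def add_requirement(self, req: RequirementObject) -> None:
        # Requirements may change over time, but always roll up to this capability.
        self.requirements.append(req)

reporting = Capability(
    name="Consolidated Reporting",
    expectation="Users can generate reports spanning all enrolled data sources.",
)
reporting.add_requirement(
    RequirementObject("REQ-001", "Report output available as PDF and CSV", "functional")
)
print(reporting.name, "->", [r.identifier for r in reporting.requirements])
```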

Assumption 2 – It is difficult to determine performance before a solution has been fielded
If we follow the premise that all aspects of a project branch out from the initial capability expectations, then it stands to reason that capability expectations can drive the definition of performance expectations before project launch. The key, of course, is to explicitly link them in the RFP or statement of work. The reason most people find metrics definition and performance work statements difficult to build is that they lack the solid foundation that a capability map and contract would provide.

This assumption can be addressed through capability-based EA by creating Key Performance Indicators, Measures of Effectiveness or similar objects mapped to each capability. Each set of requirements under a particular capability object hierarchy would then have a starting point for more specific measures. This can later be extended as the project progresses, as long as it is clearly understood which measures are contractually binding and which aren't (i.e. additional tracking of project or capability health may lead to metrics not originally considered, but these would not be tied to service-level penalties if not met). Of course, this highlights the importance of determining most of this conceptual framework up front: whatever is not defined and made legally binding becomes a risk.
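Here is a minimal sketch of that idea, again with hypothetical names and illustrative values: each measure is attached to a capability and flagged as contractually binding or merely informational.

```python
# A minimal sketch (assumed, simplified structure) of performance measures attached to a
# capability, distinguishing contractually binding measures from informational ones.
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    name: str
    target: float
    unit: str
    contractually_binding: bool  # only binding measures carry service-level penalties

reporting_measures = [
    PerformanceMeasure("report generation time", 2.5, "seconds", contractually_binding=True),
    PerformanceMeasure("weekly active report users", 500, "users", contractually_binding=False),
]

binding = [m.name for m in reporting_measures if m.contractually_binding]
print("Binding measures for 'Consolidated Reporting':", binding)
```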

Assumption 3 – The Project and Capability are Separate
Many if not most IT decision makers tend to view the anticipated solution, or delivery of capability, as largely separate from the project which produces it. This seems logical enough, since capability usually only emerges towards the end of a project, at milestones sometimes referred to as Initial Operating Capability (IOC) and Full Operational Capability (FOC). This leads to several problems, starting with the inability to create a linkage from the larger set of enterprise goals and objectives to the project, which in reality is defined solely by a subset of capabilities mapped to related enterprise goals. The other major problem is that project management itself loses its capability-based context when schedules and work efforts are viewed apart from the context that makes them relevant. The project is the facilitating medium for the evolution or emergence of specific enterprise capabilities; as such, everything that occurs during the project itself necessarily impacts the outcome of those capabilities.

This basic premise has a number of common-sense implications:
  • If the project isolates itself from the leadership who determined the initial linkage of goals to objectives to capabilities, there are likely to be gaps in interpretation.
  • If the project isolates itself from the intended user community of the capabilities, it will have a difficult time mapping those capabilities to lower-level design requirements.
  • The project thus represents both a lifecycle and a communications continuum. The capability lifecycle begins at conceptual definition and continues through capability retirement – the development / deployment project becomes the primary means of linking all of the lifecycle elements and communities together.
  • All aspects of any IT project can be visualized or modeled using the same framework applied to the solution architecture (preferably within a larger enterprise context), and in fact this integration of project and solution is perhaps the most effective way to reduce uncertainty in complex IT projects. Existing technologies can merge data from tools such as MS Project or EA modeling tools, allowing for comprehensive tracking and analysis of capability evolution; a rough sketch of such a merge follows this list.
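The sketch below illustrates such a merge with made-up export data: task records from a scheduling tool are rolled up against the capability list held in an EA model, so progress can be read per capability rather than per task. The formats, field names and values are assumptions for illustration only, not the export formats of any particular tool.

```python
# A minimal sketch, using invented export data, of merging project-schedule records with
# EA capability objects so progress can be read per capability rather than per task.

# Hypothetical export from a project scheduling tool: task -> (capability, percent complete)
project_tasks = {
    "Build ingest service":    ("Consolidated Reporting", 100),
    "Design report templates": ("Consolidated Reporting", 40),
    "Stand up user portal":    ("Self-Service Access", 10),
}

# Hypothetical capability list exported from an EA model
ea_capabilities = ["Consolidated Reporting", "Self-Service Access"]

def progress_by_capability(tasks: dict, capabilities: list) -> dict:
    """Average task completion for every capability known to the EA model."""
    rollup = {}
    for cap in capabilities:
        completions = [pct for (c, pct) in tasks.values() if c == cap]
        rollup[cap] = sum(completions) / len(completions) if completions else 0.0
    return rollup

print(progress_by_capability(project_tasks, ea_capabilities))
```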

Assumption 4 – New Capability = New System
Too often in IT, simple conceptual roadblocks cause larger practical problems, some of which become so complex that it is difficult to untangle what went wrong. One conceptual roadblock that is particularly dangerous is the notion that any new capability requires the development, procurement or major modification of a system. This problem carries over into several of the popular paradigms used for EA modeling. If decision makers followed a capability-based approach, this roadblock could be bypassed in many cases. A capability does not distinguish between the various options which may allow for its actualization. In other words, new capabilities can be derived from existing technologies or systems which simply need to be repurposed. The extent or scope of the work involved in repurposing can vary widely; however, it is in almost every case smaller than the scope of work needed to generate a new capability from the ground up as a new system (be that as software or through integration of various system elements).

There are a number of examples of how this is emerging right now with web technologies:

  • Podcasting leverages the RSS standard and available media formats to create a new audio delivery capability.
  • Blogging is a variation of several web publishing technologies.
  • Wikis & Commons are variations on previous collaboration and content management web technologies.

How does one build in the flexibility to determine whether a capability should manifest itself as a new system or whether it can leverage existing organizational assets and technology? This can all be modeled as part of the capability-based architecture approach. Each capability can display its own set of alternatives (objects), which in turn can display hierarchies of detail per the needs of the organization, covering aspects such as risk, cost, complexity and assets available for reuse. In essence, an Analysis of Alternatives (AoA) should occur for every capability-based process. While most view an AoA as an obscure engineering exercise, it is in reality perhaps the single most important yet underutilized project management technique available to decision makers.
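The sketch below shows one simplified, hypothetical way such an analysis could be scored for a single capability. The attributes, weights and values are illustrative only, not a recommended scoring model.

```python
# A minimal sketch, with illustrative numbers only, of an Analysis of Alternatives for one
# capability, scoring each option on risk, cost, complexity and reuse of existing assets.
from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    risk: int        # 1 (low) .. 5 (high)
    cost: int        # 1 (low) .. 5 (high)
    complexity: int  # 1 (low) .. 5 (high)
    reuse: int       # 1 (little reuse) .. 5 (mostly existing assets)

    def score(self) -> int:
        # Lower risk/cost/complexity and higher reuse are better; weights are arbitrary here.
        return self.reuse * 2 - (self.risk + self.cost + self.complexity)

alternatives = [
    Alternative("Repurpose existing portal", risk=2, cost=2, complexity=2, reuse=5),
    Alternative("Build new system", risk=4, cost=5, complexity=4, reuse=1),
]

best = max(alternatives, key=lambda a: a.score())
print("Preferred alternative for this capability:", best.name)
```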

What has been described here doesn't, in most cases, require significant new investments in technology or expertise. Capability-based architecture and management is a conceptual framework for mitigating complexity – it accomplishes this by allowing a project of any size to keep a clear focus on the core elements which remain constant throughout a lifecycle that extends beyond the project itself. Once this approach is adopted, a 'capability roadmap' can emerge, linking all enterprise projects and all stakeholders.


Copyright 2012, Semantech Inc. All rights Reserved