
Friday, November 30, 2012

Intelligent Healthcare Scenarios

In yesterday's post, we introduced a number of core capabilities that would facilitate deployment of enterprise-wide Intelligent Healthcare solutions. In today's post, we're looking at the core problems and specific scenarios under which those capabilities might be applied.

Inter-operability is the over-arching problem set behind most Healthcare-related innovation

There are a number of what might be considered ‘problems’ associated with Healthcare-related data inter-operability. In this context, a “Problem Set” represents a meta-category of related aspects that together comprise all elements of a problem.

The initial problem sets for Healthcare Inter-operability are:

  • Document-centric, message-focused integration.
  • Conflicting (Healthcare-related) terminology (see the sketch following this list).
  • Conflicting standards and resulting implementations.
  • Exclusion of free text data in traditional EHR/EMR systems.
  • Lack of specific support for Healthcare practices / processes.
  • Security & Data Integrity of exchange transactions.
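
To make the conflicting-terminology problem set concrete, here is a minimal sketch of code normalization across providers. The source systems, local codes and mapping table are all hypothetical; the SNOMED CT concept identifiers are real but chosen only for illustration. A real deployment would call a terminology service rather than a hand-built dictionary:

```python
from typing import Optional

# Hypothetical local->standard code mappings; the source names and local
# codes are invented for illustration.
LOCAL_TO_STANDARD = {
    ("clinic_a", "MI"):     "22298006",  # SNOMED CT: myocardial infarction
    ("clinic_b", "HRTATK"): "22298006",  # same concept, different local code
    ("clinic_a", "HTN"):    "38341003",  # SNOMED CT: hypertensive disorder
}

def normalize(source: str, local_code: str) -> Optional[str]:
    """Translate a source-specific code to a shared standard code, if known."""
    return LOCAL_TO_STANDARD.get((source, local_code))

# Two providers recording the same diagnosis under different local codes
# resolve to one shared concept - the basis for cross-provider exchange.
assert normalize("clinic_a", "MI") == normalize("clinic_b", "HRTATK")
```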


  • S01 – Cross-Provider Diagnostics: Care scenarios involving more than one diagnostic participant.
  • S02 – Cross-Provider Treatment: Care scenarios involving more than one treatment provider.
  • S03 – Multi-Provider Care Management: Situations where care is spread across many different offices / organizations, some perhaps not affiliated.
  • S04 – Multi-National Incident Response: For epidemic, pandemic or bio-terrorism events.
  • S05 – Group Characteristic(s) Identification: Requires access to wide-ranging data sources.
  • S06 – Group Pattern(s) Identification: Requires analytical capabilities and access to a wide range of data sources.
  • S07 – Patient Pattern(s) Identification: Even at the patient level, data may be required from multiple caregivers and systems.
  • S08 – Lessons-Learned Capture: At the organizational, personal or regional level.
  • S09 – Lessons-Learned Dissemination: No real mechanism exists for this besides medical journals, which are proprietary, costly and restricted.
  • S10 – Intra-Organizational Integration: Enterprise-level integration within the caregiver organization today is costly and complicated.
  • S11 – Organizational Healthcare Reporting: Reporting requires access to all pertinent data sources.
  • S12 – Regional Healthcare Reporting: Regional reporting requires access to all pertinent data sources.
  • S13 – Cross-Organizational Research: Currently difficult due to access issues.
  • S14 – Communities of Practice Knowledge Sharing: Today this occurs (indirectly) through continuing education and conferences; there is no real-time feedback mechanism.
  • S15 – Trend Visualization & Statistical Analysis: Usually requires manual support to accomplish.





Healthcare Scenarios which may apply core IH capabilities



Copyright 2012  - Technovation Talks, Semantech Inc.

Thursday, November 29, 2012

Intelligent Healthcare Capabilities

We have introduced the concept of Intelligent Healthcare across several posts so far.

Over the next several months we're going to cover a number of related IH topics, and we thought it would be worthwhile to capture some of what we consider the most important IH-related enterprise-level solution capabilities.

Next Generation Healthcare IT innovation begins with vision...
There are two perspectives on Intelligent Healthcare (IH) capabilities. The first is based upon how IH can be exploited as a methodology to orchestrate the design and implementation of Healthcare IT solutions. The second is based upon how solutions developed using the IH methodology actually impact the practice of Healthcare. Both perspectives are vital in helping to understand and support the eventual realization of the benefits implicit in Intelligent Healthcare.

  • C1 – Patient-Focused Performance Engineering: Attributes that can be identified and included within any Healthcare solution design paradigm.
  • C2 – Agile Lifecycle Federation: The ability to define solution best practices and make those practices available to communities, either as recommendations or as solution compliance guidelines.
  • C3 – Solution Performance Metrics: Definition of systems performance expectations, with strategies for mitigating issues when or if they are realized (SLAs with engineering support).
  • C4 – Scenario-based Design Patterns: Definition of integration patterns at the data level, services level, infrastructure level and in combination.
  • C5 – Dynamic Collaboration: Web 2.0 morphing into 3.0 – using Social Publishing capabilities to replace other traditional software support and tie communities together at a more detailed level. This covers project management, requirements management, COI management, etc.
  • C6 – Community Process Management: The ability to collaboratively define, publish and share process-based knowledge and functionality.
  • C7 – Clinical Decision Support (CDS) Reference EA: Definition of follow-on architectures that can benefit from the Connect core – these reference architectures can help to further harmonize inter-operability initiatives.
  • C8 – Semantic Reference Architecture (plus management of Healthcare Ontologies): Illustrating prototypical architectures for extending data inter-operability using emerging technologies such as RDF databases.
  • C9 – Federated Enterprise Architecture: The ability to reconcile EAs across domains using a Wiki-based knowledge framework rather than proprietary EA tools.
  • C10 – Innovation Management: Development of innovation techniques, tools and processes designed to support integration of new technologies into existing initiatives. This capability gives IH the proactive perspective needed to anticipate issues years ahead of time.
  • C11 – Trend Identification: The ability to engineer trend management capabilities within both individual (local) and group (global) perspectives.
  • C12 – Automated Data Collection: Self-configuring interface negotiation and message transport.

All of these capabilities are designed to facilitate a new paradigm wherein medical information becomes dynamic and problem-solving becomes collaborative.


Copyright 2012  - Technovation Talks, Semantech Inc.

Wednesday, November 28, 2012

Exploring Governance Process - part 2


Part two of the Exploring Governance post focuses on prototypical process steps. As noted in the previous post, this example relates to an enterprise-wide SOA governance initiative. Governance processes can be applied to any number of areas or can be made available universally to all IT functions (e.g. Data, SOA, Security, SDLC/ALM). Each organization needs to determine for itself how lean or robust the process (and its steps) should be - those decisions will determine which of the steps (or variations thereof) listed below might be appropriate.

  1. Submission – submission can involve "ideation" but is usually handled as requirements management. The goal is to make this step as flexible and open-ended as possible, so that the process and the SARB can serve as a funnel for all the information needed to build an enterprise roadmap for services. Given that this process will be occurring before any automation is in place, at the very beginning of a SOA adoption initiative, it is important to provide paths for different types of information to come in: legacy capabilities without designs or formal requirements, functional requirements without designs, functional requirements with designs, and designs without functional requirements. All of this can be viewed as idea submission. Depending upon whether designs or requirements were included, the governance process would coordinate with related processes to make sure that the service submission met basic standards.

  2. Requirement Acknowledgement – a relatively straightforward step: the SARB simply provides formal acknowledgement that an idea or requirement has entered the services governance process.

  3. Capability Allocation – capability allocation, or classification, is somewhat more complicated. It involves mapping requirements to a specific set of capability definitions from the retail enterprise perspective. As noted previously, this requires a preliminary capability inventory as a prerequisite. The inventory will take the form of a taxonomy or ontology that can later be applied to a variety of process automation tools, including ALM tools, project portfolio management tools and configuration management software (including SOA CM).

  4. 1st Level Analysis / Review – this step involves coordination between review board members and designated technical staff to ensure that the idea or requirement submission meets basic technical standards. The deliverable is a checklist indicating whether those initial criteria have been met; based upon it, a determination is made as to whether the requirement satisfies the standards necessary for first-level approval. It is important to note that the governance process is not the design process, nor is it a replacement for the application lifecycle management process. Governance is meant to provide technical and business oversight and to ensure that enterprise standardization is occurring.

  5. Requirement Logging – a fairly straightforward step in which the CM point of contact records the requirement within the SOA COE wiki.

  6. Initial Approval – initial approval merely indicates that the requirement has been properly submitted and formatted and meets the basic criteria for moving forward in the process.

  7. 2nd Level Analysis – second-level analysis is the most complex part of the overall process. It requires coordination through a small group or integrated product team including board members, technical staff and business stakeholders. This is not meant to be a complex engineering endeavor, but rather an examination of whether the well-defined service requirements produced by the previous steps conflict or overlap with other requirements or existing services. In this sense we are dealing with enterprise- or systems-level reconciliation as opposed to validation of the service logic in its own context. The analysis involves technical review of service contracts, WSDLs and other service logic at the functional level, as well as comparison of business logic, rules and functional expectations.

  8. Recommendation & Architecture Update – the result of the second-level analysis is a consensus recommendation that is documented by the SARB and incorporated into the SOA architecture roadmap. The architecture board would likely maintain some sort of “to be” reference architecture incorporating all recommendations approved up to this stage.

  9. Handoff to ALM – any artifacts that reach the SARB recommendation level are provided as roadmap updates to the ALM process. No actual work occurs on them until management approval is obtained.

  10. Deployment Preparation – the CM point of contact then prepares the service information for potential deployment. This includes any standardization of, or modification to, WSDLs or service contracts.

  11. Final Approval – this step is provided to ensure that management retains control of the service release process and/or any follow-on work that may need to occur.
The PMO must govern a wide spectrum of IT processes and capability

Process Implementation Considerations
As noted previously, not all the roles defined in this process will necessarily be used. For example, the CM role may be filled by a member of the SARB. This process provides a general guideline and the templates for future governance processes, but in order to test its efficacy within the enterprise environment we would need to demonstrate it using actual requirements and working with the likely participants among the stakeholders. In order to engage in that sort of dry-run activity, the prerequisites would also need to be in place.
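
As a rough illustration, the eleven steps above can be modeled as a simple state machine. This is only a sketch under our own naming assumptions - the stage names are condensed from the steps above and are not part of the original process definition:

```python
from enum import Enum, auto

class Stage(Enum):
    # Hypothetical stage names condensed from the eleven steps above.
    SUBMITTED = auto()
    ACKNOWLEDGED = auto()
    CAPABILITY_ALLOCATED = auto()
    FIRST_LEVEL_REVIEWED = auto()
    LOGGED = auto()
    INITIALLY_APPROVED = auto()
    SECOND_LEVEL_REVIEWED = auto()
    RECOMMENDED = auto()
    HANDED_TO_ALM = auto()
    DEPLOYMENT_PREPARED = auto()
    FINALLY_APPROVED = auto()

# Stages advance strictly in order; a rejection sends the requirement
# back to SUBMITTED for rework (a simplifying assumption on our part).
ORDER = list(Stage)

def advance(current: Stage, approved: bool = True) -> Stage:
    """Move a service requirement one step through the governance pipeline."""
    if not approved:
        return Stage.SUBMITTED
    i = ORDER.index(current)
    return ORDER[min(i + 1, len(ORDER) - 1)]

req = Stage.SUBMITTED
while req is not Stage.FINALLY_APPROVED:
    req = advance(req)
    print(req.name)
```

The value of even a toy model like this is that it makes the gating explicit: nothing reaches ALM handoff or deployment preparation without passing both review stages.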


Copyright 2012  - Technovation Talks, Semantech Inc

Tuesday, November 27, 2012

Exploring Governance Process - part 1


Today's post presents a Governance Case Study; in this case the introduction of SOA Governance within an enterprise environment. 

The larger context for SOA Governance 

Process Context
The process depicted in the figure above represents one of several related processes. We've referred to it as the SOA Service Approval and Governance process; it is used to help document and provide oversight for SOA service development and management. The other processes it will become integrated with include a modified project and portfolio management process, an engineering design process and an application lifecycle management process. Each of those processes will have hooks into and out of the governance process. Both the governance and design-related processes will definitely involve collaborative small-group involvement (including collaboration between technical and business representatives). The eventual goal would be to further integrate the related processes and to provide automation for all of them.

Process Overview
The SOA Approval and Governance Process is, at its heart, both a requirements management process and a configuration control process. We've also included some elements of design review within the process in order to provide the most flexible approach with the fewest processes possible. The primary objective of this process is to help provide a documentation baseline of all potential service opportunities within the enterprise. The primary role within the process is that of the SARB - this could be translated as either Services Architecture Review Board or Software Architecture Review Board. At first the focus will be directed primarily at SOA services, but eventually the board can extend governance across all business logic. The other roles presented in this process include:

  • Stakeholder - stakeholders represent business analysts, management and possibly even customers.
  • Developer - developer represents any technical resource that would be involved in either constructing a service or utilizing a service as a system-level consumer. Developers and stakeholders are the primary participants in both design process activities as well as design review activities within the governance process.
  • CM - CM stands for configuration management and in this context would represent one person designated as the point of contact and responsible for documenting specific configuration information within the SOA Center of Excellence (COE) wiki.
  • Manager - this is self-explanatory and of course the management role is responsible for integration of governance with project portfolio management.
Prerequisites
There are several prerequisites for the governance process; these include a Center of Excellence wiki, a preliminary capability inventory, templates for the process deliverables, and extended guidelines (posted on the wiki) dividing functions between this process and the SARB.

Goals
There are several critical goals associated with this process:
  1. Standardization of requirements capture, documentation and management.
  2. Standardization of SOA services design through enforcement and oversight.
  3. The ability to reconcile requirements and designs across the retail enterprise.
Service Level Versus Enterprise Engineering
One of the key concepts associated with this particular process is the separation of design from design review, as well as the separation of discrete design from enterprise design. All of these considerations are managed through the first and second level analyses within the process. In other words, we are viewing the SOA designs at two different stages in the governance process – Stage I allows us to ensure that the granular service is constructed according to agreed-upon standards, while Stage II allows us to ensure that services fit within the enterprise in the most efficient way possible.

Part 2 of this post will examine the process steps...


Copyright 2012  - Technovation Talks, Semantech Inc

Sunday, November 25, 2012

Why Quantum Computing is a Big Deal

2012 will likely be remembered for two key events in the history of Information Technology:
  1. Watson defeating the all-time champions on Jeopardy.
  2. The realization of multiple Quantum technologies after decades of predictions, theoretical musings and research...
Of these two events, the latter is probably more profound, as information technologies like those used in Watson will eventually harness foundational capabilities provided by quantum chips and networks to exponentially increase their capacity for semi-intelligent reasoning (in their eventual pursuit of Natural Intelligence). But that's not the only way Quantum Computing – or Quantum Information Technology (QIT), as it's beginning to be called – will change our landscape. This post provides a brief overview of why all of this is important and introduces some of the key concepts. In later posts we will examine some of those concepts in greater detail.

Qubits can represent and process vastly more states than classical bits

Key Concepts (in Quantum Computing or IT):
  • Quantum Information Theory - Quantum information theory is the study of the achievable limits of information processing within quantum mechanics. Many different types of information can be accommodated within quantum mechanics, including classical information, coherent quantum information and entanglement. 
  • Quantum Information - In quantum mechanics, quantum information is physical information that is held in the "state" of a quantum system. The most popular unit of quantum information is the qubit, a two-level quantum system. However, unlike classical digital states (which are discrete), a two-state quantum system can actually be in a superposition of the two states at any given time.
  • Superposition - Superposition is the ability of a quantum system to be in multiple states at the same time – that is, something can be “here” and “there,” or “up” and “down,” at the same time (illustrated numerically in the sketch following this list).
  • Qubit - A qubit is a two-state quantum-mechanical system such as the polarization of a single photon: here the two states are vertical polarization and horizontal polarization. In a classical system, a bit would have to be in one state or the other, but quantum mechanics allows the qubit to be in a superposition of both states at the same time, a property which is fundamental to quantum computing.
  • Quantum Entanglement - Entanglement is an extremely strong correlation that exists between quantum (or larger) particles – so strong, in fact, that two or more quantum particles can be inextricably linked in perfect unison even when separated by great distances. The particles are so intrinsically connected that they can be said to “dance” in instantaneous, perfect unison, even when placed at opposite ends of the universe. This seemingly impossible connection inspired Einstein to describe entanglement as “spooky action at a distance.”
  • Quantum Decoherence - Decoherence can be viewed as the loss of information from a system into the environment since every system is loosely coupled with the energetic state of its surroundings. Decoherence occurs when a system interacts with its environment in a thermodynamically irreversible way. This prevents different elements in the quantum superposition of the system+environment's wavefunction from interfering with each other.
  • Quantum Teleportation - Quantum teleportation, or entanglement-assisted teleportation, is a process by which a qubit (the basic unit of quantum information) can be transmitted exactly from one location to another, without the qubit being transmitted through the intervening space. It is useful for quantum information processing. However, it does not immediately transmit classical information. Quantum teleportation is unrelated to the common term teleportation – it does not transport the system itself, and does not concern rearranging particles to copy the form of an object.
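
For readers who prefer code to prose, here is a minimal numerical sketch of two of the concepts above, superposition and entanglement, using the standard state-vector model in plain Python. No quantum hardware is involved; this is simulation only:

```python
import math
import random

SQRT2 = math.sqrt(2)

def measure(probabilities):
    """Sample an outcome index according to a probability distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probabilities):
        acc += p
        if r < acc:
            return i
    return len(probabilities) - 1

# Superposition: a single qubit in state (|0> + |1>)/sqrt(2).
qubit = [1 / SQRT2, 1 / SQRT2]          # amplitudes for outcomes 0 and 1
probs = [abs(a) ** 2 for a in qubit]    # Born rule: probability = |amplitude|^2
counts = [0, 0]
for _ in range(10_000):
    counts[measure(probs)] += 1
print("single qubit:", counts)          # roughly 50/50 - both outcomes "coexist"

# Entanglement: the Bell state (|00> + |11>)/sqrt(2).
bell = [1 / SQRT2, 0.0, 0.0, 1 / SQRT2]  # amplitudes over outcomes 00, 01, 10, 11
bell_probs = [abs(a) ** 2 for a in bell]
for _ in range(5):
    outcome = measure(bell_probs)
    a, b = divmod(outcome, 2)
    print(f"qubit A = {a}, qubit B = {b}")  # always equal: perfect correlation
```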
[Figure: quantum memory and other quantum milestones achieved this year]

There's a lot more going on than the milestones noted above, but what's really interesting is how many of these breakthroughs occurred this year. The implications of the breakthroughs occurring right now include the following:
  • The field of Cyber Security will have to be totally rethought (or significantly adjusted).
  • The nature of high speed communications will make an exponential leap relatively soon (from a variety of related quantum technologies including encryption).
  • Quantum computing devices are getting closer to reality. While the technology may not become immediately available to most users, quantum computers may become available much sooner through the Cloud.
One of the biggest controversies associated with the potential of Quantum Computing is the theoretical possibility of transmitting information instantly across entangled qubits at faster-than-light speeds. We will explore that and other related issues in Quantum information technology in coming posts.


Copyright 2012  - Technovation Talks, Semantech Inc

Thursday, November 22, 2012

Enterprise Integration Maturity Levels


In the previous posts, we defined the key concepts of Enterprise Transformation, Enterprise Standardization and SOA.

Today we're going to explore a related topic - Enterprise Integration. Integration is involved to some extent in all of those topics. We will first review some guiding principles for integration and then provide high-level descriptions of the integration maturity levels.


[Figure: Integration Principles]
Enterprise Integration Maturity Levels

Legacy Integration Level
This level represents organizations transforming from a silo maturity level. In these situations integration is tightly coupled and usually not comprehensive. Even so, the legacy integration maturity level generally reduces operational and maintenance costs while enhancing capability. These cost reductions are realized by eliminating redundant, laborious data entry and by reducing the batch cycles needed to transform and transfer data from one system to another. After this transition, data is available with reliable delivery and automated format conversion between the integrating systems. Moving from monolithic structured programs to modular code also promotes re-usability and reduces software maintenance complexity; more modular code is more readable, which in turn reduces maintenance time.

Componentized Level
Organizations transforming from an integrated maturity level to a componentized maturity level benefit by preparing themselves to expose business functionality at more granular levels. Re-usability also matures to the business function level, as compared to the application level. Enhancements and new functionality are achieved by re-factoring existing applications into smaller reusable components. The dis-aggregation of business logic itself helps reduce the complexity of systems and facilitates analysis of the impact of a componentized solution on new business models and business transformations. Componentization also helps the business reduce time to market and increases IT's responsiveness to business changes.

Shared Consolidated Infrastructure Level
This involves the deliberate unification / consolidation of Data Center capabilities including:

  • Shared Hosting
  • Virtualization 
  • Process Integration (ITIL)

Service Level
The transformation from a componentized maturity level to a service maturity level evolves the organization into a service provider. The service provider role can serve other organizations within the enterprise as well as external organizations. Business services now become reusable: this maturity level reduces the need to redevelop the same functionality for multiple systems by providing reusable business services called through a standardized interface, irrespective of the technology platform on which the application runs. These business services can also offer access to data in a controlled, timely manner, which reduces inconsistencies in the data across the systems that access and update it. The investment of effort in service identification, specification, development, testing and implementation is paid back when new systems require the same service from the providing organization, since the cost of infrastructure and maintenance of common functionality is reduced.

Composite Service Level
Organizations transforming from the Service maturity level to the Composite Service maturity level have structured their business and IT support so that new business processes may be rapidly constructed out of existing business services, providing new business functionality to different parts of the organization more efficiently. This maturity level is also associated with reduced time to market for new business models or capabilities. At this level, transformation is primarily a re-composition of the business services provided by different organizations within the enterprise or its value chain. A loose sketch of that re-composition idea follows.
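
The sketch below shows a new business process assembled purely from existing services. All service names here are hypothetical stubs we invented for illustration, not part of any real platform:

```python
# Hypothetical reusable business services, as exposed at the Service level.
def check_credit(customer_id: str) -> bool:
    return True  # stub: in practice, a call to the credit service

def reserve_stock(sku: str, qty: int) -> bool:
    return True  # stub: in practice, a call to the inventory service

def schedule_shipment(sku: str, qty: int, customer_id: str) -> str:
    return "SHIP-001"  # stub: in practice, a call to the logistics service

def place_order(customer_id: str, sku: str, qty: int) -> str:
    """A composite service: a new business process composed from
    existing services rather than newly written functionality."""
    if not check_credit(customer_id):
        raise ValueError("credit check failed")
    if not reserve_stock(sku, qty):
        raise ValueError("out of stock")
    return schedule_shipment(sku, qty, customer_id)

print(place_order("C42", "SKU-9", 2))
```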

Software as a Service (SaaS) Level (Cloud)
Software as a Service is the next logical maturity level. SaaS builds upon the shared (Cloud-based) infrastructure and shared enterprise capabilities developed in the previous maturity phases. SaaS allows organizations to provide capability as a commodity, which either enhances existing revenue-generating functions or creates new ones. In addition, systems may be reconfigured to achieve higher reliability without consumers having to modify their code. Software as a Service enables organizations to better align business requirements with IT capabilities by building robust services that are highly flexible, manageable and scalable.


Copyright 2012  - Technovation Talks, Semantech Inc.

Tuesday, November 20, 2012

Standardization versus Innovation

In yesterday's post, we discussed how the most common component of any Enterprise Transformation is usually a set of specific or tactical Standardization exercises. These exercises are meant to help:

  1. Reduce inefficiencies
  2. Drive down costs
  3. Enhance current capabilities

The third point here is key - most standardization efforts are directed at moving an organization towards commoditization of already well-understood, previously deployed capabilities. It just so happens that standardization efforts are also needed to facilitate migration to new capabilities (within the context of an Enterprise Transformation), so standardization plays a dual role in some cases. Oftentimes standardization occurs separately from any larger Transformation initiative - a good example might be the development of standard desktop images, or the move from traditional desktop management to virtual desktops. At first glance it may seem as though such Desktop Standardization efforts are in fact facilitating enterprise innovation, but this is not always the case.

Virtual Desktops represent progress, right?
Today's topic deals with the more or less constant tension between Standardization and Innovation that exists within most enterprises. We'll use the Desktop Image Management example as our case study. So let's step back for a minute and ask the obvious - why should Innovation and Standardization be at odds at all? The quick answer is this: IT moves through cycles, from expansion and distribution to periods of consolidation and centralization. This has been occurring in one form or another since the beginning of Information Technology as an industry. A good example of the cycle's current manifestation is the movement towards Cloud Computing as a way to facilitate centralized management (or the consolidation of the data centers needed to provision those Cloud solutions).

At the same time as Cloud solutions are making inroads, other more disruptive Mobile solutions are forcing a more distributive paradigm - although the hope is to link all of the new disruptive technologies through Cloud. The battle between "taking control" of the enterprise and "taking advantage" of emerging technologies never ends and it never will. The reason that this is the case is that each activity (standardization and innovation) has a somewhat different Use Case:

  1. Standardization is primarily focused on optimization of existing capabilities
  2. Innovation is primarily focused on exploitation of newer capabilities

As we pointed out earlier, there are ways to combine the two Use Cases together in the context of a larger Use Case (e.g. Transformation). There are other ways to do it as well, but we'll return to that in a minute.

So, let's examine the desktop case study again. How might we interpret the ability to centrally provision standard desktop images or do so using virtual desktop technology? Is the goal of standardization to:

  1. Reduce licensing costs (for desktop software)
  2. Simplify OS management, thus reducing Desktop management staff 
  3. Make it easier for the organization to build new applications and services and apply them to standard environments
or are the goals to...
  1. Extend use of nonstandard devices by being able to centrally manage all sorts of OSs and OS combinations
  2. Facilitate the development of systems that use more than the standard desktop or laptop hardware
  3. Help to open up new business models or mission opportunities for the organization by supporting a flexible set of platforms?
The first set of goals is connected to the Standardization Use Case, the second to the Innovation Use Case. So what happens if an organization chooses the first set and refuses to acknowledge the second? The result is often an innovation bottleneck, wherein only small amounts of innovative new capability can be absorbed into the enterprise - whether the larger organization wants it that way or not. And this is where it gets tricky: decisions made to optimize or commoditize areas of IT that are seen as needing to be super-efficient can severely restrict the ability of any part of the enterprise to innovate. Worse yet, these conflicts are generally not understood, or often not even recognized. The constraints and dependencies of a complex organization aren't generally what's communicated to senior leadership - CXO-level folks are generally more interested in hearing where savings have been achieved.

The problem, though, is that when the enterprise fails to recognize what's really going on, the inability to reconcile the two Use Cases more often than not leads to losses greater than the savings gained in achieving super-efficiency.



Copyright 2012  - Technovation Talks, Semantech Inc.

Monday, November 19, 2012

Understanding Enterprise Standardization

Just last week, we introduced the concept of Enterprise Transformation. Enterprise Standardization is often considered to be a part of Transformation, with special attention usually being given to data and shared service and infrastructure standardization. We can think of Standardization efforts as being the tactical elements of any given Strategy or (umbrella) Transformation initiative. Transformation is more EA focused, while Standardization must necessarily grapple with issues at the project level.


The ability to develop, harness and manage enterprise business capability begins with the ability to standardize the design process and with it the solution architecture. Standardization encompasses a number of different tactical areas including, but not limited to:

  • Business Process
  • The Data itself
  • Hardware / Infrastructure (this generally involves both process as well as data center consolidation)
  • Network Management
  • Application Lifecycle Management (ALM, this includes Agile)
  • Business Rules
  • Data Architecture
  • Security
  • Commercial Software (and COTS Lifecycle Management)

A shared services paradigm (based on SOA) usually traverses many of these areas and facilitates their integrated management within an Enterprise perspective. The idea here is that with all of the different design approaches available, there need to be mechanisms in place to integrate them. (If you don't or can't integrate design approaches, how can you hope to integrate the solutions developed from them?)

Enterprise Transformation generally requires multiple Standardization efforts
We’ve alluded to the need for greater standardization both in the context of architecture / design as well as in solution management in order to support a wider Transformation. Standardization is thus applicable at many levels. In particular, standardization provides a mechanism to help move from IT silos to unified Enterprise solutions. In that sense it becomes both a means and an end goal.
Specific decisions that lead to standardization include things such as:

  • Adoption of standard software platforms. 
  • Adoption of standard hardware platforms.
  • Adoption of technical standards and best practices.
  • Adoption of standard enterprise processes.
  • De-conflicting of redundant logic, rules and systems.

Ultimately, without standardization the management and integration of heterogeneous IT environments and business models becomes the most expensive component of the business and the one most likely to add operational risk. Thus, specific Standardization projects within a Transformation are likely to yield the highest return on investment - exactly which ones hold the most value is usually determined case by case, as every organization is somewhat unique.


Copyright 2012  - Technovation Talks, Semantech Inc.

Sunday, November 18, 2012

Key SOA Design Principles


SOA primarily refers to a set of architectural principles. These principles represent a design continuum that links software development to Cloud deployment and capability exploitation. To support those principles a variety of standards and products have been developed to facilitate implementation and maintenance of SOA solutions. These standards include but are not limited to the following:

  • SOAP - Simple Object Access Protocol
  • UDDI - Universal Description, Discovery and Integration
  • WSDL - Web Services Description Language
  • WS-Policy
  • WS-Security
  • WS-Addressing

The core SOA Use Cases are dependent upon its architectural principles

The design principles most often agreed to as representing SOA include the following (and yes some of these principles are inherited from Object Oriented Design or expand upon it):
  • Standardized Service Contract – Services adhere to a communications agreement, as defined collectively by one or more service-description documents.
  • Service Loose Coupling – Services maintain a relationship that minimizes dependencies and only requires that they maintain an awareness of each other.
  • Service Abstraction – Beyond descriptions in the service contract, services hide logic from the outside world.
  • Service Re-usability – Logic is divided into services with the intention of promoting reuse.
  • Service Autonomy – Services have control over the logic they encapsulate.
  • Service Statelessness – Services minimize resource consumption by deferring the management of state information when necessary.
  • Service Discoverability – Services are supplemented with communicative meta-data by which they can be effectively discovered and interpreted.
  • Service Composability – Services are effective composition participants, regardless of the size and complexity of the composition.
  • Service Encapsulation – Many services are consolidated for use under the SOA. Often such services were not planned to be under SOA.

The history, technical goals and expectations for SOA can be described thusly: “Service-oriented architecture (SOA) is an evolution of distributed computing based on the request/reply design paradigm for synchronous and asynchronous applications. An application's business logic or individual functions are modularized and presented as services for consumer/client applications. What's key to these services is their loosely coupled nature; i.e., the service interface is independent of the implementation. Application developers or system integrators can build applications by composing one or more services without knowing the services' underlying implementations.”
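
Several of the principles above - standardized contract, abstraction, loose coupling and statelessness - can be hinted at even in a few lines of ordinary code. The following is a hedged, language-level analogy rather than a real web service; the service names, rates and figures are invented for illustration:

```python
from abc import ABC, abstractmethod

class TaxService(ABC):
    """Standardized service contract: consumers code against this
    interface only (loose coupling + service abstraction)."""
    @abstractmethod
    def tax_due(self, amount: float, region: str) -> float: ...

class FlatRateTaxService(TaxService):
    """Stateless implementation: all inputs arrive with the request,
    so any instance can serve any call (statelessness, re-usability)."""
    RATES = {"US": 0.07, "EU": 0.20}  # hypothetical rates

    def tax_due(self, amount: float, region: str) -> float:
        return amount * self.RATES.get(region, 0.0)

# The consumer composes the service without knowing its implementation,
# so the implementation can be swapped without touching consumer code.
def invoice_total(service: TaxService, amount: float, region: str) -> float:
    return amount + service.tax_due(amount, region)

print(invoice_total(FlatRateTaxService(), 100.0, "EU"))  # 120.0
```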


Copyright 2012  - Technovation Talks, Semantech Inc.

Friday, November 16, 2012

How to Build a Cyber Security Consortium - part 1

by Stephen Lahanas

Several years ago, I was involved in an effort to build a Cyber Security Consortium. This initiative was focused primarily on the Federal marketplace but was not in any way limited to government-related missions or solutions. Our effort was one of several that sprang up at more or less the same time - most of these parallel efforts, like ours, were being driven from the Defense community. So this introduction raises a few questions:

  1. Why would someone want to create a Cyber Security Consortium?
  2. Why should something like that, if it were necessary, tend to be spear-headed by Defense contractors?
  3. What does all of this have to do with innovation and technology?
In response to the first question - Cyber Consortia are necessary because of the complexity of the problem space. When I began the project, I had not yet run across any organization that had developed a comprehensive Cyber Security methodology, and the notion of a single solution covering all aspects of security has been a non-starter for a long time. In fact, there were very few organizations that were even tracking or managing security architecture at that point. While there has been some improvement in these areas over the past 3 or 4 years, it is still relatively uncommon to see these things practiced in larger enterprises.

When we refer to complexity here, what we really mean is the extraordinary stack of tools, processes and knowledge required to fully understand, let alone counteract, a juggernaut of disruptive technology designed primarily to defeat standard enterprise security paradigms. There are literally dozens of areas of specialization in this field now, and only a handful of organizations on the planet have the resources to evaluate and experiment with most of them. Even the few that can are more or less forced to play catch-up as new developments emerge.

What was needed was not one organization to track and react to the full scope of Cyber Security evolution, but rather a community approach. The community approach to managing Cyber exploits began in the late 1990s; it provided an elementary model for a much more complex collaboration approach.

In response to the second question - the defense community in reality serves both government and commercial clients (through what is called the Defense Industrial Base, or DIB). Companies like Lockheed Martin, SRA, Serco and Northrop Grumman (and many other systems integrators) tend to have both the resources and the relationships necessary to make something like a Cyber Consortium happen. Also, as the Federal government moved over the past several years from an IA perspective to Cyber Commands, companies were for the first time being asked to support requirements for comprehensive management of Cyber Security issues - defense contractors were thus the first to respond to such needs.

In response to the third question - It turns out that this problem space represents an ideal case study into how super-complex problems can be dealt with across dozens of entities to help redefine both technology and practice across an entire industry. No one company or organization was capable of fully managing the problem space on its own - however, through the consortium construct many specific individual issues could be tackled within a shared solutions methodology and practice framework.

This is a prototypical Consortium framework
In part 2, I will explore the components of a Cyber Consortium (depicted above) and in part 3 I will illustrate how this approach is applied to specific aspects of Cyber Security technology and utilized to resolve security challenges.



Copyright 2012  - Technovation Talks, Semantech Inc.

What is Enterprise Transformation?


by Stephen Lahanas

At the highest level, “Enterprise Transformation” simply represents the set of initiatives necessary for an organization to adopt new technologies, better facilitate processes and enhance the core capabilities applied to achieve mission objectives. The main differentiation between enterprise-level and specific or domain-level transformations is the scope of the effort. Once an organization moves to this type of effort, it typically involves:

  • Enterprise-wide Analysis
  • Enterprise-wide Architecture Design
  • Sequenced or Concurrent Implementations
  • Evolution of Management & Governance


A conceptual visualization of the architectural implications of SOA
What tends to come out of such a transformation is an architecture better aligned to business objectives. Services Oriented Architecture, or SOA, represents a critical first step towards attaining that improved alignment, but by no means represents the full scope of potential transformation opportunities; once movement to a services paradigm is achieved, the following additional transformation activities may occur:

  • Migration to Cloud Computing business models and architectures.
  • Full integration across Data Architectures built atop a Semantic and MDM foundation.
  • Migration to a holistic enterprise Cyber Security management paradigm.

Services Oriented Architecture represents an enabling capability for all of these other objectives, mainly because it facilitates the move from stove-piped IT management to an enterprise perspective. In many organizations SOA also provides the first opportunity to enact a global governance approach that allows the organization to coordinate management across the entire enterprise. While there are some forms of enterprise transformation that don't require a move towards a service-oriented paradigm, those solutions are becoming less and less common. Even ERP solutions are now being positioned as services, although the mechanism for deploying them is more focused on large COTS software packages - and much of that too is moving towards the Cloud and being served from there.


Copyright 2012  - Technovation Talks, Semantech Inc.