
Saturday, December 29, 2012

IT Predictions for 2013

Before we continue with our discussion regarding how to develop Conceptual Models, we're going to pause for a moment and look at the year ahead. In many ways, Enterprise IT seems to move a bit slower than the overall tech sector - that's often because the mission of Enterprise Information Technology is, in most cases, the application of new technology rather than its invention. Even still, the last two years have seen some fairly remarkable developments, including:

  • Industry-wide adoption of Cloud Computing models (both internally and externally)
  • Rapid movement towards the concept of Big Data (before that concept had been fully fleshed out).
  • Very rapid movement towards adoption of both social media and mobile applications (and integration of the traditional Enterprise IT with both).
Sometimes, the future is hard to read...

Two years is a quick turnaround, and what it really represents is the turning of the ship in a new direction - as opposed to having reached the distant shores. While progress is being made in these new directions, they are in most places 'works in progress.' And of course they must be added to the backlog of works in progress already present - yesterday's hype-driven initiatives such as:
  • Cyber security
  • ITIL
  • SOA
  • EA
  • ERP (yes it's still out there and very much alive).
  • Business Intelligence
  • Semantic Technologies (the Semantic Web)
...and so on.

Today's IT department is a busy place indeed - it is coping with more new technology expectations than ever before. Yet seldom do IT budgets reflect the widening scope of those expectations. Given this background we're going to make a handful of predictions for 2013...

  • Prediction 1 - Big Data will be redefined into a more actionable approach - this approach will become more architecture driven and less focused on roll-out of databases alone.

  • Prediction 2 - Big Data and Semantic Technology will begin to merge - this will occur through management of meta-data as well as through management of architecture-driven Big Data solutions.  
  • Prediction 3 - More project teams will be consolidated - much of this will happen onshore but some of it will be near-shore and offshore. Why? Because more firms are beginning to understand that globally dispersed teams aren't that efficient - initial costs saved must be measured against an overall lack of effectiveness in understanding client needs.
  • Prediction 4 - There will be some backlash against the frenzy of IT consolidation that's been occurring for the past several years (driven mainly by Google, Oracle and IBM). This may lead to more anti-trust actions.
  • Prediction 5 - There will be increasing pressure on Congress to examine H1b policy decisions. Most of the influence on these policies has come from a handful of the largest IT-related companies - their contention that there is an IT skills shortage in the US will be called into question.
  • Prediction 6 - Mobile App Developer will become the hottest / most desired job title - at least near-term.
  • Prediction 7 - Massive Online Multi-player gaming environments will unveil new monetization strategies. Massively available online courses will do the same and move towards interchangeable content.

Of course, the most enjoyable part of our industry is how often we end up getting surprised - we're looking forward to that as well...

Copyright 2012  - Technovation Talks, Semantech Inc

Wednesday, December 26, 2012

When to Create Conceptual Data Models

Why are Conceptual Data Models Important? Not everyone does them - especially in situations where many aspects of the data architecture have already been defined by default because commercial software is involved. Also, those folks who do use them don't always build them the same way. Yet perhaps there is more to the Conceptual Data Model (CDM) than many people realize. Moreover, how does the CDM fit into the larger realm of architecture or modeling activities that are likely to take place on a typical IT project? First let's begin with an official - or at least semi-official - definition...

The primary goal of the CDM is to determine entities and their high-level relationships
Conceptual Data Model
Describes the semantics of a domain (the scope of the model). For example, it may be a model of the interest area of an organization or of an industry. This consists of entity classes, representing kinds of things of significance in the domain, and relationship assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial "language" with a scope that is limited by the scope of the model. Simply described, a conceptual model is the first step in organizing the data requirements. (Wikipedia)
Well, in this Wikipedia definition (there are several) they actually called it a 'schema' rather than a model, and we don't think that's quite accurate - so we changed it here. A schema implies that the model is being built for a specific DB technology, whereas a Conceptual Model is wholly agnostic. In fact, a CDM is a lot like a UML Domain Model, and that's a good place to talk about the big-picture nature of Conceptual Modeling:

  • UML Domain Models represent the Conceptual Classes of an application-focused solution or those classes can be used to model database structures if the modeler doesn't wish to capture ERD type relationships (although many data modelers do).
  • A CDM is an ERD construct - the highest level of relational database design. That said, it is possible to use both Domain Models and CDMs to capture Big Data structures as well (we'll talk about that in a future post).
  • Entity Relationship Diagram (ERD) is a specialized graphic that illustrates the relationships between entities in a database. ER diagrams often use symbols to represent three different types of information. Boxes are commonly used to represent entities. Diamonds are normally used to represent relationships and ovals are used to represent attributes. 
  • Conceptual Modeling in general is built into most architecture exercises and Data is always one of the dimensions captured (just not in the same way across all EA approaches).
UML Domain Models can be fairly detailed (and in fact may represent much of what will become the Class diagrams)
There is no set method for going about this. One could use a UML Domain Model and then Class Diagrams as the Conceptual and Logical design before moving to the database; alternatively, one could define a CDM, then a Domain Model and Class Diagrams, and then a more detailed Logical Data Model (and generate the database schema from the Logical Model).
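To make that last step concrete, here is a minimal sketch (in Python, with invented entity, attribute and relationship names) of the kind of transformation a modeling tool performs when it generates DDL from a model of entities and relationships:

```python
# A toy conceptual model: entities and the relationships between them.
# Entity, attribute and relationship names are invented for illustration.
entities = {
    "Customer": ["customer_id", "name"],
    "Order": ["order_id", "order_date"],
}
relationships = [
    ("Customer", "places", "Order"),  # one Customer places many Orders
]

def generate_ddl(entities, relationships):
    """Emit one CREATE TABLE per entity, adding a foreign key per relationship."""
    fks = {child: parent for parent, _verb, child in relationships}
    statements = []
    for name, attrs in entities.items():
        cols = [f"{a} VARCHAR(100)" for a in attrs]
        if name in fks:  # the child side of a relationship carries the foreign key
            parent = fks[name]
            cols.append(f"{parent.lower()}_id VARCHAR(100) REFERENCES {parent}")
        statements.append(f"CREATE TABLE {name} ({', '.join(cols)});")
    return statements

ddl = generate_ddl(entities, relationships)
```

A real tool would of course carry proper data types, keys and cardinalities forward from the Logical Model; the point is simply that the conceptual entities and relationships are what drive the generated schema.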

Another interesting consideration for the CDM is its relation to Master Data Management (MDM). Since the CDM is focused on exploring up front what the key data entities in the solution are, it becomes the first and perhaps the most important reference for determining which of those entities ought to be managed as Master Data.

Now, with this background, this is our recommendation of when to use Conceptual Models:

  1. Use a CDM when you need a better understanding of the data behind Data Flow diagrams or other business process models or analysis.
  2. Use a CDM if you are sure you will be employing any sort of MDM-related capability.
  3. Use a CDM if you intend to use database modeling tools to generate the DDL for your DB schemas from a Logical Model. 
  4. Use a CDM for sure if you're not using UML or another EA framework to capture data definitions/models.
  5. Use a CDM if you want to understand early in the process where integration may need to occur and whether you will need to establish data interfaces (and if you need to examine data in advance that you may not own).

Our next post will explore how to build CDMs.


Friday, December 7, 2012

Coming up on Technovation Talks...

This is a preview of the posts and presentations you'll be seeing on the Technovation Talks Blog over the next month:

  • An Introduction to IT Architecture
  • How to create Conceptual Data Models
  • Using Joomla as an Enterprise CMS solution
  • The Top 10 IT breakthroughs of 2012
  • Evaluating the Adobe Creative Cloud and MS Office 365 
  • Top 5 challenges in Project Architecture
  • IT Predictions for 2013
  • Principles of Natural Intelligence
  • 5 Ways to monetize massively online multi-player games
  • How to set up a Cyber Security Consortium part 2
  • Mastering Requirements Management for the Enterprise
  • Understanding Quantum Networks
  • Introducing a Data Reference Model for Cyber Security
  • How to use the UML Domain Model
  • Integrating Mobile and Web Applications 
  • Crowd-sourcing and Virtual Collaboration - Problem Solving for the next 100 years
  • The state of the Semantic Web
  • An assessment of DMBOK 2.0
  • How to Set up a Big Data Practice
  • Estimating IT Projects - the Top 5 Skills
  • Exploiting new features of HTML 5 for Web and Mobile Development
  • The Personal Learning Environment
  • Understanding Forecourt, POS and Electronic Payment Solutions
  • Using Intelligent Healthcare to improve Patient Outcomes 
  • Exploring the Windows 8 Interface


Agile Data Architecture

Traditional data architectures are not by nature Agile. It may be time for us to reconsider some of our previous assumptions regarding data integration. For decades, we've presumed that the best method for managing data was through strict conformance and control – we viewed the enterprise as static, remaining stable once properly defined. What we've discovered is that quite the opposite is true. Today we are facing more complexity in data integration than ever before; more data sources, greater volumes of data, more solution paradigms to deal with and greater expectations for Cross Domain data exchange. Moreover, data integration has become the linchpin within holistic architectures based upon services and sophisticated business process orchestration.

Agile Data Architecture, defined

Agile Data represents the ability to quickly retrieve and dynamically manage data from any number of data sources - this also implies less development up front and more evolution as the solution matures. 

Agile Data Architecture represents the technologies and patterns used to facilitate that management; it builds upon that core premise, providing a dynamic set of inter-related best practices rather than a single, standardized architectural approach. Much of the strength of this approach lies in its ability to accommodate evolving technologies and architectural best practices.
Agile Data Architecture is not a vendor-focused solution; it is a generally technology-agnostic (although not necessarily standards-agnostic) philosophy designed to address the new realities of enterprise integration. It adheres to several core Agile principles including:

  • The focus on rapid development and deployment of capability  
  • User focused requirements
  • The recognition of the evolutionary nature of most solutions 
An important concept within the larger Agile Data construct is the Dynamic Data Model. This is in fact an abstraction layer for integration that resides in Semantic formats – in other words OWL (Web Ontology Language) and RDF (Resource Description Framework). The key to making this concept work is not attempting to dynamically modify RDBMS technology utilizing previous modeling paradigms, but rather the ability to map Semantic data protocols to existing or new data structures, thus insulating them from system changes and allowing for automated updates.
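As a rough sketch of that mapping idea (plain Python tuples standing in for a real RDF store; the URIs and column names are invented for illustration):

```python
# Map relational rows to subject-predicate-object triples without touching
# the source schema. URIs, table and column names are invented examples.
rows = [
    {"patient_id": 1, "name": "A. Smith", "allergy": "penicillin"},
]

# The abstraction layer: relational columns mapped to semantic predicates.
predicate_map = {
    "name": "http://example.org/schema#hasName",
    "allergy": "http://example.org/schema#hasAllergy",
}

def rows_to_triples(rows, predicate_map, base="http://example.org/patient/"):
    """Project each row into RDF-style triples via the mapping layer."""
    triples = []
    for row in rows:
        subject = base + str(row["patient_id"])
        for column, predicate in predicate_map.items():
            triples.append((subject, predicate, row[column]))
    return triples

triples = rows_to_triples(rows, predicate_map)
```

Because the mapping layer owns the column-to-predicate translation, the underlying tables can change without disturbing consumers of the semantic view - which is the insulation the Dynamic Data Model is after.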

This construct supports much of what we're talking about in other posts when we mention:

  • Semantic Integration - This provides a solution methodology that can be applied across a range of Agile Data solutions
  • Intelligent Healthcare - This provides an industry specific application for Agile Data Architecture using Semantic Integration as a methodology.
  • Semantic Common Operating Picture - an instantiation of Agile Data Architecture for Situation Awareness using Semantic Integration as a methodology.

While it is not specifically focused on Big Data technologies, Agile Data Architecture certainly accommodates them through its approach to the management of heterogeneous data sources. We'll explore how Agile Data Architecture supports a wide variety of solutions in upcoming posts...


Wednesday, December 5, 2012

The Healthcare Portfolio

Today's post is a continuation of the discussion from yesterday's post:

"Nearly all existing EHR systems are built with an explicit domain model - a common approach in current EHR software development practice. This means that the hard-coded medical domain knowledge in the system results in higher cost when new requirements in clinical documentation routines occur." - Weber-Jahnke, J. & Price, M. (2007)

One of the major limitations of the current generation of Healthcare IT capabilities is the relative inability to integrate multiple structured and unstructured data sources. Most if not all EHR/EMR systems today are built on a single relational database structure, with minimal consideration of how to manage data structures that do not easily fit within the "record" itself. It occurred to us that the problem here is both technical and partly derived from the metaphor chosen to build these solutions around. The current metaphor, an "Electronic Record," does not properly approximate the traditional capability encompassed within the Medical Chart. A more accurate metaphor is a continuous "Healthcare Portfolio." The Portfolio is a flexible and infinitely extensible instrument. It can hold or point to "n" records from "n" different providers or systems. It can also provide an information and collaboration 'portal' interface for the patient, all associated caregivers and administrative personnel.

The Portfolio metaphor can also be extended to the Group level rather gracefully, which serves as a natural mechanism for shielding certain types of privacy data per HIPAA regulations during group-level analytics. This paradigm shift is founded upon the realization that a complex architecture can be centrally managed "logically" yet manifested across a federated environment. In other words, all elements of a patient's portfolio do not need to exist in one location; however, the central management mechanism requires access to the portfolio to be controlled, both to ensure data integrity and to allow the portfolio to retrieve information from the appropriate data stores and caches. This binding information is referred to as "meta-information" and can be considered a Semantic Layer that binds the federated environment into a logical integration.
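A minimal sketch of that arrangement (the class, store and record names are all hypothetical): the portfolio holds only pointers plus meta-information, and resolves each pointer against whichever federated store actually owns the record.

```python
# A Healthcare Portfolio as a logically central index over a federated
# environment. Store names, record ids and fields are invented examples.
stores = {
    "clinic_a": {"r100": {"type": "lab_result", "value": "normal"}},
    "hospital_b": {"r200": {"type": "radiology", "value": "clear"}},
}

class Portfolio:
    """Holds pointers (meta-information) rather than the records themselves."""

    def __init__(self, patient_id):
        self.patient_id = patient_id
        self.pointers = []  # (store_name, record_id) pairs

    def add_pointer(self, store_name, record_id):
        self.pointers.append((store_name, record_id))

    def retrieve(self):
        # Resolve each pointer against the federated store that owns it;
        # the records never have to live in one physical location.
        return [stores[s][rid] for s, rid in self.pointers]

portfolio = Portfolio("patient-1")
portfolio.add_pointer("clinic_a", "r100")
portfolio.add_pointer("hospital_b", "r200")
records = portfolio.retrieve()
```

A production system would add access control and caching at the `retrieve` step - that is exactly where the "controlled access" the paragraph above describes would live.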

The core architectural paradigm behind the next generation of automated Healthcare solutions is likely to be data fusion. The reason for this is simple – complexity. The key enabling mechanism within data fusion is reliance on metadata or meta-information. Metadata has been managed historically within individual system stacks. What that means is that the metadata was usually stored within the same system or if in more complex configurations within the system or component close to the primary user interface (and in the case of EMR solutions, the Medical Chart UI). The problem with this for Healthcare is that the community of Medical knowledge is expanding exponentially. Any attempt to centrally manage all types of medical knowledge in one system’s metadata framework is bound to fail. Part of the true strength of any such solution ought to be the ability to accommodate information not originally anticipated.

The implications of this are important: in most of today's EHRs there is no consistent method of coordinating metadata across multiple metadata sources except through custom interface development. As the scope of custom integration grows, interface management and integration become more difficult to manage. In other words, those architectures simply are not scalable.


Tuesday, December 4, 2012

Medical Charts & the EHR

Today's post is a continuation of the Intelligent Healthcare theme. We're going to talk about the current relationship between Electronic Healthcare Records (EHRs) and the practice of using medical charts.

Even with the advent of new technologies, processes are slow to change: most healthcare is still managed using medical charts (although often now supplemented by or delivered through EHR systems). Ordinary diagnostic and care-related tasks such as data correlation, symptom evaluation and patient management remain part of an intuitive process with only partial system-level support. In most cases healthcare system support has not led to widespread process transformation. The possibility for wide variations in both the types and quality of patient care is likely in this type of purely 'intuitive' environment. While a wide variety of diagnostic algorithms and treatment paths have long been available to medical practitioners, determining which ones to apply has always been problematic, and the lack of consistent standards in this regard is one of the main reasons for the relatively large number of quality-of-care issues.

The medical chart as a non-technical entity has never been fully standardized in Healthcare practice. The chart is both an information “form” for individual case incidents and is also used as the folder for iterative versions of the form across cases and related patient documents. Depending on the nature of a patient’s condition or conditions the size and complexity of the paper folder can become quite intimidating.

While there are federally mandated standards for medical records information privacy (the HIPAA Act), there are still no universally recognized standards for healthcare record data capture and exchange. The closest emerging standards are HL7 Clinical Document Architecture (CDA) and the Continuity of Care Record (CCR). And while the vast majority of EHR solutions do not follow the same set of standards, there are in fact a number of similarities between most EHR/EMRs. A typical Electronic Medical Record application, like a medical chart, tends to include the following elements:

  • Some level of patient demographic information.
  • Patient medical history, examination and progress reports of health and illnesses.
  • Medicine and allergy lists, and immunization status.
  • Radiology images (X-rays, CTs, MRIs, etc.), Laboratory test results.
  • Photographs, from endoscopy or laparoscopy or clinical photographs.
  • Medication information, including side-effects and interactions.
  • Evidence-based recommendations for specific medical conditions
  • A record of appointments and other reminders, Billing records.
  • Advance directives, living wills, and health powers of attorney
The reality is that a complex case can very easily overwhelm a traditional chart or even an EMR. The amount of information collected can in some cases rival a mid-sized database. A single patient can have multiple problems, be served by multiple specialists and can have variations of their care record which extends across multiple case instances over the years.


Monday, December 3, 2012

The Semantic COP Motivation

The number one challenge associated with the entire domain of Information Technology is data overload. This problem is not merely a matter of increasing complexity, because even as other areas within IT are becoming more integrated or manageable, the ability to effectively exploit the increasing quantity of available data is decreasing. There are a number of reasons why this is the case; those reasons include:

  • Traditional data technologies, standards and related solution architectures were simply not designed to handle this much data. Data sources were built to exist as silos and most of them have not changed in principle for about 30 years.
  • The requirement to share data across organizations or across the globe is a recent expectation, and it changes all previous expectations regarding data management, integration and discovery.
  • Unstructured data is now becoming as important as structured data. This adds a nearly infinite curve to potential data growth.

Several key capabilities are needed to help get control of data overload

For those organizations or individuals charged with performing sophisticated analytics-based tasks, the future will be challenging. While data continues to grow at a nearly exponential rate, the tools and technologies dedicated to making sense of ever-larger data sets are not keeping pace. This is not merely a military or intelligence-focused problem; it affects many commercial domains as well. The fact that this is such a common problem makes the prospect of a solution architecture designed to address this challenge more appealing and commercially viable. Such a solution could be presented both as specific commercial products and as a best-practice architectural approach (not unlike SOA or Cloud Computing). While this solution may take advantage of Big Data technologies such as Hadoop, it is not primarily a "Big Data" approach.

This challenge requires both a new architectural approach and a new way of managing IT lifecycles to better exploit the emerging technologies which comprise that architecture. Semantic Technology is the key to facilitating a new type of analytics, one that has the power to harness any type of data source or architecture. The overall solution approach that we will be proposing is what we're referring to as a Semantic Common Operating Picture. The primary solution elements which build upon the S-COP principles we presented yesterday are as follows:

  1. Development of a clear definition of the Problem Space and related technical challenges.
  2. Conducting applied or directed research into specific areas:
    • Dynamic Tagging, Master Data & Metadata Management.
    • Current Semantic Standards; focus on RDF, OWL, RIF.
    • Semantic-based Visualization technologies; especially those using OWL, RDF.
    • Semantic Search and Inference Engine technologies.
    • Semantic Lifecycle Management capabilities
    • Natural Language Processing (algorithms, strategies etc.)
    • Translation of RDBMS or DW structures to Semantic Standards & DBs.
    • Semantic-driven reporting capabilities; integration w/ visualization.
  3. Identification of multi-modal datasets for use in the prototype solution architecture.

We will explore these in more detail in upcoming posts.


Saturday, December 1, 2012

The Semantic Common Operating Picture

One of the most important achievements in analytics during the 2000s was the development of what can be described as Common Operating Picture architectures, or COPs. These solutions combine data fusion technology with advanced reporting functionality and role-based dashboards to provide an integrated view of various types of data. COPs are often used in the DoD or Homeland Security to support threat analysis or other Cyber Security missions. However, current COP architectures are lacking in several key areas, including:
  • Scalability (e.g. it's costly to make changes to the underlying integration)
  • Ability to manage Big Data
  • Accuracy across larger sets of disparate data (or domains)
For these reasons a new generation of COP solutions is needed. We're going to discuss one such idea today - the Semantic Common Operating Picture.

A standard COP Architecture
This approach represents a fairly ambitious, evolutionary step for the family of solutions already referred to as Common Operating Pictures or COPs. Sometimes these types of systems are referred to by their functional role, which in this case is most often described as Situation Awareness. Most COPs include a number of technologies within their solution architectures and thus represent both a product (or set of products) and an integration activity.

The Semantic Common Operating Picture (S-COP) approach builds upon previous COP architectures but differs from them in several critical ways:

  1. It will take advantage of new standards in data management and message manipulation that weren't yet defined or matured when standard COPs first began appearing more than a decade ago. Many of these new standards are associated with the concepts collectively referred to as the "Semantic Web."
  2. It will allow for flexibility in how data is presented with more and richer opportunities for complex visualizations.
  3. It will support collaborative activities by design (and not as an afterthought). The potential associated with Collaborative Analytics was discussed in more depth in the previous section of this proposal.
  4. It will provide more opportunities for data discovery through a more flexible approach towards management of data sources. In other words, this solution will have access to more unstructured data and will give users the ability to add meaning to that unstructured data (and to structured data as well).   
  5. It will support the emerging field (and technical standards related to) Big Data.

The S-COP concept is based on a set of seven principles, listed in the mindmap below. 

These core principles supporting the S-COP concept are as follows:

  • The solution must support multiple levels of analysis – this implies the ability to manage analytical tasks at the personal level, the group level and the organizational level. This in essence is not too different from the traditional COP architecture. However, we can also view this principle in the context of data temporality: it is important that the solution be able to differentiate chronologically sensitive data.
  • The solution must support complex collaboration – this implies that the solution must be able to facilitate not just simple messaging between participants in various domains, but also complex collaborative analytics.
  • The solution must support interactive annotation – what this means in the context of this particular solicitation is the ability for end-users or subject matter experts to add meaning to existing data elements or sources through data tagging or adding other notes.
  • The solution must support multiple visualization capabilities – in the context of this project this means the ability to display both traditional dashboard-like graphics as well as complex concept visualizations. For example, it would be beneficial to be able to illustrate relationships between data without necessarily having to define specific queries or analytic representations.
  • The solution must support intelligent discovery – this means that the solution must have the capacity to learn from previous heuristic paths and results. Intelligent discovery must be able to harness existing subject matter expertise across partner domains as well as existing knowledge bases. It must also be able to exploit patterns.
  • The solution must support pattern management capabilities – this implies the ability to discern, record and manage patterns from what has been previously discovered to more quickly assess current situation awareness. Pattern management must be able to build patterns out of both structured and unstructured data.
  • The solution must support rules management capabilities – having Rules Management allows discovery and analysis to be linked in real time to mitigation options. It also helps to automate more of the analytics processes, allowing resources to focus their attention on only those areas which really need it.
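As a toy illustration of the interactive annotation principle (all names invented), end users attach tags - meaning - to existing data elements, and discovery then filters on those tags:

```python
# Interactive annotation: end-users attach tags to existing data elements,
# and discovery then filters on those tags. All names are invented examples.
data_elements = {
    "msg-1": "unusual login activity from host X",
    "msg-2": "routine system heartbeat",
}

annotations = {}  # element id -> set of user-applied tags

def annotate(element_id, tag):
    """Record a user-supplied tag against a data element."""
    annotations.setdefault(element_id, set()).add(tag)

def discover(tag):
    """Return the data elements that analysts have tagged with the label."""
    return {eid: data_elements[eid]
            for eid, tags in annotations.items() if tag in tags}

annotate("msg-1", "suspicious")
annotate("msg-1", "needs-review")
found = discover("suspicious")
```

In a real S-COP the tags would be semantic assertions (RDF statements about the data) rather than bare strings, but the flow - annotate, then discover by meaning - is the same.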
These principles taken together represent a basic framework or requirements hierarchy which can be used both to assess possible technologies and to build a solutions architecture. We will examine the architecture of the S-COP in greater detail in future posts...


Friday, November 30, 2012

Intelligent Healthcare Scenarios

In yesterday's post, we introduced a number of core capabilities that would facilitate deployment of enterprise-wide Intelligent Healthcare solutions. In today's post, we're looking at the core problems and specific scenarios under which those capabilities might be applied.

Inter-operability is the over-arching problem set behind most Healthcare-related innovation

There are a number of what might be considered ‘problems’ associated with Healthcare-related data inter-operability. In this context, a “Problem Set” represents a meta-category of related aspects that together comprise all elements of a problem.

The initial problem sets for Healthcare Inter-operability are:

  • Document-centric message focused integration.
  • Conflicting (Healthcare-related) terminology.
  • Conflicting standards and resulting implementations.
  • Exclusion of free text data in traditional EHR/EMR systems.
  • Lack of specific support for Healthcare practices / processes.
  • Security & Data Integrity of exchange transactions.

  • Cross-Provider Diagnostics - Care scenarios involving more than one diagnostic participant.
  • Cross-Provider Treatment - Care scenarios involving more than one treatment provider.
  • Multi-Provider Care Management - Situations where care is spread across many different offices/organizations, some perhaps not affiliated.
  • Multi-National Incident Response - For epidemic, pandemic or bio-terrorism events.
  • Group Characteristic(s) Identification - Requires access to wide-ranging data sources.
  • Group Pattern(s) Identification - Requires analytical capabilities and access to a wide range of data sources.
  • Patient Pattern(s) Identification - Even at the patient level, data may be required from multiple caregivers and systems.
  • Lessons-Learned Capture - At the organizational, personal or regional level.
  • Lessons-Learned Dissemination - No real mechanism exists for this besides medical journals, which are proprietary, costly and restricted.
  • Intra-Organizational Integration - Enterprise-level integration within the caregiver organization today is costly and complicated.
  • Organizational Healthcare Reporting - Reporting requires access to all pertinent data sources.
  • Regional Healthcare Reporting - Regional reporting requires access to all pertinent data sources.
  • Cross-Organizational Research - Currently difficult due to access issues.
  • Communities of Practice Knowledge Sharing - Today this occurs (indirectly) through continuing education and conferences; there is no real-time feedback mechanism.
  • Trend Visualization & Statistical Analysis - Usually requires manual support to accomplish.

Healthcare Scenarios which may apply core IH capabilities


Thursday, November 29, 2012

Intelligent Healthcare Capabilities

We have introduced the concept of Intelligent Healthcare (IH) across several previous posts.

Over the next several months we're going to cover a number of related IH topics, and we thought it would be worthwhile to capture some of what we consider to be the most important IH-related enterprise-level solution capabilities.

Next Generation Healthcare IT innovation begins with vision...
There are two perspectives on Intelligent Healthcare (IH) capabilities. The first is based upon how IH can be exploited as a methodology to orchestrate the design and implementation of Healthcare IT solutions; the second is based upon how the solutions developed using the IH methodology actually impact the practice of Healthcare. Both perspectives are vital to understanding and supporting the eventual realization of the benefits implicit in Intelligent Healthcare.

  • Patient-Focused Performance Engineering – Attributes that can be identified and included within any Healthcare solution design paradigm.
  • Agile Lifecycle Federation – The ability to define solution best practices and make those practices available to communities, either as recommendations or as solution compliance guidelines.
  • Solution Performance Metrics – Definition of systems performance expectations, with strategies for mitigating issues if and when they are realized (SLAs with engineering support).
  • Scenario-based Design Patterns – Definition of integration patterns at the data level, services level, infrastructure level and combined.
  • Dynamic Collaboration – Web 2.0 morphing into 3.0 – using Social Publishing capabilities to replace other traditional software support and tie together communities at a more detailed level. This covers project management, requirements management, COI management, etc.
  • Community Process Management – The ability to collaboratively define, publish and share process-based knowledge and functionality.
  • Clinical Decision Support (CDS) Reference EA – Definition of follow-on architectures that can benefit from the Connect core; these reference architectures can help to further harmonize interoperability initiatives.
  • Semantic Reference Architecture (plus management of Healthcare Ontologies) – Illustrating prototypical architectures for extending data interoperability using emerging technologies such as RDF databases.
  • Federated Enterprise Architecture – The ability to reconcile EAs across domains using a Wiki-based knowledge framework rather than proprietary EA tools.
  • Innovation Management – Development of innovation techniques, tools and processes designed to support integration of new technologies into existing initiatives. This capability gives IH the proactive perspective needed to anticipate issues years ahead of time.
  • Trend Identification – The ability to engineer trend management capabilities from both individual (local) and group (global) perspectives.
  • Automated Data Collection – Self-configuring interface negotiation and message transport.

All of these capabilities are designed to facilitate a new paradigm wherein medical information becomes dynamic and problem-solving becomes collaborative.
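The Semantic Reference Architecture capability above mentions RDF databases as one route to extended data interoperability. As a minimal, purely illustrative sketch of the underlying idea, the snippet below models Healthcare vocabulary as subject-predicate-object triples and queries them by pattern; the terms used are invented for the example and are not drawn from any real Healthcare ontology or RDF product.

```python
# Minimal in-memory triple store illustrating the RDF idea behind a
# semantic reference architecture: knowledge stored as
# (subject, predicate, object) triples, queried by pattern matching.
# All vocabulary terms here are hypothetical examples.

def match(triples, s=None, p=None, o=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

triples = [
    ("MyocardialInfarction", "isA", "CardiacCondition"),
    ("CardiacCondition", "isA", "Condition"),
    ("Troponin", "indicates", "MyocardialInfarction"),
]

# Which findings indicate a myocardial infarction?
indicators = [s for s, _, _ in
              match(triples, p="indicates", o="MyocardialInfarction")]
print(indicators)  # ['Troponin']
```

A production system would of course use a real RDF store and a standard vocabulary, but the pattern-matching query above is the essential mechanism that makes cross-system terminology reconciliation possible.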


Wednesday, November 28, 2012

Exploring Governance Process - part 2

Part two of the Exploring Governance post focuses on prototypical process steps. As noted in the previous post, this example relates to an enterprise-wide SOA governance initiative. A governance process can be applied to any number of areas or can be made available universally to all IT functions (e.g. Data, SOA, Security, SDLC/ALM). Each organization needs to determine for itself how lean or robust the process (and its steps) should be; those decisions will determine which of the steps (or variations thereof) listed below might be appropriate.

  1. Submission – submission can involve "ideation" but is usually handled as requirements management. The goal is to make this step as flexible and open-ended as possible, so that the process and the SARB can serve as a funnel for all the information needed to build an enterprise roadmap for services. Given that this process will be occurring before any automation is in place, and at the beginning of a SOA adoption initiative, it is important to provide paths for different types of information to come in: there will be legacy capabilities without specific designs or formal requirements, functional requirements without designs, functional requirements with designs, and designs without functional requirements. All of this can be viewed as idea submission. Depending upon whether designs or requirements were included, the governance process would have to coordinate with related processes to ensure that the service submission meets basic standards.

  2. Requirement Acknowledgement – this is a relatively straightforward step: the SARB simply provides formal acknowledgement that an idea or requirement has entered the services governance process.

  3. Capability Allocation – capability allocation, or classification, is somewhat more complicated. It involves mapping requirements to a specific set of capability definitions from the retail enterprise perspective. As noted previously, this requires a preliminary capability inventory as a prerequisite. The inventory will take the form of a taxonomy or ontology that can later be applied to a variety of process automation tools, including ALM tools, project portfolio management tools and configuration management software (including SOA CM).

  4. 1st Level Analysis / Review – the first-level analysis and review step involves coordination between review board members and designated technical staff to ensure that the idea or requirement submission meets basic technical standards. The deliverable from this step is a checklist indicating whether the initial criteria have been met; based upon that, a determination is made as to whether the requirement satisfies the standards necessary for first-level approval. It is important to note that the governance process is not the design process, nor is it a replacement for the application lifecycle management process. Governance is meant to provide technical and business oversight and to ensure that enterprise standardization is occurring.

  5. Requirement Logging – requirement logging is a fairly straightforward step in that it merely represents annotations made by the CM point of contact within the SOA COE wiki.

  6. Initial Approval – initial approval merely signifies that the requirement has been properly submitted and formatted and meets the basic criteria for moving forward in the process.

  7. 2nd Level Analysis – second-level analysis represents the most complex part of the overall process. It requires coordination through a small group or integrated product team including board members, technical staff and business stakeholders. This is not meant to be a complex engineering endeavor, but rather an examination of whether the well-defined service requirements provided by the previous steps conflict or overlap with other requirements or existing services. In this sense we are dealing with enterprise- or systems-level reconciliation, as opposed to validation of the service logic in its own context. The analysis involves technical review of service contracts, WSDLs and other service logic at the functional level. It also involves comparison of business logic, rules and functional expectations.

  8. Recommendation & Architecture Update – the result of the second-level analysis is a consensus recommendation that is then documented by the SARB and incorporated into the SOA architecture roadmap. It is likely that the architecture board would maintain some sort of “to be” reference architecture that includes all of the recommendations approved up to this stage.

  9. Handoff to ALM – any artifacts that reach the SARB recommendation level would then be provided as roadmap updates to the ALM process. No actual work would occur on these until management approval was obtained.

  10. Deployment Preparation – the CM point of contact would then prepare the service information for potential deployment. This would include any standardization of, or modification to, WSDLs or service contracts.

  11. Final Approval – this step ensures that management retains control of the service release process and/or any follow-on work that may need to occur.
The PMO must govern a wide spectrum of IT processes and capabilities
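The governance steps above describe a linear workflow with explicit approval gates. As a minimal sketch of how such a flow might be tracked programmatically, the snippet below encodes the stage names from the list and enforces one-gate-at-a-time progression; the class and service names are illustrative, not part of any actual SARB tooling.

```python
# Illustrative tracker for the governance workflow described above.
# The ordered stages mirror the process steps; the class simply
# enforces that a submission advances through one gate at a time.

STAGES = [
    "Submission",
    "Requirement Acknowledgement",
    "Capability Allocation",
    "1st Level Analysis / Review",
    "Requirement Logging",
    "Initial Approval",
    "2nd Level Analysis",
    "Recommendation & Architecture Update",
    "Handoff to ALM",
    "Deployment Preparation",
    "Final Approval",
]

class ServiceSubmission:
    def __init__(self, name):
        self.name = name
        self.stage = 0  # index into STAGES

    @property
    def current_stage(self):
        return STAGES[self.stage]

    def advance(self):
        """Move to the next gate; fails once Final Approval is reached."""
        if self.stage >= len(STAGES) - 1:
            raise ValueError(f"{self.name} already has Final Approval")
        self.stage += 1
        return self.current_stage

req = ServiceSubmission("Customer Lookup Service")
req.advance()
print(req.current_stage)  # Requirement Acknowledgement
```

In practice each transition would carry the artifacts described in the text (checklists, roadmap updates, CM annotations), but even a simple state model like this makes the approval gates auditable.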

Process Implementation Considerations
As noted previously, not all of the roles defined in this process will necessarily be used; for example, the CM role may be filled by a member of the SARB. This process provides a general guideline and a template for future governance processes, but in order to test its efficacy within the enterprise environment we would need to demonstrate it using actual requirements and working with the likely participants among the stakeholders. In order to engage in that sort of dry run, the prerequisites would also need to be in place.
