
Saturday, November 10, 2012

What is Aeronautical Information Management?

Very few of us who fly from one destination to another have any idea about the information systems that support civil aviation. As one might imagine, the role of technology in helping to manage and regulate the exploitation of our airspace has been growing steadily for decades. Yet at the heart of every nation’s civil aviation system is something called AIM, Aeronautical Information Management. AIM is the data layer that underpins all other aviation activities, and as of today much of it is still manual and unintegrated. Because that same data layer is the foundation for every other modernization initiative, getting it right has a direct impact on the airline industry as well as on passenger safety.

Core AIM Concept


Data as a Product versus Data as a Service – that is how the global AIM community often frames the Aeronautical Information Management transformation challenge. Oddly enough, though, AIM is the successor to something called AIS – Aeronautical Information Services. The apparent discrepancy in these evolutionary labels stems from a change in how the systems are viewed. The term AIS was derived from ICAO requirements dating back several decades; at that time, the provision of manuals extracted from stove-piped data systems was considered a service.

Now, of course, we see the term ‘Service’ more closely aligned with specific architectural constructs, i.e. Service Oriented Architecture (SOA). The AIS systems and framework were in fact designed only to support the regular release and distribution of civil aviation knowledge products, which at first were manifested only as printed manuals and later by electronic document distribution. The primary publication is referred to as the Aeronautical Information Publication (AIP) and is released every 56 days.

Aeronautical Information Management is a global initiative and at its core recognizes that in order to move forward to a true ‘Services’ paradigm, civil aviation must consolidate a fractured, stove-piped data layer that had been developed to support paper products and transform it into a single, logical architecture framework.


AIM is part of a much larger global modernization effort for Civil Aviation

AIM Definitions

•    The NAS – The National Airspace System; the conglomeration of FAA systems that manage U.S. airspace.
•    ATM – “Air Traffic Management” is roughly equivalent to the FAA’s NAS.
•    NextGen – The collective name for the set of modernization initiatives which will be rolled out by the FAA over the next decade.
•    AIXM – The Aeronautical Information Exchange Model. A canonical XML-based data model meant to characterize the entire problem space of civil aviation and provide data exchange support between civil aviation systems through and across AIM infrastructures (see the sketch after this list).
•    SESAR – ‘Single European Sky ATM Research,’ the European equivalent of the FAA’s NextGen effort.
•    CDM – Collaborative Decision Making (currently mostly manual).
•    SWIM – System Wide Information Management (essentially the support infrastructure for NextGen / SESAR services).
•    NOTAMs – ‘Notices to Airmen’; near-real-time environmental or situational updates regarding particular portions of airspace.
•    ADS-B – Automatic Dependent Surveillance-Broadcast.
•    OATA – The Eurocontrol Overall Target Architecture activity.
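
To make the AIXM entry above concrete, here is a minimal Python sketch of what assembling an AIXM-style feature might look like. The element names (AirportHeliport, timeSlice, designator) echo real AIXM 5.x concepts, but the structure shown is heavily simplified for illustration only – the real schemas are GML-based and far richer.

import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified element set; real AIXM 5.x features carry
# far more structure (valid-time slices, geometry, metadata).
AIXM_NS = "http://www.aixm.aero/schema/5.1"
ET.register_namespace("aixm", AIXM_NS)

def make_airport_feature(designator: str, name: str) -> ET.Element:
    """Assemble a minimal AIXM-style AirportHeliport feature."""
    feature = ET.Element(f"{{{AIXM_NS}}}AirportHeliport")
    time_slice = ET.SubElement(feature, f"{{{AIXM_NS}}}timeSlice")
    ET.SubElement(time_slice, f"{{{AIXM_NS}}}designator").text = designator
    ET.SubElement(time_slice, f"{{{AIXM_NS}}}name").text = name
    return feature

# Exchange-ready XML that two AIM systems could, in principle, pass between them.
print(ET.tostring(make_airport_feature("KJFK", "JOHN F KENNEDY INTL"), encoding="unicode"))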




Copyright 2012  - Technovation Talks, Semantech Inc.

Thursday, November 8, 2012

Why E-Learning 1.0 Missed the Mark

I recall distinctly in 1999 when John Chambers (Cisco’s CEO) proclaimed that E-Learning was the Internet’s killer app and the next big thing. I was working at Cisco at the time as part of their E-Learning Architecture Team. I believed that message and was utterly convinced that nothing would stop us from proving it. I, however, was wrong. Something did stop us, and that something was the industry that sprang up and came to be known as E-Learning 1.0. Much of that industry is still in place and is only slowly beginning to realize that the vision one adopts makes quite a difference in the resulting outcome.

The vision that is finally being accepted – the vision we need to embrace – is simple: E-Learning should be about a convergence of technology, philosophy and practice designed to liberate learners in the same manner that the Internet itself liberates users within a community of unlimited global discovery. E-Learning is about trusting that people can in fact think for themselves and design their own learning strategies and learning paths.


Learning is about Content - The current revolution in open courseware is making content accessible and inexpensive

Most importantly though, the true revelation behind a new vision for E-Learning is this: learning does not end when school ends (either at the end of the day or when you receive a degree) – in fact, in many ways that’s when the real-world learning begins. Moreover, unlike at school, knowledge learned outside of formal pedagogy is relevant to your daily tasks; thus knowledge gained within your daily work paradigm can and should be integrated into the larger learning experience. Learning is the proactive mechanism by which all individuals and organizations add value to or receive value from knowledge – thus it should be at the heart of every home desktop and every enterprise IT solution, not as an afterthought but as a core driving process.

This represents a massive paradigm shift – one that will change the nature not only of the E-Learning industry but also of IT itself and the organizational cultures of any enterprise that adopts this paradigm. That is the vision that was missed – but we are fortunate; we can still achieve it. To complete the paradigm shift, though, we need to understand why E-Learning 1.0 didn't work.

There is no doubt that the E-Learning industry as it defined itself for most of the past decade came nowhere close to meeting its original expectations. It is hard to determine exactly how much of the market E-Learning actually possesses, because the industry never properly defined its parameters, and what constitutes E-Learning is now finally evolving. Quite a lot of capability that should have been viewed as part of E-Learning was not (and may still not be) considered as such. To truly move the industry from 1.0 to 2.0 status we need to reassess our core assumptions about it.
  • E-learning must be defined by practice and use rather than by vendor categories. In other words, E-learning is not a product or even product-oriented; it is a service. In fact, we could easily refer to it as 'Learning as a Service' to capture what we're working towards.
  • E-learning must be primarily concerned with content. Delivery of content is and always was a secondary consideration - the platform is and must be the Internet. In fact, the entire notion of E-learning is to a large extent predicated upon the protocols and infrastructure inherent within the Internet (de facto delivery). Building secondary and tertiary delivery environments with divergent and idiosyncratic standards took away the primary advantage originally associated with the promise and potential of E-learning - universally accessible, inexpensive content.
  • E-learning must reassess the entire nature of pedagogy, only then will the solutions fully match the technologies and only then will the appropriate business models emerge. E-learning 1.0 never developed its own philosophy, opting rather to graft traditional approaches to instructional design and assessment based outcomes onto the emerging technologies. This led to a fundamental discontinuity between the medium and message. Web-based content and the emergence of social publishing have been rapidly pushing towards an open content model, one built on democratized production and review rather than top-down or bureaucratic micro-management. The assumption that experts know better than we do how we will learn best is anachronistic, and worst of all, expensive.
Traditional educational providers have demonstrated an unreasoning fear of E-Learning's potential for nearly a decade, a fear derived from the unwarranted assumption that if that potential were realized, their role would be marginalized. Nothing could be further from the truth – traditional education is under fire from every direction, with budgets being cut and options being limited, and the one tool that could help change that and actually expand opportunities has yet to be fully exploited. E-Learning can and should lead the way to that new reality, both for learners and for educational providers.


Copyright 2012  - Technovation Talks, Semantech Inc.

Wednesday, November 7, 2012

Sequestration & Innovation

Over the past two years, there has been a Herculean struggle in Washington wherein those calling for deficit and debt reduction have run headlong into those who wish to ensure that the recovery has time to take hold. That struggle is about to play out once more in December, and the battle lines are drawn along partisan boundaries. The stakes are very high.

Let's look for a moment at what Sequestration is:
Under sequestration, an amount of money equal to the difference between the cap set in the Budget Resolution and the amount actually appropriated is "sequestered" by the Treasury and not handed over to the agencies to which it was originally appropriated by Congress. In theory, every agency has the same percentage of its appropriation withheld in order to take back the excessive spending on an "across the board" basis. 
The size of the automatic cut, if it occurs, will likely be at least $1.2 trillion over a ten-year period. Defense spending for 2013 would likely be cut by about $110 billion. Conservative estimates hold that every billion dollars spent on defense equates to about 12,000 jobs, so the defense cuts alone would lead to roughly 1.3 million jobs lost in January, sending the unemployment rate back over 10%. The full impact could be as high as two million jobs lost – all this while the economy is still struggling and was supposedly the number one concern of most voters in this week's election.

Sequestration will cost jobs, and will greatly impact IT and R&D in the US
It's not just the number of job losses that is troubling, though; it's what types of jobs will be lost. Many of the cuts will fall directly on IT and Research and Development budgets. These programs and jobs represent the fundamental backbone of America's innovation infrastructure. Just last year we barely managed to save the SBIR (Small Business Innovation Research) program, but Sequestration would likely kill it instantly, along with any number of other activities that help give this nation its competitive edge. Our innovation infrastructure is a job creation and business creation machine – one that has put us in the lead and kept us there for decades.

Everyone agrees that the deficit must be paid down, and it will be – but only if the economy stays strong, because if it sinks again no amount of budget cutting will make up for the revenue shortfall of a depressed economy, and we will find ourselves stuck in a multi-dip recession. The debt we accumulated after WWII was, relative to the size of the economy, higher than our current burden and we paid it off – but we did so after the war was won. If we sacrifice this nation's innovation infrastructure to pay off a small part of a larger bill, the ultimate cost will be far higher than we can bear.


Copyright 2012  - Technovation Talks, Semantech Inc.

Tuesday, November 6, 2012

Understanding Master Data Management



I was speaking with someone recently on the topic of MDM, and it occurred to me not long after we began the conversation that we more than likely had differing perspectives as to what Master Data Management actually means. That inspired me to write this post and talk a little about how to better understand MDM.

MDM, The Core Concept:
Let's start with what Master Data is not. Master Data is not:
  1. Metadata, which is a description of data (or 'data about data,' as it's commonly called).
  2. An ontology, taxonomy or vocabulary – Master Data can be derived from these but is not in itself a formal semantic construct.
  3. A software tool – ultimately, Master Data is technology-agnostic; it is a logical construct which can be defined through various modeling tools and realized through a variety of data management software solutions. At the point where Master Data becomes tightly coupled with any one software tool or any one modeling technique, it will likely lose a great deal of its potential value to the enterprise.
So then, what is it? And how would we characterize what can or cannot become Master Data?
  1. It may be considered "data of record" or an authoritative data source, though it need not be. Data of record implies a system of record with sanctioned data elements that are not meant to be repeated across other systems throughout the enterprise. Alternatively, this might refer to data entities determined to be unique and authoritative across the enterprise regardless of their current use (in a system).
  2. Master Data is reference data, sort of – if we take reference data to be a definitive set of element definitions or entities associated with a particular business domain, sub-domain or problem space. In this capacity, Master Data may serve multiple roles, including discovery, registry or repository access, and serving as the foundation of a data dictionary.
  3. It is a benchmark – this is a critical consideration; data entities defined as Master Data elements within an enterprise are unlikely to remain unmodified. Eventually there will be variations of Master Data sets; these variations must be traced back to their source, and there must be a mechanism whereby others in the enterprise can understand where, why and how those modifications occurred 'atop' the core sets of Master Data. Master Data is thus a baseline or benchmark from which the data chain of custody can be managed and tracked.
  4. It can be a canonical data model or data exchange model - this is important in cases where the core data architecture has not yet been designed or deployed, or in cases where it is anticipated that there will be a major or radical transformation of the existing architecture to a new one. The model can contain Master Data elements or sets within it.
MDM in Today's Implementations
Much of what we now refer to as MDM solutions was born out of earlier products described as metadata management solutions. For many of the MDM solutions on the market, the "repository or registry" architectural pattern is how this capability is harnessed. Another architectural approach to MDM might be called the middleware design – this extends MDM into data transport and focuses on supporting accurate message translation. And of course there are solutions that combine both aspects.

One of the most important aspects of MDM is identifying what constitutes Master and Reference Data

Perhaps we can consider that there are two basic philosophical approaches to MDM, along with a hybrid of the two:
  • Passive MDM - This is most closely aligned to the original Meta-data management solutions with a central repository to support discovery and high level data reconciliation.
  • Active MDM - This is most closely aligned with solutions stemming from EAI, Middleware, ETL based solutions where data reconciliation rules are being applied at multiple levels and in more detail.
  • Hybrid MDM - Both of the previous approaches are relatively weak at bi-directional reconciliation involving transactional systems (it is much easier to reconcile historical data from multiple sources than real-time data from multiple sources). Hybrid MDM applies both earlier techniques plus new ones to tackle these most problematic use cases (a minimal sketch of the passive and active roles follows this list).
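As a rough illustration of the passive/active distinction (a toy sketch with hypothetical names, not any vendor's API): a central registry supports discovery of golden records (the passive role), while incoming source-system updates are reconciled against the benchmark and their variants tracked back to the master (the active role).

from dataclasses import dataclass, field

@dataclass
class MasterRecord:
    """A benchmark 'golden' record plus the source variants traced back to it."""
    key: str
    attributes: dict
    variants: dict = field(default_factory=dict)  # source system -> divergent fields

class MasterDataRegistry:
    """Passive role: a central repository supporting discovery of master records."""
    def __init__(self):
        self._records: dict = {}

    def register(self, key: str, attributes: dict) -> None:
        self._records[key] = MasterRecord(key, dict(attributes))

    def reconcile(self, key: str, source: str, attributes: dict) -> dict:
        """Active role: apply a (trivial) reconciliation rule to incoming source
        data, recording where and how it diverges from the benchmark."""
        record = self._records[key]
        diffs = {k: v for k, v in attributes.items() if record.attributes.get(k) != v}
        if diffs:
            record.variants[source] = diffs
        return diffs

registry = MasterDataRegistry()
registry.register("CUST-001", {"name": "Acme Corp", "country": "US"})
print(registry.reconcile("CUST-001", "crm", {"name": "ACME Corporation", "country": "US"}))
# -> {'name': 'ACME Corporation'}: the variant is tracked back to its source.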
It is clear to anyone who has worked with database development and data-system integration that the ability to reconcile data sources adds tremendous value to the enterprise, helping to improve performance, integrity and overall efficiency. Being able to automate some of this with COTS tools is even more appealing; however, there is still a set of processes that must take precedence if one is to deploy a successful MDM solution. The enterprise data governance approach must be defined first, the data and business environment must be modeled, and if ownership of data sets is to be handed off to user groups (either fully or partially), the impact on both governance and model maintenance must be considered and mitigated in advance.

As we've discovered with nearly every IT technology and product over the past 40 years, implementation without process or architectural considerations leads to many issues – often more issues than existed before the technology was introduced. MDM is no exception. The most important thing to keep in mind is that deployment of MDM software can significantly influence both solution performance and integrity, but proceeding without working through the implicit architectural and enterprise issues is risky.




Copyright 2012  - Technovation Talks, Semantech Inc.

Monday, November 5, 2012

Artificial versus Natural Intelligence



Too often we dismiss the Semantics of a problem space as a superficial element of the larger whole. That is often not the case at all, especially where the Semantics of describing the problem directly interfere with the ability to achieve the expectations attached to that problem space. A case in point is the field of "Artificial Intelligence," or AI, within IT. AI was first described as a discipline in the mid-1950s and has been a part of our popular culture ever since Arthur C. Clarke wrote 2001: A Space Odyssey in 1968. While the question of intelligence in computers or robots had been dealt with previously in popular science fiction, 2001 gave us an entirely new perspective on AI – one that seemed entirely plausible and not too far off from being realized (hopefully minus the part where HAL has a nervous breakdown). HAL, the Heuristically programmed ALgorithmic computer, was an intelligence that had not been programmed but rather had learned in somewhat the same way humans do; this construct is closely tied to certain aspects of the computer science discipline of AI and is referred to (somewhat inaccurately) as Machine Learning. Yet nearly 60 years after AI was envisioned, many feel the field has yet to reach any of its original objectives.

Inside HAL - 2001: A Space Odyssey
This is not to say that phenomenal advances in IT have failed to materialize; we have experienced remarkable gains in a variety of areas. However, the nature of those advances seems somewhat different from what we have been expecting in relation to the question, "what is intelligence?" Or perhaps the question more precisely needs to be: what is intelligence in relation to viable Information Technology solutions? Today in IT, we tend to use the term intelligence sparingly; we have one major field associated with analytics called Business Intelligence, but we haven't been willing to go much beyond that in claiming that what we provide exhibits anything related to the concept of intelligence.

So let's go back and ask a fundamental question: what was it that we meant by Artificial Intelligence in the first place? Here's the Wikipedia definition:

"Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents"[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines."

Let's contrast this with the definition for the term Intelligence by itself:

"A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on," "making sense" of things, or "figuring out" what to do."

Looking at these definitions, a couple of things become apparent immediately:

  1. That computer science, through AI, has moved towards a fairly narrow definition of Intelligence.
  2. Computer Science has not to date managed to reproduce the vast majority of attributes that we tend to associate with (human or generic) Intelligence.
  3. The two terms being defined, which upon first hearing them may seem fairly similar, are not in fact closely related at all.

That last point is critical because it has everything to do with setting and achieving expectations. While the expectations within the IT domain for AI have grown somewhat narrower than those the field began with, the expectations among the general public for what Artificial Intelligence will encompass have remained the same or even expanded. It can be summarized thus: most people, when they hear the term AI, expect that it will result in some sort of thinking machine – and thinking cannot be defined or conceived of by most people in terms different from those applied to characterizing human thought.

Our focus over the past several decades has been directed more towards brute-force computing – the ability to jam ever more computations into smaller machines at ever-expanding speed. While this is no doubt vital to the overall goal of achieving true intelligence, it has not led to any serious progress toward what might be called computational cognition. Cognition is the set of functions that take the lower-level processing and apply real meaning to that "sensory input" without the aid of outside interpreters (e.g., human beings).

Watson won big on Jeopardy
So this brings us to our premise; are we in fact trying to do two separate things in the context of one field of practice and research when we should be splitting it in two? If we consider that there may be at least two separate types of intelligence that can be achieved in computer science, one Artificial in nature and one Natural, might we not be able to realign efforts and expectations accordingly?

This would need to start with some definitions, so we'll return to the Semantics. What if we retain the narrow view currently associated with AI (which is focused on statistical theory, agent technology, etc.) as it exists now and consider that Artificial Intelligence is any technology or resulting intelligence that aids, facilitates or otherwise empowers human cognition? In other words, Artificial Intelligence becomes a super-charged version of Decision Support. Then we could pursue a separate (albeit potentially related) field of endeavor referred to as Natural Intelligence. We call it Natural because what we're after is something that functions or behaves in a more human way – something specifically designed to mimic human cognition.

Last year we witnessed Watson defeat several Jeopardy champions; even with that impressive result, however, it is not likely that we could characterize how Watson operates as Cognition. Yet what IBM is trying to do now with Watson is much more akin to Natural Intelligence than to traditional AI – even if some of the same technology is being applied. The difference is one of goals and objectives – the expectations. IBM's expectation was that Watson had to parallel human cognition in a typically human task (closer to real time, as opposed to earlier efforts with Chess), and it represents perhaps the first real step toward achieving Natural Intelligence and realizing the larger set of AI goals first described in the 1950s. The larger challenge now is to stop confusing these fields with one another and to direct the necessary efforts towards improving the outcomes associated with each. We will discuss how that might be accomplished in future posts...


Copyright 2012  - Technovation Talks, Semantech Inc.

Demystifying Enterprise Architecture - Part 3

Part 3 – Utilizing EA as the Enterprise Interoperability Framework
We have thus far explored how an EA can be used to provide and identify a unification framework for the enterprise and how to use it to align all aspects of design; perhaps its most powerful application, though, is as a facilitation mechanism for interoperability. A certain amount of that facilitation is de facto present once the first stages are undertaken. The ability to place all of the architecture information within a context in one’s own enterprise is the first step towards extending capability across multiple enterprises. To continue our earlier analogy of enterprises as organisms: organisms must coexist to some extent within larger ecosystems. The interrelationships between entities determine the level of cooperation or collaboration that can take place, and those interrelationships are built upon shared ‘cultural’ or cross-cultural understanding.

The entire notion of ‘Services’ within a Service Oriented Architecture is more or less based upon this premise – capability structured through relationships on a ‘need to use’ basis rather than through an assumption of need arranged by hierarchy. It is a philosophical battle: deterministic logic pitted against the laissez-faire availability of flexible pieces of an undefined puzzle. The philosophical view of SOA takes us one step closer to the expression of architecture as DNA. Within SOA, the term ‘loosely coupled’ points to another important consideration for architecture as an interoperability mechanism – the fact that architecture elements or components are more effective when left flexible, and that this flexibility is engendered through abstraction. In the past, using deterministic logic, systems literally hard-coded the relationships between data and application logic, binding the two together in a rather unfortunate way. The interaction and impacts of large, highly controlled interrelationships made maintenance difficult and eventually impossible across data and application layers. This led to multi-million-line chunks of code and highly tangled data structures with little effective way to separate the logic back out.
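
To ground the loose-coupling point, here is a minimal Python sketch (illustrative only, with hypothetical names): application logic is written against an abstract contract rather than a hard-coded storage implementation, so the two layers can evolve and be maintained separately.

from abc import ABC, abstractmethod

class CustomerStore(ABC):
    """The abstraction boundary: application logic depends on this contract,
    not on any concrete schema or storage technology."""
    @abstractmethod
    def lookup(self, customer_id: str) -> dict: ...

class SqlCustomerStore(CustomerStore):
    """One interchangeable implementation; swapping it out never touches callers."""
    def lookup(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "sql"}  # stand-in for a real query

def greeting(store: CustomerStore, customer_id: str) -> str:
    # Application logic coded against the abstraction: loosely coupled.
    return f"Hello, customer {store.lookup(customer_id)['id']}"

print(greeting(SqlCustomerStore(), "C-42"))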




An example of using EA as an Enterprise Interoperability Framework

Our focus on SOA doesn’t mean that you have to adopt SOA per se in order to facilitate interoperability; in many ways SOA as a term or discipline is highly misleading, as it means different things to different people. The principles we’re describing here are not attached to any point-in-time hype, though; when the term SOA goes out of fashion, the following principles will remain.
  • Interoperability is multi-directional, many-to-many in nature. Point to point or one to many paradigms have limited value.
  • Interoperability depends upon abstraction, the levels or types of abstraction are basically unlimited – anything can be abstracted from anything else.
  • Interoperability can be visualized and mapped (but this eventually must become reactionary rather than visionary as the level of complexity increases).
  • Non-deterministic interoperability will result in lower complexity and reduced management overhead.
  • Interoperability is relativistic – the universe of possible interaction is simply too much to imagine; it represents variability at the quantum level, so why fight it? There will be some basic rules governing environmental behavior, but these rules will be designed to allow for flexibility.  

Conclusion
What we’ve described above represents a somewhat radical departure from how enterprise architecture is currently viewed and practiced. Getting from where we are now to that point will require time, thought and experimentation. However, the philosophical premises postulated here have the potential, if implemented together, to drastically transform the entire practice of IT. If anything is certain it is this: our technological progress is outstripping our ability to manage information. This is happening because we are still tied to deterministic philosophies founded when resources, memory and scope were limited to a tiny sample of the potential virtual universe lying before us. Memory, scope and scale are now opening across a wide aperture – we’re viewing the infinite just beyond the horizon and lack the vocabulary to describe it.




Copyright 2012  - Technovation Talks, Semantech Inc.



Friday, November 2, 2012

Understanding Data Warehouse Challenges

Last week we talked about some of the issues arising from the emerging field of Big Data; today we're going to explore a more mature data solution and point out some of the challenges it has faced – challenges that were instrumental in pushing the IT industry towards Big Data.

The Data Warehouse concept is built atop the notion that all data related to the enterprise can be captured and centrally or holistically managed. This is a powerful idea, yet there is more than one way to achieve the goal. The traditional view of the Enterprise Data Warehouse (EDW) attacked the problem from a very DBMS-centric perspective, which is primarily why EDW projects became so expensive, difficult and ultimately hard to adopt. The typical EDW approach attempted to gather all of the data related to the enterprise and place it into one massive repository structure. Whether this was attempted in chunks or as a “Big Bang” assault made little difference in the long run, as the byproducts of the practice were the same; those byproducts included:
  • A more bureaucratic management approach to the data layer in general.
  • An added degree of separation between the data owners and the data developers.
  • A certain level of inflexibility in regards to how data was updated, corrected or otherwise transformed.
  • An added degree of separation between database developers and data exploitation developers.
  • An added degree of separation between database developers and application developers.
  • An inability to quickly respond to major changes in the business.
  • Dependence upon a sub-set of industry experts and equipment that is more expensive than the industry norm.
  • A higher cost associated with scalability in general.
It is worth examining some of the core concepts associated with Data Warehousing a little bit closer to help understand why these outcomes tend to result from the traditional EDW approach.

DBMS Focus – At the time when EDWs became popular, other areas of data architecture were only just beginning to blossom. Today’s Business Intelligence platforms represent much more than mere reporting engines, but metadata management was only beginning to be understood in the mid-1990s, and focus on Semantic technologies was virtually non-existent. The world according to the DBMS in 1995 had a relational management system in the middle, with ETL feeding in and reports coming out. This might be thought of as a three-layer, stove-piped, database-systems view of the data architecture.

The Enterprise Single Instance – While consolidating like capabilities into marts, stores or some other ‘functional single instance’ approach has achieved quite a bit of success over the past two decades, attempting to manage all data in one structure has proven much more difficult. This is why the notion of Massively Parallel Processing (MPP) was needed to make it viable back in the 1990s. MPP in the context of proprietary hardware was expensive, though, and perhaps failed to recognize the power of networked processors running on inexpensive hardware (i.e., the Google scalability model). The other key consideration here was the added steps needed to make such a system perform within reasonable parameters. So the single-instance enterprise faced, and still faces, major hurdles in terms of cost, manageability and performance.
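The scalability model alluded to above – many inexpensive processors working over partitioned data rather than one monolithic instance – can be illustrated with a toy Python scatter/gather sketch (the partitions stand in for chunks of a distributed file system; everything here is simplified for illustration):

from multiprocessing import Pool

def partial_sum(partition):
    # Each worker stands in for a cheap commodity node holding one
    # partition of a distributed file system.
    return sum(partition)

if __name__ == "__main__":
    # Four partitions of a dataset that never lives in a single instance.
    partitions = [list(range(i, i + 1000)) for i in range(0, 4000, 1000)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, partitions))  # scatter, then gather
    print(total)  # 7998000, identical to summing a monolithic store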


Data is no longer confined within the context of single systems
EDW Fallacies
If we were to directly challenge the core EDW assumptions and illustrate the fallacies associated with the philosophy, our list would resemble the following:
  • The Business will remain static over a relatively long period of time.
  • The Enterprise will remain static over a relatively long period of time.
  • That source data and data exploitation should not be managed synergistically, in other words that Decision Support or Business Intelligence solutions built on top of EDW source data should be viewed as separate, albeit related efforts.
  • That the data layer and the application layer can or should be viewed or designed separately.
  • That computer hardware would not catch up to the processing load – i.e., that the data layer would always require specialized Massively Parallel Processing (MPP) in order to manage very large quantities of data. This assumption also implied the data would remain in a single-instance data source. Instead of parallel processors deployed in specialized equipment, Big Data now uses the cheapest processors / equipment possible in a commodity approach, with data spread out across distributed file systems. In fact this has been taken even further: just this week the US Government announced the completion of the world's most powerful supercomputer, which achieved its latest gains by using commodity hardware (in this case GPUs, game video processors widely available on the market).
  • That network architecture, data architecture, application / SOA architecture and enterprise architecture are separate.
  • That the Internet (Cloud) would not represent a viable mechanism for connecting to distributed data sources.
  • That unstructured data was not as valid as structured data (mainly because no mechanism existed to incorporate it into traditional database management approaches).
  • That most major transformations need to occur before data is placed into the primary storage / management entity (i.e. DBMS, warehouse).
  • That there is a single version of the truth, period. This is perhaps the biggest fallacy behind all data warehouse, MDM and Governance solutions. Data can be managed, but it is dynamic and always will be. Viewing data as incontrovertible, orthodox truth immediately eliminates much of the value that data otherwise provides. Situations change, and every stakeholder views the whole from a unique perspective. Yet there can still be order in a relativistic environment (much as there is in the real world). This doesn't mean data cannot be standardized or managed – it merely takes into consideration the inevitable evolution that will occur.


Copyright 2012  - Technovation Talks, Semantech Inc.

Demystifying Enterprise Architecture - Part 2

Exploiting EA as a Primary Enterprise Unification Mechanism
For many, this is what EA was always supposed to be; yet when handed the EA tools and methodologies currently recommended by industry, this objective somehow seemed to pass out of reach. So how exactly could an EA fulfill this role? Every enterprise is composed of a series of lifecycles – development lifecycles, integration lifecycles, operational lifecycles – and these are all contained within the larger organism of the lifespan of the enterprise itself. The enterprise is alive in a very tangible way, but it is hard for us to visualize how it came to be, how it has grown and how it will continue to thrive. Where is the map, the key to how and why it developed the way it did? All living organisms are built upon a code, one that captures their evolutionary history – we need enterprise DNA.

DNA captures the system of systems as well as all of the constituent components; it captures the rules and road-maps for component and holistic transformation. DNA is universal and wholly interoperable, the code is flexible enough to support infinite variation. DNA is data, it is rules and it can be visualized and mapped. DNA represents the single most successful system control mechanism that we know of in the universe, yet even with our extensive understanding of it we have largely failed to apply the secret to its success. DNA works because it is simple, and because it is simple it can manage infinite complexity without intervention. This is our meta model for enterprise architecture, which is essentially an extension of reality, a virtual organism. 

If we begin with this premise and then determine that we need to apply it to an organization, we are already taking a fundamentally different path than most follow. Our perspective on what the EA means, how long it will last, and our level of commitment to it all change once a decision has been made to capture, maintain and consciously craft the living genetic framework of all aspects of the organization. DNA is not something that goes away once an organism matures – it stays for the entire lifespan, across all changes and all transformations, and passes knowledge forward to future generations within its genetic heritage. When this is done properly, the organism is oblivious to the mechanism facilitating it. Enterprise architecture, by contrast, is almost always attempted in painful fits and starts – rarely if ever does it become a permanent part of the larger lifecycles that spawned it. And when the EA efforts are tossed aside, knowledge is lost, opportunities for continuity are missed and the roadmap vanishes beneath the shifting sands.

Enterprise Architecture includes both Frameworks and Patterns

Once you take the philosophical leap as to what the true nature of EA ought to encompass a number of other realizations start to become apparent, including:
  • Narrowly defined or highly specialized EA frameworks while interesting in their own right are ultimately doomed to fail. They are not flexible enough to encompass an enterprise, a lifecycle or combinations thereof.
  • Enterprise Architecture cannot be viewed separately from either the systems or processes it is meant to represent – just as DNA is part of an organism, the architecture is embedded into the organization. An EA project that is undertaken as external snapshot cannot hope to achieve a true understanding of what the organization or project is all about and it will ultimately provide disappointing results.
  • The language of architecture must at all times be accessible to all participants in the enterprise who have a vested interest in its health. Once the visualizations, notations, frameworks or terminology become the province of ‘architecture experts,’ the value of the architecture has been lost and the EA will gradually or rapidly wither away.
  • Architecture is also a communication medium through which understanding can be conveyed and assimilated. It becomes the basis for all other collaboration, the reference point for where each conversation begins and the parameters of the discussion.
In order to leverage EA as the primary unifying force in your enterprise, you must first discard your previous assumptions about what architecture is and how it is currently practiced, and focus on how to make it relevant to the purpose it was intended for. It is a map – a guide, if you will – and shouldn’t have to be deciphered to be followed. Nor is it something to be handed off to others isolated within or outside of your organization; it must be endorsed and embraced from the top. The form it takes is largely dependent on the needs of the organization, although for an EA to be leveraged for enterprise unification it ought to share certain characteristics, such as:
  • The EA must have a clearly defined semantic model, using common, easily identifiable terms and conforming with industry standard terminology where possible.
  • The EA must have ‘hooks’ into all of the core processes utilized across the enterprise.
  • The EA must have the ability to be subdivided without losing its ‘DNA’ integrity (a minimal sketch of this idea follows the list).
  • The EA must be universally available to anyone who wishes to either reference it or work with it; this implies that it must be updateable using a flexible ‘social publishing’ paradigm and be entirely web-based.
  • The EA must encompass the enterprise lifespan allowing for unlimited lifecycles within – this is the only way to build true organizational traceability. 
  • EA management must support some level of automation – for EA-based unification to function properly, it can’t be a continuous manual process.
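As a thought experiment only (hypothetical names, not a real tool), the characteristics above could be sketched as a simple data model in Python: every element carries a semantic type drawn from a common vocabulary, hooks into core processes, and can be subdivided without losing its lineage.

from dataclasses import dataclass, field

@dataclass
class ArchitectureElement:
    """One 'gene' of the enterprise DNA: a named element with a semantic type
    drawn from a common vocabulary, hooks into core processes, and children
    so the model can be subdivided without losing its lineage."""
    name: str
    semantic_type: str
    process_hooks: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def subdivide(self, prefix: str) -> list:
        # A sub-architecture remains self-contained: children keep their hooks.
        return [c for c in self.children if c.name.startswith(prefix)]

billing = ArchitectureElement("billing.service", "ApplicationService", ["release-mgmt"])
enterprise = ArchitectureElement("enterprise", "Enterprise", children=[billing])
print([e.name for e in enterprise.subdivide("billing")])  # ['billing.service']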
In many ways, the philosophical commitment is the most important step in any EA effort; it is also the first one which must be addressed. After an agreement has been reached as to how EA should be used then there needs to be an examination of its practical implementation. The first and most obvious practical application of EA is as the design framework for your enterprise.



Copyright 2012  - Technovation Talks, Semantech Inc.

Demystifying Enterprise Architecture - Part 1

What is Enterprise Architecture (EA)? It is mysterious, it is complicated, and oftentimes it can appear almost artistic in nature. Information systems represent virtual capabilities; they are often grouped within interwoven boundaries or ecosystems referred to as enterprises. An enterprise architecture is generally considered to be a data-driven visual roadmap to that system-of-systems enterprise. But what does an EA really provide, how are effective ones created and exploited, and why do so many attempts to develop EAs end in disappointment?

In these posts, we will examine the nature of Enterprise Architecture and the practical considerations as to why some work and others don’t. Ultimately, we consider ourselves strong advocates for this technique; however that doesn’t prevent us from taking a critical look at how the industry currently utilizes it and how the practice might be improved.



Enterprise Architecture can be viewed narrowly or can encompass all architectures within an Enterprise
Background
Perhaps the first and most famous example of architectural visualization in information technology is the von Neumann processing model. This was a logical representation of what later became the physical / hardware architecture of virtually every computer ever built. Before the hardware was designed, however, the conceptual approach to managing the processing logic needed to be expressed. The original diagram may well have been drawn on the back of a napkin or envelope, but the medium was not important – what mattered was the ability to clearly visualize the concept and, within that visualization, to illustrate the logical workflow.

Many of you have probably heard countless analogies comparing the construction of buildings with blueprints to the development of systems with architectures. To some extent the analogy is accurate, but there are some important differences, including:
  • A degree of subjectivity in information systems not present in brick & mortar construction
  • A degree of complexity in information systems not present in brick & mortar construction
  • An assumption of a greater degree of interdependence and inter-relationships between virtual components as opposed to physical ones. (to some degree this is becoming less differentiated as more information systems are integrated into core building operational functions)
  • The realization that the virtual system can become cognizant that it is integrated within a global infrastructure (buildings are more locally oriented).

All of the factors listed above tend to point to a singular conclusion – virtual entities are not bounded, either by time or space or by capability, thus their potential for harnessing complexity or becoming mired in it is exponentially higher.

Many people trying to get a handle on Enterprise Architecture make a mistake early on by focusing entirely on one or more EA frameworks. EA frameworks are self-contained architecture paradigms – including data models, notation and methodologies – designed to address various aspects of EA or specific markets. You may be familiar with some of them: the Zachman Framework, FEA, DoDAF, TOGAF. While these frameworks are important, and for some practitioners will be mandatory, none of them represents the entire EA lifecycle. As is the case in many areas of IT, specialization has at times become too specialized, making it difficult to see the big picture. And not seeing the big picture in EA is one of the primary reasons it is rarely applied to its full effect.

Enterprise Architecture is more than the visualization of a technical solution; it is also the visualized roadmap of the development and operational lifecycles associated with the solution. EA that is used only to meet a mandate or to support a high level business design or to support only a system level technical design will fail to place those elements in their larger context. To understand how EA can be used to transform the management of your enterprise information system environment we’ll need to look at EA from several core perspectives:

•    EA as a unifying, organizational mechanism
•    EA as design framework
•    EA as interoperability framework 


Copyright 2012  - Technovation Talks, Semantech Inc.

Thursday, November 1, 2012

Cyber Security Doctrine

The foundation for designing the next generation of Cyber Security techniques, methodologies and solutions begins with an updated philosophy or doctrine. We're going to take a stab at that with the following Cyber Security Doctrine.
  • Doctrinal Concept 1 – Cyberspace is an active battlespace; in other words, Cyberwar is today’s proxy battlefield – the new “cold war.”
  • Doctrinal Concept 2 – Cyberspace is not just about defense; in fact, the view of the NETOPs environment as a reactionary architecture is one of the reasons it cannot keep pace with emerging threats. Offense is needed.
  • Doctrinal Concept 3 – Cyberspace is a rapidly changing paradigm, the current processes for deploying capability are not yet aligned to this rapid pace.
  • Doctrinal Concept 4 – Cyberspace combines systems (information & weapons systems) and communications in ways previously not imagined. This has a strangely unifying effect, eventually merging the systems involved. Some of this is intentional, some of it is an accidental byproduct.
  • Doctrinal Concept 5 – Cyberspace is not and cannot easily be segmented, the overlap across various participants complicates action and mission success. Coordinated missions will become the norm. 
  • Doctrinal Concept 6 – Cyber “events” occur within a context – in fact they form patterns which can be modeled, predicted and mitigated (a toy sketch of this idea follows the list).
  • Doctrinal Concept 7 – Cyberspace represents a previously unavailable route of access to government (or any confidential) information and capability; however the approach towards security and privacy must still be balanced against common sense and the core principles upon which the nation was founded.
  • Doctrinal Concept 8 – The most important “Cyber-weapon” is and always will be the human mind; management of Cyber Operations must foster innovative and agile decision making. Threats must be addressed proactively and with a minimum of bureaucracy in order to match the natural advantages held by potential Cyber opponents. 
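As a toy illustration of Doctrinal Concept 6 – that cyber events form patterns which can be modeled – here is a hedged Python sketch of a sliding-window burst detector; real cyber analytics are far more sophisticated, and every name here is hypothetical.

from collections import Counter, deque

class BurstDetector:
    """Model event patterns over a sliding window and flag event types whose
    share of recent traffic spikes above an expected baseline."""
    def __init__(self, window: int = 50, max_share: float = 0.3):
        self.recent = deque(maxlen=window)
        self.max_share = max_share

    def observe(self, event_type: str) -> bool:
        self.recent.append(event_type)
        share = Counter(self.recent)[event_type] / len(self.recent)
        return share > self.max_share  # True -> anomalous burst

detector = BurstDetector()
events = ["login_ok"] * 30 + ["login_fail"] * 20
alerts = [e for e in events if detector.observe(e) and e == "login_fail"]
print(f"{len(alerts)} anomalous login_fail events flagged")  # the burst, not the noise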
 
Cyber Attacks are not random events...




Copyright 2012  - Technovation Talks, Semantech Inc.