
Friday, December 13, 2013

The Challenges to Global Collaboration

There has been a lot of discussion lately on the web about how we're beginning to achieve the decades-old goal of consistent global collaboration for innovation and problem-solving. The celebration may be a bit premature, not unlike prior victory dances we've engaged in related to:

  • Cancer Research
  • Flying Cars or Electric Cars for that matter
  • Space travel
  • AIDS cures
  • Big Data changing the world
  • Artificial Intelligence and so on...
We have had a bad habit lately of confusing the initiation of a trend or innovation with its mature realization. The technical foundation for global collaboration has been developed and deployed over nearly four decades. That's how long it has taken to define the network communications paradigm and deploy the necessary bandwidth, information resources and devices to make such a lofty goal possible. But then again, perhaps we need to step back and ask ourselves what Global Collaboration really is and identify the remaining barriers that may otherwise hold it back.

Global Collaboration represents the exploitation of both technological assets and cultural predispositions to share knowledge and jointly resolve common challenges. Now, this model isn't entirely new, is it? Academia has been doing this for centuries - sort of. Within higher education there is (and always has been) a tacit expectation for various types of cross-institutional mind-share. However, that expectation has always been restricted by the following factors:
  1. Competition for ownership of ideas.
  2. Limits in the ability to communicate (technology).
  3. Limits in access to the right information (technology and cultural boundaries or secrecy - and this factor is of course related to factor 1).  
  4. A limited working model in how to organize "virtual communities." In the old days, these communities were characterized as scientific societies, and much of what they accomplished was based on direct point-to-point correspondence, journals and meetings. We are still working with all of those metaphors today, even though we have witnessed the birth of real-time communities powered by the combination of social media and mobile devices. 
  5. Orthodoxy. This may sound odd, but in fact it is orthodoxy that puts the institution into "institutionalism." In other words, it represents the ultimate barrier to acceptance of or even discussion of unorthodox or disruptive concepts. 

As much as we'd like to think that modern society is open-minded and innovative - and that we're hurtling from one innovation to another at breakneck speed - that's just not the case. The telephone was invented in the 1880s, radio communication before 1920, television around 1930, computers in the 1940s and miniature transistors in the 1950s - yet the journey from those various inventions to a mobile device that combines them took more than 60 years. The story of the electric car is much worse - it was invented at roughly the same time as the internal combustion engine was worked out - yet one was promoted and the other neglected. We don't typically move nearly as fast as we think we do, and we often make poor choices along the way.

What does this have to do with Global Collaboration? Well, the premise goes that if dozens or hundreds or even thousands of minds were directed at a problem, then solutions to that problem would happen faster and be vetted better. It would no longer be Edison versus Tesla, but countless inventors competing and collaborating on a level playing field - or at least that's the idea. The closest thing to this model that we have now is Open Source software.

But does the Open Source software model represent the type of Global Collaboration that futurists have been predicting for the past half century?  The answer is yes and no.

Yes, it represents a prototype of highly specialized working collaboration (on a global scale); no, it doesn't seem to represent the prototype for making truly revolutionary breakthroughs. So, why isn't the "Open Source" model redefining innovation and progress in software or science? Here are some possible explanations:

  1. The (majority of) projects are too narrowly defined to make real breakthroughs.
  2. The context (software development) is too limited. 
  3. Coding is focused on skill in execution - and problem solving at a tactical scale. It doesn't usually require any quantum leaps in conceptual understanding or practical application.
If you think I'm being harsh about the Open Source movement as a force for global innovation - then ask yourself how much of the core technology powering our current infrastructure came out of it:
  • Well, they gave us MySQL - and SQL came from previous working groups - but it was also very similar to existing products.
  • What about Java, Linux, Apache etc. All good stuff, but not truly disruptive - the seeds for all of that had been developed elsewhere. 
  • Big Data, Cloud tech - again, the same as above.  

Lots of good stuff is coming out of the Open Source movement - it's just not that revolutionary, and much of what has been revolutionary has come out of the older-style collaborative ecosystems like Academia and the DoD (or combinations thereof).

So, with all of that as preface, here is a list of some of the challenges that I think are holding back global collaboration (at least as it has been envisioned):

  1. Competition can be both beneficial and destructive. We need to find a better way to manage it and ensure open playing fields for smaller players. In other words, how can we provide tangible rewards for contribution that don't end up excluding the majority of participants? 
  2. Secrecy -  One of the lasting holdovers of the Cold War is a subculture of secrecy that seems to be present in almost every major nation on the planet. This represents perhaps the single biggest obstacle to the eventual global cooperation that futurists tend to describe. Solving problems together requires a level of trust that simply doesn't exist yet. Hopefully it will someday.
  3. Orthodoxy - How many times have brilliant ideas been dismissed because the community in which they were introduced rejected them? More times than can be counted, no doubt. The biggest barrier to most innovation is, and always has been, a lack of imagination. When careers depend on defending existing paradigms, newer paradigms will take longer to birth or die in the cradle.
  4. A framework or methodology for global problem solving. There are perhaps thousands of these or similar approaches floating around (from academia to open source software, for example), but a truly effective one remains elusive. 

My next post will explore what "a truly effective methodology for global problem solving" might look like.

Copyright 2013, Stephen Lahanas


Sunday, November 10, 2013

Understanding Data Architecture

Earlier this week, someone asked me what at first sounded like a very straightforward question: "What is Data Architecture?" - or more precisely, what does it mean to you? I'm not usually at a loss for words when it comes to expounding upon IT Architecture related topics, but it occurred to me at that moment that my previous understanding of what Data Architecture really represents has been a little flawed, or perhaps just outdated. So I gave a somewhat convoluted and circuitous answer.

Where does Architecture fit within this picture?

The nature of what's occurring in the Data domain within IT is itself changing - very quickly and somewhat radically. The rise of Big Data and the proliferation of user-driven discovery tools represent quite a departure from the previous, more deterministic view of how data ought to be organized, processed and harvested. So how does all of this affect Data Architecture as a practice within IT (or more specifically, within IT Architecture)?

But before we dive into the implications of the current revolution and its subsequent democratizing of data, we need to step back and look again at the more traditional definitions of what Data Architecture represents. I'll start with a high-level summary view:

Traditional Data Architecture can be divided into two main focus areas: 1 - the structure of the data itself, and 2 - the systems view of whatever components are utilized to exploit the data contained within those systems. Data in itself is the semantic representation or shorthand of the processes, functions or activities that an organization is involved with. Data has traditionally been subdivided (at least for the past several decades) into two categories: transactional and knowledge-based or analytic (OLTP vs. OLAP). 
Now we'll move to a traditional summary definition of Data Architecture practice:

Data Architecture is the practice of managing both the design of data as well as of the systems which house or exploit that data. As such, this practice area revolves around management of data models and architecture models. Unfortunately, the application of Governance within this practice is sporadic and when it does occur is often split into two views: governance of the data (models) and governance of systems (patterns and configurations). 
So, that seems to be fairly comprehensive; but is it? Where does Business Intelligence fit in - is it part of data management or system management? Is it purely knowledge focused, or does it also include transactional data? For that matter, do Data Warehouses only concern themselves with analytic data, or can they be used to pass through transactional data to other consumers? And isn't Big Data both transactional and analytic in nature? And by the way - how do you model Big Data solutions, either from a systems or data modeling standpoint? Now we begin to see how things can get confusing.
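As a rough illustration of the OLTP/OLAP split discussed above, here is a minimal sketch using Python's sqlite3 (the table, the column names and the numbers are invented for illustration only):

```python
import sqlite3

# In-memory database standing in for both a transactional store and an analytic view.
conn = sqlite3.connect(":memory:")

# Transactional (OLTP) structure: one row per business event, written as events occur.
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("east", 100.0), ("east", 50.0), ("west", 75.0)],
)

# Analytic (OLAP) use: the same data reshaped to answer aggregate questions.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 150.0), ('west', 75.0)]
```

The same rows serve both purposes here only because the example is tiny; at enterprise scale the two workloads usually end up in separately designed structures, which is exactly why the architecture questions above get complicated.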

We also need to take into consideration that there has been an attempt to standardize some of this from an industry perspective - it's referred to as the Data Management Body of Knowledge, or DMBOK. I think in some ways it has been successful in laying out an industry taxonomy (much like ITIL did), but less successful in linking that back into the practice of Data Architecture. The following diagram represents an attempt to map the two together...

There isn't a one-to-one mapping between the DMBOK and data architecture practice, but it's close
One of the areas in which the DMBOK has fallen short is Big Data; my guess is that it will need to rethink its framework once again relatively soon to accommodate what's happening in the real world. In the diagram above, we have a somewhat idealized view in that we've targeted a unified governance approach for both data modeling and data systems.

Let's take a moment and discuss the challenges presented by the advent of new Big Data and BI technology. We'll start with BI - let's say your organization is using Oracle's BI suite, Oracle Business Intelligence Enterprise Edition (OBIEE). Within OBIEE you have a semantic / metadata management tool called the Common Enterprise Information Model (CEIM). It produces a file (or files) that maps out the business functionality of all the reports or dashboards associated with the solution. Where does that fit from an architecture standpoint? It has a modeling-like interface, but it isn't a 3rd normal form model or even a dimensional model. It represents a proprietary Oracle approach (both as an interface and as a modeling approach). It allows you to track dimensions, data hierarchies and data structures - so it is a viable architecture management tool for BI (at least for OBIEE instantiations). But some traditional Data Architecture groups would not view this as something the architects would manage - it might be handed off to OBIEE administrators. This situation is not unique to Oracle, of course; it applies to IBM / Cognos and other BI tools as well, and there's a whole new class of tools that are completely driven by end users (rather than structured in advance by an IT group).

Now let's look at Big Data. Many of the Big Data tools require command-line management and programming in order to create or change core data structures. There is no standard modeling approach for Big Data, as it encompasses at least 5 different major approaches (as different, say, as 3NF is from Dimensional). How does an architecture group manage this? Right now, in most cases it's not managed as data architecture but more as data systems architecture. The problem here is obvious: just as organizations have finally gained some insight into the data they own or manage, a giant new elephant has entered the room. How is that new capability going to impact the rest of the enterprise - how can it be managed effectively?
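To make the point about multiple modeling approaches concrete, here is a sketch of one and the same order record shaped for several different store types. All field names and values are invented for illustration; real products each have their own conventions:

```python
# The same customer-order fact under different data modeling styles.

# Key-value store: opaque value, meaning lives entirely in application code.
key_value = {"order:1001": "cust=42;amount=99.50"}

# Document store: self-describing, nested structure.
document = {"_id": 1001, "customer": {"id": 42}, "amount": 99.50}

# Wide-column / column-family store: rows grouped under a column family.
column_family = {"orders": {1001: {"cust_id": 42, "amount": 99.50}}}

# Graph model: entities as nodes, the relationship itself as a first-class edge.
graph = {
    "nodes": [{"id": 42, "label": "Customer"}, {"id": 1001, "label": "Order"}],
    "edges": [(42, "PLACED", 1001)],
}
```

None of these shapes falls out of a 3NF or dimensional model, which is why a single standard modeling notation for Big Data remains elusive.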

Back to the original question - what is Data Architecture? I'd like to suggest that the practice of Data Architecture is more than the sum of its traditional activities. Data Architecture is the practice of understanding, managing and properly exploiting data in the context of the problems any given organization has to solve. It is not limited by prior classifications or practice, but has a consistent mandate to represent - and hopefully govern in some fashion - data as an asset (internally or shared collaboratively). Data Architecture as we know it is going to change quite a bit in the next two years, and that's a very good thing.

Copyright 2013, Stephen Lahanas


Monday, November 4, 2013

The Value of Architecture Assessments

While many people are becoming somewhat familiar with IT or Enterprise Architecture, relatively few know much about Architecture Assessments. This is unfortunate given the significant value proposition such exercises provide. For example, had one or more Architecture Assessments been performed during the course of the Healthcare.gov project, it is unlikely that the Obama administration would have been surprised by the mess it's now facing.

Problem Space analysis is one of the techniques used with assessments - it can apply
both to the business and technical aspects of a project

An Architecture Assessment also differs from other traditional Architecture activities in that the expectation is generally that third-party personnel will perform it. The reasons for this include the following:

  1. Assessments are one of the key tools involved in IT oversight activities (sometimes referred to as Independent Validation and Verification or IV&V).
  2. It is more likely that an accurate assessment can be obtained by architects / investigators without a vested interest in the project. 
  3. The skillset of the person doing the assessment is critical - it needs to be an architect and not merely a technical or product expert. This is the only way to ensure that all options / alternatives are properly considered / assessed. 

So what exactly is an Architecture Assessment? A typical assessment tends to include the following categories of activity:

  • Information Gathering  
  • Design & Project Review
  • Design & Project Recommendations
An assessment also typically includes one or more types of specific analysis as well:
  • Analysis of Alternatives
  • Root Cause Analysis
  • Problem Space Mapping & Resolution 
One way to capture alternatives is through Decision Trees.
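As a sketch of how a Decision Tree can capture alternatives during an assessment, here is a minimal example. The questions, thresholds and option names are entirely hypothetical, not drawn from any actual assessment:

```python
# A minimal decision tree for an Analysis of Alternatives: each inner node
# asks a yes/no question; each leaf names the recommended option.
tree = {
    "question": "Is the expected user load > 10,000 concurrent sessions?",
    "yes": {
        "question": "Is there budget for a dedicated integration team?",
        "yes": "Option A: custom portal with performance engineering",
        "no": "Option B: prepackaged portal software",
    },
    "no": "Option C: lightweight hosted solution",
}

def decide(node, answers):
    """Walk the tree using a dict mapping each question to 'yes' or 'no'."""
    while isinstance(node, dict):
        node = node[answers[node["question"]]]
    return node

choice = decide(tree, {
    "Is the expected user load > 10,000 concurrent sessions?": "yes",
    "Is there budget for a dedicated integration team?": "no",
})
print(choice)  # Option B: prepackaged portal software
```

The value of the structure is less the traversal than the discipline: every alternative has to be written down next to the condition under which it wins.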

Perhaps the most important function that an Architecture Assessment can provide is a mechanism to challenge assumptions and combat complacency. One of the main reasons that IT projects fail today is that there is generally no incentive or expectation to raise issues or problems. Rather than being viewed as a healthy activity, identification of problems is itself feared - which ensures even more pain later on when the issues finally surface (as they always do).

When compared to the cost of the rest of a typical IT project (and any potential loss or cost overruns associated with problems that aren't identified or managed in a timely fashion), a relatively brief assessment exercise is generally less than 1% of total cost yet could make the difference between success or failure...

Copyright 2013, Stephen Lahanas


Sunday, November 3, 2013

What Does an IT Architect Do?

This is an interesting question for anyone thinking about working as an Architect but also for anyone else working in IT because there isn't a consistent set of definitions regarding architects within the industry. Architects who do more or less the same things with the same skills could variously be referred to as:

  • Enterprise Architects
  • Solution Architects
  • Project Architects
  • IT Architects
  • Technical Architects
  • Data Architects
  • Application Architects
  • Business Architects
  • Cloud or SOA Architects
  • Security Architects

There are many types of architects, yet they all share certain characteristics...
And the list goes on. What, if anything, differentiates these roles (1)? More importantly, what elements do these roles have in common (2), if any? Not too long ago in the history of IT, few people used the term Architect to describe anyone. Why has this changed (3)? We've gone from no architects to more than a dozen flavors in perhaps 15 or so years.

Let’s answer some of these questions:

  1. The Architecture roles (listed above) are generally differentiated by some level of solution specific expertise in one area or another. This could involve methodology, toolsets, skillsets or even industry domain knowledge.
  2. An Architect is not, or at least should not be, tied to one specific element of expertise – if he or she is focused in only one area then they become a Subject Matter Expert (SME) and not an Architect. This is an important and practical distinction because technology keeps changing – today’s stack will not be the same as tomorrow’s. Architects must understand the entire stack as it evolves and change skill sets as that evolution occurs. All Architects are designers. All Architects are problem solvers. All Architects are by nature also systems and business analysts. 
  3. The genesis of the Architecture role in IT is directly related to the rapid decentralization and added complexities associated with the PC and Client Server revolution (and everything else that has occurred since then). In other words, as IT environments and systems became more complex, it was apparent that a “complexity manager” was required. That person is the Architect. 

Why would we necessarily refer to a complexity manager as an Architect? The metaphor as designer is obvious – but just as important is the idea that the Architect is the one who has the vision and understanding to see how all of the various pieces ought to fit together. An Architect, in either realm is by nature also an integrator.

So, how does an IT Architect differ from a traditional Architect (the folks who design blueprints for buildings)? Perhaps the most interesting difference between the two roles is that there is often an expectation within IT that the Architect or Designer is also the Builder. In the world of brick and mortar construction, it is very rare to see builders follow a career path from hanging drywall and pouring foundations to drafting the design for the entire building. Yet, in Information Technology it is relatively common to see developers become architects.

There is a good reason why this happens in IT but there is also a problematic result as well. The reasons why it happens are because:

  1. the architecture career path is still unclear and we have to get Architects from somewhere and 
  2. unlike with buildings, Architects in IT need to understand more of the technology associated with what gets built. 
We need that understanding because IT is much more dynamic than building design – we ‘reinvent’ our industry about every 5 to 10 years now. Some people still consider Frank Lloyd Wright’s work to be cutting edge and he’s been dead for more than 50 years – clearly IT and traditional architecture move at a different pace.

The problematic outcome of this situation is that many Architects tend to view the design and analytic processes associated with architecture as inferior to ‘just building something’. This viewpoint more or less contradicts the true mission of what Architecture is supposed to do and the problem that it solves. In other words, Architecture arose to manage complexity; yet rapid build with minimal analysis is the number one culprit behind increasing complexity in the first place. This type of Architect could be referred to as an Architect in name only, because it generally implies that they would not be practicing many of the key attributes associated with Architecture. This also potentially opens up a larger debate regarding Agile versus non-Agile IT, and that debate will never go away. The important takeaway from that debate, though, is that when you reach the system-of-systems level, architecture becomes increasingly important.

Contrary to popular belief, Architecture and Agile Methodology actually complement one another.

There is another consideration in understanding the role of the IT architect as well – without industry-standard expectations of what the Architect role actually represents, there can be wild inconsistencies across or even within organizations that utilize architects. So, how could we solve this dilemma? Here are some suggestions:

  1. Develop an industry-standard set of role descriptions for IT Architecture. (There are groups that have developed standards for Enterprise Architects, but that is entirely too narrow to handle the larger set of expectations associated with IT architecture.)
  2. Ensure that any Architect in any role, anywhere – is given the top level training or expectations that are common across all architecture first (before drilling down). 
  3. Help foster the distinctions between lead developers, tech leads, SMEs and Architects. This will help organizations determine when they really need an Architect as opposed to one of the other roles. If the roles are mixed it is highly likely that one part or the other of the combined role is going to get shortchanged – and that could lead to a number of unforeseen consequences. 

There are several other key attributes that help to distinguish IT Architects from other roles in IT; these include:

  • Architects are often asked to act as liaison between other solution stakeholders. Sometimes Architects even become the official solution advocates.
  • While sometimes asked to be advocates, Architects also tend to be the key resource in determining when a solution needs to be dropped. An Architect must be impartial when making such decisions.  
  • Architects are complexity experts or managers – in other words typically Architects are dealing with “Systems of Systems” scenarios. Thus, the Architect has to be concerned not just with how the solution will operate in its own context, but how it will function in the context of the larger ecosystem. 
  • Architects act as honest brokers in being able to question assumptions and drive change in order to mitigate potential risks. While other roles may be involved in this; typically Architects have the best vantage point to deal with it.
  • Architects are change agents – more-so than any other IT role. Architects are asked to either envision or evaluate new technology and lead the move towards it.

The IT Architect is a relatively new role; it is often interpreted differently but it is uniquely positioned to become ever more important. As this role continues to help evolve the IT landscape, it too will evolve.

Copyright 2013,  Stephen Lahanas


Thursday, October 31, 2013

The Real Problem with Healthcare.gov

The biggest story in IT this year is, without a doubt, the rollout issues surrounding Healthcare.gov - a.k.a. Obamacare. The story is both bizarre and quite familiar at the same time, and represents perhaps the first time that national politics has been so laser-focused on an IT project. Many people who have been introduced to the world of IT through this might think that the saga of Healthcare.gov is unique or somehow unusual - well, it isn't. By many accounts, half or more of IT projects fail - this is consistent across government and industry (there have been spectacular failures in both arenas).

As the media circus has picked up steam over the past 2 weeks, we've been treated to a Congressional attempt to hire John McAfee and Edward Snowden's offer to fix the problem because he knew what it was. We're just waiting on Miley Cyrus to weigh in...

that is, if you can sign on...
The figure associated with the project so far (approx. $170 million) is not in fact that large for a complex IT project. Just last year, the Air Force Logistics ERP project, ECSS, was cancelled after $1.2 billion was spent - no one seemed to notice (outside of interested circles). The Secretary of Health & Human Services has held herself personally accountable in multiple apologies so far for the various problems associated with the site, which seem to include the following:

  • An inability to handle the anticipated user traffic (multiple site crashes)
  • An inability to generate user logins
  • Errors which involve policy termination
  • Password reset failures
  • Failure on the underlying data integration hub
  • Multiple page not found related errors
  • Various problems surrounding form processing
  • Data center crash
There have been any number of armchair quarterbacks giving simple fix suggestions - as with most similar situations, such advice is generally less than useful. The most common one I've seen relates to the need for the government to use open source coding practices (which it was in fact doing, with code up on GitHub). So, let's be clear - for projects like this, there are not now and never will be silver-bullet fixes - they are too complex. It's also important not to make knee-jerk comparisons to commercial solutions either. I've seen a number of quotes trying to compare Healthcare.gov and Amazon.com; Amazon spent more than a decade perfecting its technologies and likely invested several billion dollars in them. From news accounts, it seems as though coding for Healthcare.gov only began sometime early this year.  

So what happened? Well, I've only got clues from various articles but several key elements seem to stand out:
  1. The decision regarding who should enroll and when didn't occur until very late in the game (exactly when isn't clear).
  2. That enrollment capability seems to be tied to Oracle Identity Management suite - which is a very complicated tool and requires a significant amount of engineering (custom code, performance considerations, etc.).
  3. For whatever reason, the development timeline got pushed to the right but the rollout deadlines didn't change. 
  4. There seems to be some lack of clarity as to who was the integrator (the prime vendor or government agency).
  5. The contract was issued in December of 2011, but requirements were delayed - and requirement changes were being made up through this September. (very common story actually)
  6. There were a great number of moving parts - more than 50 contractors. This may also imply an overly complex architecture.
  7. It appears that there was an excessive amount of code involved - which raises the question: why wasn't more of this handled with prepackaged (portal) software?
  8. The prime vendor didn't seem to have experience architecting portal solutions with the type of volume that would clearly be associated with Healthcare.gov. 

John McAfee is not the tech support we need to solve the mystery
Unfortunately, given the nature of the problems, it seems as though fixing Healthcare.gov, while doable, may take longer than is being promised. Sometimes, throwing extra money and attention at a problem at this stage has the unwanted effect of making things even more complicated. Here are some suggestions, though, that may help alleviate the current crisis:

  1. Eliminate the need to register just to shop for plans.
  2. Refine project roles and responsibility immediately - choose / assign a lead integrator.
  3. Switch to incremental roll-outs for actual enrollment (e.g. shopping available nationwide while enrollment opens up state by state).
  4. Assign tiger teams per critical problem (e.g. one for identity management, one for the data hub, one for performance engineering etc.)
  5. Do not oversell the fix timeline; in other words, don't promise something you can't deliver - extend the overall compliance timelines in order to give the project time to catch up. 

Copyright 2013, Stephen  Lahanas


Saturday, October 19, 2013

3 Common Cloud Challenges

Like all new technology trends, Cloud Computing brings with it both opportunities and challenges. Unfortunately, the current hype cycle across the IT industry hasn't done the best job of defining either very well (at least not yet). Most of the information out there tends to portray the Cloud as the ultimate (latest) silver-bullet technology.

So, is it? From my perspective, it could only be viewed as a conditional, evolutionary improvement - and only if the organization adopting it fully understands the implications of the technology. Which brings us to a discussion of common Cloud challenges...

Before we jump into that though, let's quantify what we're talking about a little better. There are several types of Cloud-related capability that an organization can pursue; these include:

  • Construction of one's own Clouds
  • Exploitation of 3rd party Clouds (Amazon, Rackspace, Google etc.)
  • Adoption of (limited) 3rd party Cloud services or software (SAAS)
  • or some type of Hybrid solution
We shouldn't get too bogged down in the differences between Infrastructure, Platform and Software as a Service or Public versus Private Clouds at this point because the challenges we're examining today cut across most of these distinctions. 

Common Cloud Challenges:

  1. Proliferation and Governance
  2. Automation
  3. Integration

Now if any of these seem familiar, they should. These are all challenges that first became apparent during the explosion of  "legacy" data centers for distributed computing back in the 90's and early 2000's. Let's look closer...

Proliferation: Just because you can provision a completely new environment rapidly doesn't mean that you really need to, or in some cases even should. The model that Amazon uses to serve millions of different customers shouldn't be the same as the model you use for a single enterprise. The more environments (virtual or otherwise) you create, the more you have to manage. Growing these exponentially is a particularly bad idea (although knowing that you can is somewhat cool). This is where Governance should come into play. However, Cloud Governance is a practice that's running about 2 to 3 years behind deployment and provisioning - not good...
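A governance check for proliferation could be as simple as auditing environment metadata. The sketch below assumes each environment record carries a team, an owner and an expiry date; the field names, the per-team cap and the sample data are all assumptions for illustration:

```python
from datetime import date

MAX_ENVS_PER_TEAM = 5  # assumed policy cap on environments per team

environments = [
    {"name": "dev-01", "team": "billing", "owner": "alice", "expires": date(2014, 1, 1)},
    {"name": "dev-02", "team": "billing", "owner": None,    "expires": None},
]

def audit(envs, today):
    """Return governance violations: missing metadata or per-team sprawl."""
    violations = []
    per_team = {}
    for env in envs:
        per_team[env["team"]] = per_team.get(env["team"], 0) + 1
        if not env["owner"]:
            violations.append((env["name"], "no owner"))
        if not env["expires"] or env["expires"] < today:
            violations.append((env["name"], "missing or past expiry"))
    for team, count in per_team.items():
        if count > MAX_ENVS_PER_TEAM:
            violations.append((team, "too many environments"))
    return violations

print(audit(environments, date(2013, 12, 1)))
# [('dev-02', 'no owner'), ('dev-02', 'missing or past expiry')]
```

The point isn't the code - it's that governance only works if provisioning is forced to record the metadata the audit needs in the first place.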

Automation: It took about 15 years to begin to get the traditional data centers running smoothly - much of that was due to the introduction of network and system administration automation tools. The explosion of Cloud solutions over the last 3 or 4 years has led to the creation of mountains of custom code and glueware to help run IAAS, PAAS and SAAS solutions. This is a serious problem and one that can be remedied soon given that nearly every major automation vendor has now re-architected their solutions for Cloud environments.

Integration: The last issue bleeds into this one. What happens when you introduce a Cloud, or multiple Clouds, into your organization? Does all of the legacy capability go away? How do you control data, security, performance and interfaces across hosting platforms? Integration is the number one challenge facing Cloud adopters today and will remain so for quite some time. There is only one way to solve the Cloud Integration dilemma - that's through the introduction of a comprehensive Cloud Architecture. We will define that in our next post...

Copyright 2013, Stephen Lahanas

Thursday, October 17, 2013

Revisiting Agile Business Intelligence

The other day TDWI (The Data Warehousing Institute) sent me a brochure highlighting Agile BI workshops and seminars. Here's how they define it:
"Agile business intelligence addresses a broad need to enable flexibility by accelerating the time it takes to deliver value with BI projects. It can include technology deployment options such as self-service BI, cloud-based BI, and data discovery dashboards that allow users to begin working with data more rapidly and adjust to changing needs.
To transform traditional BI project development to fit dynamic user requirements, many organizations implement formal methodologies that utilize agile software development techniques and tools to accelerate development, testing, and deployment. Ongoing scoping, rapid iterations that deliver working components, evolving requirements, scrum sessions, frequent and thorough testing, and business/development communication are important facets of a formal agile approach."
Now I found this very interesting given it's something I have been advocating for some time. However, the definition above left me a bit concerned that what is in fact being suggested is merely the adoption of Agile methodology with minimal regard to Business Intelligence architecture (we've been given a laundry list of related solutions with no clear idea of how they integrate). More importantly, the heavy focus on the development methodology leaves out what we considered the most important aspect of Agile BI (when we first presented this back in 2007) - the end user and how they are integrated into the development process and/or how they drive the very structure of BI by defining it "on the fly" themselves (this goes beyond data discovery).

We presented this in the Fall of 2007 in Chicago
Agile BI must encompass a wider architectural approach...

Since 2007, a number of tools have come out that specifically answer this end-user consideration. A good example of this is Tableau (which markets itself as "visual analytics for everyone"). So on the one hand it is both gratifying and exciting to see that Agile concepts are being extended to Data Architecture and that new products are being introduced to help bridge the gap between IT development and IT capability - on the other though, it is disturbing to see that they haven't quite merged yet in the data industry.

Why is this important? Well, because when viewed out of context (of each other) the value proposition for these innovations diminishes significantly. Data Architecture, BI Methodology and the expectations for how users will exploit data are part of the same problem space...

Copyright 2013, Stephen Lahanas

Monday, September 23, 2013

You Know You've been in IT too Long When...

Nothing says we can't be innovative and humorous...

This is an Android we can believe in...

  • You use a Magic Quadrant to comparison shop.
  • You refer to the wait staff as customer responsive service interfaces.
  • You don't understand any longer how a Cloud could produce rain. 
  • You're confused between Socialism and Social networks.
  • You wish you were working on the Skynet project.
  • You refer to your kids as 2.0, 3.0 and talk about your family as the long-term release strategy.
  • You don't think Cheetahs are Agile because they don't sprint all the time.
  • You refer to politics as "run-time governance."
  • You watch reruns of the Matrix and still think it's cutting edge (the sequels don't count).
  • You're seriously contemplating creating a cyborg using a 3D printer.
  • You can relate to Watson on a personal level.
  • You're waiting for the local haunted house to add the solution release triage room.
  • You are beginning to think Larry Ellison makes sense.
  • You remember the good ole days when an Android looked like Yul Brynner.
  • You code comment your Christmas cards.
  • All your friends are now recruiters.
  • You count calories in megabytes. 
  • You'd like to outsource your relatives (just for a little while). 
  • Your idea of a romantic line is "stroke my touchscreen."
  • You'd like to vote for the Program Manager in Chief in the next election but are confused why the office is not on the ballot.
  • You start referring to talking as wireless communication.
  • You still think the 1996 Mac guy versus PC guy ad is cool.
  • You determine what to cook for dinner using an analysis of alternatives.
  • You think Strings are composed of letters instead of twine.
  • You are beginning to realize that life is merely a digital experience.
  • You remember Big Data being 64k.

Copyright 2013, Stephen Lahanas

Sunday, September 22, 2013

Top 5 Reasons IT Projects Fail

Last Fall, I wrote an article about the top 5 reasons ERP projects fail; that was in response to news of a specific project shutdown. However, the topic of why IT projects in general seem to be so failure prone is worth exploring as well.

As far as hard statistics on how many IT projects succeed or fail, it's difficult to get a clear picture. Part of the reason this is the case is that often when a project is clearly going to fail it gets "redefined" rather than cancelled. That redefinition represents a re-baselining effort that dramatically scales back expectations with the hope that the reduced scope can be met. Sometimes that works; other times re-baselined projects are later cancelled anyway. If our measure for success were the original set of expectations associated with a project, the likely rate of IT failure would be extremely high - perhaps in excess of 50% of all IT projects.

In IT, building bridges to nowhere is fairly common, but no less costly...
This is a rather incredible statement of course, especially if we compare IT with most other types of projects across industries. What exactly does this level of failure really indicate though? It could represent any or all of the following factors:

  • That the expectations of the stakeholders were out of touch with the reality of what their IT provider organizations could manage.
  • The level of complexity was not properly gauged or managed.
  • IT is changing so fast now that a project defined last year might require major changes if it is still 'in motion' this year. 
  • The project was driven more by the perception of some need than the reality of a need. This of course refers to all those projects that are spun up in response to industry hype without a clear set of requirements. 
  • The technology assumptions (driven from the expectations) were hastily made and then never challenged later (or perhaps more appropriately - no challenge was permitted). 

While these five factors might be taken as the reasons that so many IT projects fail, they are in fact only the backdrop for the real reasons. Our focus needs to be directed to the fifth bullet point above - assumptions. Thus the five reasons we're focusing on here are related to why failure occurs after a project has been launched.

Even if a project wasn't launched under the best of circumstances, it still has a chance of succeeding if it is flexible enough. But what makes a project flexible? Is it the application of some Agile methodology or some other unique lifecycle management approach? Maybe not. Let's look at the real top five reasons that IT projects fail once they've begun:

  1. An inability to examine or challenge assumptions.
  2. A "silo" mentality.
  3. The inability to compromise.
  4. Technology-centric rather than capability centric perspective.
  5. Poor role definition and lack of project dynamics.

Let's look at these a little closer now...

1 - As noted previously, the inability or unwillingness to challenge assumptions is a huge issue on many projects. This is a problem precisely because of the manner in which many projects get spun up (the factors listed previously). This is a little different from the traditional dialog on 'requirements' that's heard when explaining Agile methodology, because in that conversation the focus is on the detailed design. Here the focus is on the big-picture assumptions, which in truth are much harder to face or deal with. In any project, there are perhaps fewer than 10 issues which will determine the success or failure of the effort. You may have another 10,000 detailed issues or requirements in a backlog, and each of those may be dealt with successfully yet still not save the project from failure if the core issues aren't addressed. Those core 10 or so issues are almost always associated with the initial assumptions connected to the project.

Those types of issues include things like the original intent of the solution, preliminary technology choices and solution architecture decisions as well as organizational approaches. There is no one correct mix of all these types of decisions that ensures a successful project and that's precisely why those decisions should be reexamined on a regular basis. In other words, we shouldn't view the first hectic weeks or months of a larger initiative as infallible exercises in perfect thought or planning, but for some reason, many people do.

2 - The Silo Mentality. This refers to projects that become over-compartmentalized; where each sub-unit views itself as more or less divorced or separate from every other. While everyone on a project needs to understand their roles (point 5), by the same token they must also all share responsibility for the outcome. There is nothing sadder and less justified than internal finger-pointing when failure occurs. This is a problem that is easily corrected with the right leadership and project structure. Whenever you begin hearing various players within a large project stating that issues relating to them are someone else's concern or responsibility, you know the project has developed this problem.

3 - The Inability to Compromise - We talked about this in a previous post from the perspective of team leadership; here compromise has an even broader context. Compromise here represents compromise between teams, by leaders - as well as compromise on large, tough issues. Lack of compromise is one of the chief symptoms of an 'inflexible' project. People will always have excuses and cite deadlines yet invariably where compromise fails to occur, deadlines are shot anyway and often the entire project ends up being cancelled.

4 - Technology-Centric Projects - This may sound funny coming from an IT geek; however, it is nonetheless valid. Let's look at Big Data as an example. What exactly does that new technology category mean to a business that wishes to jump onto the hype bandwagon? Does it imply faster data systems, better storage management or is it tied to specific Use Cases associated with that organization? If the expectations end up looking just like a Gartner article on who's doing what instead of Use Cases that illustrate how it can be exploited in your organization, then you might say you've got a technology-centric project. The fact that you think everyone else is doing something shouldn't drive your decisions. If you've been stuck with a hype-driven project, then use the initial period to re-tune it to ensure that it's aligned to your business. If you find that it's not, then have the courage to reverse course if necessary (the earlier, the better - you can always come back to it later).

5 - Poor Role Definition and Project Dynamics - How you build the project is usually just as important as the underlying assumptions behind it. Understanding roles and how those roles and groups ought to work with one another is important. For example, an architect and a tech lead are two very different roles - one owns the technology, the other designs the solution (completely different perspectives). Yet many organizations regularly confuse those two roles. Also, there's often no clarity in regard to who has oversight or leadership across groups. This last point relates to 'project dynamics.' A flexible project still needs leadership - but leaders who act as facilitators rather than strict commanders. And the folks they lead need to be able to form working groups that have clear purposes and escalation paths to ensure corrective action can occur when needed (and not just on the little issues).

There are of course other factors involved in the failure of complex IT projects, but a large chunk of what tends to happen can be mapped to the issues raised in this article. In a future post, we will look at some of the warning signs associated with a project 'at risk.'

Copyright 2013, Stephen Lahanas

Saturday, July 13, 2013

The Top 10 Tips for Team Leadership

There is a lot written about leadership and quite frankly most of it isn't too helpful. In some ways leadership is a fairly simple proposition as long as you approach it in a straightforward manner. Like most things, being an effective leader involves following a consistent and concise set of principles. I suppose you could list hundreds of these guidelines but the reality is you only need a handful – and of course only having a few ensures that you’ll probably remember them when you need to.

Knowing which way to go - how does one achieve it?
In Information Technology, the most important leadership role isn't generally the CIO; it starts at a much lower level – with a team. Teams can exhibit the following characteristics:

  • Teams can be formal or informal.
  • Teams can be large (perhaps extending to hundreds of members) or very small – say 3 people.
  • Teams can be local or geographically distributed across the globe. 
  • Teams can be open-ended, project-based or problem-focused – the latter two imply definite end dates which when reached results in disbanding the group. 
  • Teams can be business-focused, technology-focused or both. 
  • Teams can be focused on solution definition or solution development or both.
  • Teams can be targeted to organizational segments or divisions/departments or can be enterprise-wide in nature.
  • Teams can be productive or distracting.

The last point of course is generally due to whether or not the team has an effective leader.

The top ten tips for effective team leadership include:

  1. Efficient Facilitation – This needs to be explained a bit. The fact that a team is communicating is not enough – the communication occurring needs to be productive communication. To be productive – communication generally needs to be framed or directed. This begins when the group is launched but must continue for the life of the group. Yes, it’s somewhat like moderation and for a globally distributed team there may be a lot of moderation involved. But, usually there is some direct interaction between the participants and directed communication goes well beyond typical moderation tasks. Facilitation also includes the ability to elicit the participation of the team members and the ability to keep their motivation level high.
  2. Willingness to Delegate – This is one of the first and worst mistakes that new managers or leaders tend to make. People who have a lot of expertise in an area are often very self-reliant and like to get things done on their own without the hassle of dealing with some type of workgroup. However, if you’re doing all the work yourself then you in effect don’t really have a team.
  3. Being able to Identify Key Issues – Being able to determine what is and what isn't important is harder than it seems. When dealing with a group, some things may be very important to certain people and not at all important to others. The team leader must determine what is important for the group as a whole to tackle within the defined context of the group’s charter.
  4. The Ability to Make Decisions - This tip is a little bit tricky as it refers not just to the leader’s ability to make decisions but for the group’s ability to do so. In general, it’s not very effective for the team leader to make all of the decisions on behalf of the group. On the other hand, a team that can’t decide anything has little value. 
  5. The Ability to Achieve Consensus– This is not just limited to the group. Generally a team must reconcile its efforts with other external groups; this can be especially problematic in situations where your team is dependent on other groups. 
  6. Demonstrate the Ability to Compromise – This sounds like we’re talking about Congress and in fact that might be a good metaphor for a team – or perhaps a dysfunctional one. There will always be situations where a team may be driven by competing needs or motivations; in those cases compromise is the difference between having a productive team or not. The team leader must be able to compromise themselves before asking anyone else to compromise.  
  7. Have a sufficient level of subject matter expertise in the area where you’re leading – This is one tip some might find controversial. However this advice makes perfect sense if you think about it for a moment. Without having some level of expertise in the subject it will become difficult for a team leader to facilitate the meeting, identify key issues and make key decisions. It places the team leader in a vulnerable position in that he or she must become highly dependent on other team members to carry out the basic leadership functions. 
  8. Be a Peacemaker – As a team leader, you may have to respond to situations where one or more of the group members becomes disruptive or tries to dominate the conversation. This requires some political skill in being able to redirect the group when it gets sidetracked and in being able to defuse potentially tense situations. There are any number of techniques that can be used to achieve those goals, but of course the team leader must know when and how to apply them. It is also important to remember that sometimes in order to maintain peace, someone will need to adjust their behavior – in other words, peace at any cost is not a good policy – sometimes participants will be unhappy by the result or have to leave the team. Don’t be afraid to make the tough decisions in a timely manner.
  9. Excel in Time Management – This is related to many of the tips already listed but has its own dimension as well. Teams that work well generally work to deadlines and tend to utilize every moment of meeting time wisely. The team leader sets all these types of expectations in the way the meeting is run, how it is planned and how all tasks related to the meeting are coordinated with the group’s overall mission.
  10. Demonstrate the Ability to Acknowledge Contributions – The whole point of a team, any team, is to harness the collective value of its members. This involves both being able to recognize individual contributions as well as the synergistic output of the combined group. This requires the ability to submerge one’s own ego and focus on harvesting the best ideas in a completely objective manner.

These tips can be applied to just about any type of team or project context. They seem like common sense yet it is remarkable how few team leaders actually practice all ten of these capabilities consistently. It’s often easy to get caught up in the issues your team is tasked to solve and forget about the processes necessary to ensure that the team actually succeeds.

Copyright 2013, Stephen Lahanas

Wednesday, July 10, 2013

Redefining the Chief Innovation Officer

The role of Chief Innovation Officer has been around for more than a decade but it hasn’t yet been widely adopted across organizations. In fact, there is quite a bit of confusion as to how the CTO, CIO (information officer), Strategy and Innovation ought to relate to one another. Many organizations combine all these roles within the Chief Information Officer job description. While combining the CTO and CIO roles together makes sense, adding strategy and innovation to the duties of one role is much less effective for the following reasons:

  1. Because in most cases it simply adds too much workload to one person.
  2. Because a CIO/CTO is often siloed off away from top leadership or the business community. This happens because many organizations want the CIO/CTO to focus solely on delivering IT capability rather than helping to define how technology may help the organization evolve.
  3. Because innovation is often directly at odds with management of existing technology. In other words, in some cases the IT department is the group least open to adopting innovations.

Another big problem with the role of Chief Innovation Officer (besides the fact that it shares the same acronym as the Chief Information Officer) is that there is no standard industry definition for what it encompasses. Many of the definitions out there today are somewhat contradictory and worse yet most of them are incredibly vague when it comes to defining what duties this role would perform. I’m hoping to help correct that with the following definition (we’ll start by changing the acronym):

Chief (or Director) of Innovation (CI) - The Chief of Innovation is responsible for facilitating the convergence and evolution of business goals with technical capability. The CI acts as a liaison and link between the business and IT groups within the enterprise. The CI helps to define strategy, envision solutions and facilitate complex initiatives. The specific duties of the CI include the following:

  • Definition of Techno-functional strategy and Vision Statement/s. 
  • Alignment of business objectives to enterprise roadmaps.
  • Product Definition and Design. (this can include internal or external capabilities that aren't marketed as products)
  • Management of an organizational “Continuous Innovation” process (includes Ideation - I will write a post about this soon).
  • Research and Development planning & oversight.
  • Industry-level coordination and outreach. (for all things innovation-related)
  • Start-up management or facilitation (for initiatives groups, products etc.)

The CI role probably also requires us to define what we mean by Innovation in an enterprise context:
Innovation is the set of technology-driven activities or capabilities that represent revolutionary or potentially disruptive improvements in business practice. This implies that ordinary improvements such as adding new servers (similar to those already present), or upgrading software that is already owned is not innovation per se but rather represents steady state enhancements.  
Innovation tends to go hand in hand with one or more organizational Transformations. Transformations can be business or technology-focused or both. An example of a transformation might be moving from custom software to a packaged ERP solution or integrating all the organization's social media-related capabilities into a unified digital presence. Each of these examples would have profound impacts on the nature of the business (in terms of what it could accomplish and how it interacts with its consumers). 
You’ll note that I have linked all Innovation to Technology. The reason for this is simple – it is almost impossible to envision any significant innovation today without technology playing a major role in it. The Chief of Innovation role will become increasingly important in coming years. The CI is in effect the Chief Incubator – the change agent for the organization and potentially the one who demonstrates proofs of concept. The CI is not constrained by IT’s perceived limitations but also can temper business zeal with a pragmatic understanding of what can or can’t be accomplished.

Wednesday, June 19, 2013

Joomla versus Wordpress

Deploying and maintaining websites used to be quite the ordeal, especially if you had content-intensive requirements or clients who wanted to be able to update key design features such as navigation on a regular basis. While approaches for managing that process improved from the early 90’s through the early 2000’s, there was still a great deal of manual coordination and updating required. Changes to scripts deployed inline needed to be made in every page deployed. Also, sometimes the page's design rendering was inconsistent from one section of a site to another - this too required polishing to get just right. The addition of CSS (cascading style sheets) and external scripts helped, but pre-CMS website development was still a significant challenge – and this was for sites with relatively simple expectations for application or database support.

Old Timey Web development has a strange connection to Married with Children - there may be a similar link between Mobile development and Modern Family...

In the late 1990s and early 2000’s, SharePoint and a whole slew of commercial portal products hit the market, targeted primarily at enterprise users. These types of products offered the chance to build sites without quite so much manual design, but the products soon became fairly specialized in their support for integration with larger back-end systems. That specialization and the cost involved with this class of system made them unappealing or unrealistic for the majority of website developers. At the same time all of this was occurring, the web development community had amassed an ever greater array of applications that could be plugged into site development. Many of these apps at first were built using CGI and later PHP became the standard (although many languages have been used including .net). By 2005, there were several groups that had effectively combined a number of these apps into coherent platforms – those platforms shared the same core characteristics:
  1. Holistic control panel interfaces (not unlike those used by web hosting providers).
  2. Content management features – this was focused around page publication and abstraction of content development from page deployment.
  3. Style blocks – the ability to segment off panels or blocks to position specific content or application features.
  4. Application support – development of core code and APIs for interaction.
  5. Shared data structure – single instance DB to support platform and all applications.
Perhaps most importantly, this new breed of Web Content Management Systems (WCMS) became community-based and open source. This had two immediate and powerful effects:
  • First, it made these platforms universally available to all and any web developers for any purpose they could imagine.
  • It ensured a growing set of application plugins designed to run on that platform would be available. For anyone who had worked with the more expensive portal software or even SharePoint, this difference is striking. The lack of a community tends to translate into limited capability and an overall high cost of ownership as most added features end up being custom coded.
So, what is a WCMS anyway? At first glance the title seems a bit inaccurate – in 95% of the cases – a WCMS is not used as a Content Management System. A WCMS is more like a web publishing platform that happens to support CMS features (to facilitate content development and deployment). Not too long ago there was another class of software that was referred to as Web Publishing – it dealt primarily with management of complex, magazine-like sites (software such as Interspire). Most of those packages have come and gone and now the definition of what constitutes "Web Publishing" has more or less merged with social media (Twitter and Blogs are considered web publishing as well, yet management of these happens in the Cloud at the host and is less complex).

For our purposes though, Web Publishing within the WCMS involves very comprehensive features (some might say almost limitless) for design control and application extension. So, for all intents and purposes, the platforms have become ubiquitous for web development – supporting just about any Use Case that you can think of – everything from a simple site, to a community forum, to a magazine or an online store. You can do all that and more, and in a fraction of the time it previously required.

Value Proposition of the WCMS

Why is all of that important? Well, adoption of a WCMS translates into the following value proposition for most web developers:
  1. These platforms support rapid prototyping – getting the site up and running actually becomes easier than the style-sheet work – although with the superstructure up – it becomes feasible to highlight many more design options simultaneously.  
  2. Using these platforms allows smaller shops or teams to handle more simultaneous work (in most cases).
  3. Using a WCMS allows for plug and play application features (and potentially this could extend to all application features of a site). Custom configuration may still be required but the level of effort is an order of magnitude smaller. 
  4. Setting up training and maintenance processes for any sites developed using these platforms can be consolidated to one effort (e.g. the first manual and training class you prepare becomes the template for all others with relatively little rework required).
  5. The mobile strategy and main site can be planned and rolled out in tandem. 
Now that we've talked ourselves into a WCMS, what next? Well, the obvious next question is which one ought to be used. There are quite a few WCMSs out there, but the vast majority of all users have adopted one of four (open source) tools right now:
  • WordPress (most popular)
  • Joomla (second most popular)
  • Drupal (probably third)
  • DotNetNuke (in the running).
So, we've already narrowed down the options based on the first criterion, popularity. Popularity is important when considering open source software for the reasons we mentioned previously:
  1. It implies a more dynamic community which means the software will keep getting better and more comprehensive.
  2. It also implies that we will have an ever growing pool of new (plugins or extension) apps to choose from.  
For the purposes of this article, we’re going to make the first round selection based entirely on that criterion, and now the choice is narrowed to the top two: Joomla versus WordPress.

WordPress – This WCMS began life as a blog platform in 2003. The first CMS features were deployed across two releases in 2005 and by 2008 WordPress had shifted from being primarily blog-focused to being a full-featured CMS. One of the key differences between WordPress and Joomla historically had been that many of the WordPress blogs / sites began life within WordPress's hosting environment (what we’d now call the Cloud). WordPress is used on 60 million websites (although a good percentage of those are purely blogs with little functionality used – more or less equivalent to

An example of a Joomla-based solution (circa 2011)
Joomla – Joomla began life as a commercial product called Mambo in 2000. Shortly thereafter, an open source version of Mambo was released and eventually the Mambo project forked into other projects including Joomla, which launched in 2005. Mambo / Joomla was built to be a CMS from day 1. There have been approximately 30 million downloads of Joomla since 2005.

Both platforms support very large feature sets, although evaluation just on the competing lists doesn't tell the whole story – in other words, they both look equally impressive on paper. Before we can rate them however we’ll need to add our remaining criteria; those are:

  • Ease of Administration – this is generally typified by the admin console but can extend to other features or considerations.
  • Design Flexibility – This relates more to what is possible from a design perspective.
  • Design Ease of Use – This relates more to how hard it is to execute those designs.
  • Content Management – Each CMS has its own idiosyncrasies.
  • Application Functionality (extensibility) and its integration within the platform – Sometimes the same apps developed for WordPress and Joomla behave differently on each platform. Also, there are some apps you can find for one but not the other.

So, given that popularity has already been factored out, we'll assign a 20% weight to each of the criteria listed above and rate each area from 1 to 5 (1 lowest, 5 highest). Here's our score card:

  • Ease of Administration
  • Design Flexibility
  • Design Ease of Use
  • Content Management
  • Application Functionality

Totals: Joomla – 22 (of 25), 88%; WordPress – 15 (of 25), 60%.
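The roll-up can be sketched in a few lines of Python. The per-criterion ratings below are purely hypothetical (only the totals of 22 and 15 appear in this article); the point is just to show how a 20%-per-criterion, 1-to-5 score card totals up:

```python
# Weighted score card sketch. Per-criterion ratings are hypothetical;
# only the totals (Joomla 22/25, WordPress 15/25) come from the article.
CRITERIA = [
    "Ease of Administration",
    "Design Flexibility",
    "Design Ease of Use",
    "Content Management",
    "Application Functionality",
]

def score(ratings: dict) -> tuple:
    """Sum 1-5 ratings (one per criterion, each worth 20%) into a total and a percentage."""
    total = sum(ratings[c] for c in CRITERIA)
    return total, 100.0 * total / (5 * len(CRITERIA))

# Hypothetical ratings that happen to reproduce the article's Joomla total:
joomla = {
    "Ease of Administration": 5,
    "Design Flexibility": 5,
    "Design Ease of Use": 4,
    "Content Management": 4,
    "Application Functionality": 4,
}
print(score(joomla))  # (22, 88.0)
```

Because every criterion carries the same 20% weight, the weighted score reduces to a simple sum out of 25; unequal weights would require multiplying each rating by its weight before summing.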

There are those who might say ratings are determined in part by which tool you used first. In our case this doesn't apply: over the past 15 years we've worked with more than a dozen different CMS and portal products, plus hundreds of similar scripts that were eventually pulled together to build these types of WCMS platforms. For us, the true measurement is how long it takes to build using one platform versus another, and then how much of that is due to the platform itself (as opposed to unique requirements). Those of us who work in IT and have had to evaluate dozens or even hundreds of different tools over our careers develop a certain level of objectivity when contrasting technology.

So what do these ratings really mean? Here are some associated observations:

  • In the most important metric – time to develop / complete / deploy a site – Joomla on average took at least 25% less time than WordPress.
  • The WordPress multiple blog (Network feature) is still not quite ready for prime time.
  • WordPress was built first for blogs and it shows: the design metaphor is a little clunky, and the widgets (design blocks) can take on a life of their own – making design much more time-consuming than it should be.
  • You’re more likely to run into application / widget compatibility issues on WordPress.
  • WordPress does, though, seem to be getting some apps that aren't being made available on Joomla; some of these are more enterprise-focused, which is a serious long-term issue for Joomla (if it is to remain competitive).
  • In Joomla, management of page (CMS) taxonomy is definitely more complicated than in WordPress; however, this is a relatively small part of the overall web development effort and isn't too hard to get used to.
  • Both WCMSs have suffered major security flaws in the past, and this will likely continue to be the case (though it isn't nearly as complex as dealing with the Microsoft stack and SharePoint).
  • One of the biggest issues we ran into with WordPress was the embedded editing capability (and errors associated with it). The editors (which can be added as extensions to the administration consoles) seem to perform better on Joomla – this is a big deal from a content management standpoint.
  • Overall, we found it harder to achieve some of the core design functionality in WordPress than in Joomla (everything from placement of blocks to the ability to run content sliders, etc.).
  • The application update notifications in WordPress are nifty, but don’t completely make up for other failings (in getting them to work properly for example).
  • The WordPress blog moderation console is pretty cool and shows where WordPress really excels. However, for many web Use Cases, this is unnecessary.

Bottom line – in the WordPress v. Joomla battle, Joomla wins in most categories (in situations where an organization hasn't already invested a lot of time in one or the other). How long this will remain the case is hard to tell, but based on our review it would probably require several major architectural changes in the core WordPress platform to complete the shift in its evolution from blog platform to WCMS.

Copyright 2013, Stephen Lahanas

Tuesday, June 18, 2013

Deconstructing Time part 7 - Mach's Principle

Before we introduce Mach or Relativity, let's step back for a moment and recap what we've covered so far. We began with a Use Case - how do we find coordinates in space-time? This is a Use Case that could be applied to Astronomy, to GPS scenarios, to writing Science Fiction stories or movies, and potentially to many other situations. We explained how we would take an outsider's perspective using what is essentially an IT methodology for problem solving. Then we began introducing key concepts:

  1. Time is inherently connected to motion.
  2. Time seems to be constructed of dynamic events.
  3. Those events exist within frames of reference - those frames define the parameters for simultaneity for the events within. 
  4. Motion occurs (and affects time) not just in our frames of reference but in a cascading chain of ever smaller and ever larger frames - from the subatomic to the universal scale.
  5. The universe seems to be composed of matter, energy and the space in between these elements in motion. Light is not matter but shares wave behavior with matter. Energy, such as light, can also be broken into individual particles, just as matter can.

All of this is a preface to the more complex theories and concepts we're about to explore. We still perhaps don't understand what time is, other than through the abstract event / frame paradigm that's been presented. This is a good point to ask some questions about time again:

  • Is it a physical construct or a behavioral manifestation resulting from physical structures?
  • Is it merely a human-perceived tool for measuring experience and the physical world around us?
  • Is it an absolute, fixed constant of some sort or is it flexible? (and if it is a constant, is it tied to other constants?)
  • Lastly, if we were to view events as "time particles," can we say that time exhibits wave behavior?

The final question is interesting in that we might then be able to visualize time in a more robust sense - as spreading ripples or waves rather than straight linear progression (the metaphor Stephen Hawking used was 'Time's Arrows'). We'll come back to this later.

Ernst Mach - better known for defining the speed limit for your jet...
What is Mach's Principle? Well, it was just a handful of pages dedicated to explaining what was more of an observation or question than a theory. The principle goes something like this:
In his book The Science of Mechanics (1893), Ernst Mach put forth the idea that it did not make sense to speak of the acceleration of a mass relative to absolute space. Rather, one would do better to speak of acceleration relative to the distant stars. What this implies is that the inertia of a body here is influenced by matter far distant. A very general statement of Mach's principle is "Local physical laws are determined by the large-scale structure of the universe." 
Here's another description:
Mach’s Principle assumes, among other things, that “a particle’s inertia is due to some (unfortunately unspecified) interaction of that particle with all the other masses in the universe; the local standards of nonacceleration are determined by some average of the motions of all the masses in the universe, [and] all that matters in mechanics is the relative motion of all the masses.” Furthermore, “Mach’s Principle actually implies that not only gravity but all physics should be formulated without reference to preferred inertial frames (artificially defined motion contexts).  It advocates nothing less than the total relativity of physics.  As a result, it even implies interactions between inertia and electromagnetism.”
At this point we also need to introduce another new concept - "Action at a Distance" (sometimes referred to as 'spooky action').
In physics, action at a distance is the nonlocal interaction of objects that are separated in space. This means, potentially, that particles here on Earth could interact with other particles across the galaxy or universe without being constrained by the speed of light. This is also what drives Quantum Entanglement.
Confused yet? No worries - so is everyone else...
So, Mach's Principle effectively opened the door both to Relativity and to Quantum physics. How does it relate to our discussion, though? As we will see shortly, it introduced the notion of relativity to time as well as to motion. More importantly, it begins to highlight how, in a complex-systems view of the universe, there are multiple competing variables that must be considered when calculating exact measures of space-time. Let's say we're writing a Science Fiction story that needs a reasonable explanation for how teleportation works. Any time a character is beamed up to a ship or down to a planet, there would need to be a vastly superior ability to locate space-time coordinates. With today's military-grade GPS, we can get within inches or centimeters of locating a coordinate. However, that's not good enough: two inches too low might mean our character would have his feet "materialize" within solid rock. More exact measurements, down to the micron (one millionth of a meter) level, would be needed to make the technology safe (and it would have to compensate for scenarios where the target ground is uneven, etc.).

Now it's time to define Relativity, we'll start with Special Relativity:
Einstein's theory of special relativity is fundamentally a theory of measurement. He qualified the theory as "special" because it refers only to uniform velocities (meaning to objects either at rest or moving at a constant speed). In formulating his theory, Einstein dismissed the concept of the "ether," and with it the "idea of absolute rest." Prior to the generation of Einstein's theory of special relativity, physicists had understood motion to occur against a backdrop of absolute rest (the "ether"), with this backdrop acting as a reference point for all motion. In dismissing the concept of this backdrop, Einstein called for a reconsideration of all motion. According to his theory, all motion is relative and every concept that incorporates space and time must be considered in relative terms. This means that there is no constant point of reference against which to measure motion. Measurement of motion is never absolute, but relative to a given position in space and time. 
Special Relativity is based on two key principles:

  • The principle of relativity: The laws of physics don’t change, even for objects moving in inertial (constant speed) frames of reference.
  • The principle of the speed of light: The speed of light is the same for all observers, regardless of their motion relative to the light source. (Physicists write this speed using the symbol C.)

Contrary to popular belief, the formula E = mc² was not published as part of Einstein's Theory of Special Relativity; it was instead published the same year (1905) in a separate paper (Einstein published four papers that year). Of particular interest to Einstein in his development of Special Relativity was the ability to apply a coordinate system to space-time (this will sound familiar, as we've already introduced these concepts earlier, outside the discussion of Relativity):
An event is a given place at a given time. Einstein, and others, suggested that we should think of space and time as a single entity called space-time. An event is a point p in space-time. To keep track of events we label each by four numbers: p = (t,x,y,z), where t represents the time coordinate and x, y and z represent the space coordinates (assuming a Cartesian coordinate system).
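The p = (t, x, y, z) labeling scheme is easy to make concrete. Here's a minimal sketch; the `Event` type and `separation` helper are our own illustrative names, with nothing assumed beyond the four coordinates described above:

```python
from typing import NamedTuple

class Event(NamedTuple):
    """A point p = (t, x, y, z) in space-time (Cartesian space coordinates)."""
    t: float  # time coordinate
    x: float
    y: float
    z: float

def separation(p: Event, q: Event) -> Event:
    """Coordinate-wise difference between two events, measured in the same frame."""
    return Event(q.t - p.t, q.x - p.x, q.y - p.y, q.z - p.z)

here_now = Event(t=0.0, x=0.0, y=0.0, z=0.0)
later_there = Event(t=2.0, x=3.0, y=0.0, z=0.0)
print(separation(here_now, later_there))  # Event(t=2.0, x=3.0, y=0.0, z=0.0)
```

Note that, per the relativity discussion that follows, these coordinate differences are only meaningful within a single reference frame; observers in different frames will disagree about them.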
Even physicists have a sense of humor 
Now let's define General Relativity...
There was a serious problem with Special Relativity, however: it was artificially constrained on several levels, the most important being that it doesn't address acceleration (it handles reference frames at rest or at constant speeds only). The other constraint was the speed of light, C. Einstein had initially sought to unify his theory with Maxwell's theory of electromagnetism (wave theory) - Maxwell had set C as a constant (that couldn't be surpassed) and Einstein carried that through. However, there was already evidence that this wasn't entirely accurate. The force responsible for the most obvious violations of C, and the force responsible in many cases for acceleration, was gravity...
It (General Relativity) states that all physical laws can be formulated so as to be valid for any observer, regardless of the observer's motion. Consequently, due to the equivalence of acceleration and gravitation, in an accelerated reference frame, observations are equivalent to those in a uniform gravitational field.
This led Einstein to redefine the concept of space itself. In contrast to the Euclidean space in which Newton’s laws apply, he proposed that space itself might be curved. The curvature of space, or better space-time, is due to massive objects in it, such as the sun, which warp space around their gravitational centre. In such a space, the motion of objects can be described in terms of geometry rather than in terms of external forces. For example, a planet orbiting the Sun can be thought of as moving along a "straight" trajectory in a curved space that is bent around the Sun.
The most important concept, from our perspective, in both of these theories is time dilation (referred to as gravitational time dilation in General Relativity):
Gravitational time dilation is an actual difference of elapsed time between two events as measured by observers differently situated from gravitational masses, in regions of different gravitational potential. The lower the gravitational potential (the closer the clock is to the source of gravitation), the more slowly time passes. Albert Einstein originally predicted this effect in his theory of Special Relativity and it has since been confirmed by tests of general relativity. 
Another way to think of time dilation is this: the faster you travel (now we're dealing with acceleration), the slower your time progresses compared to those you left behind at home. For science fiction fans, one of the few movies that remains true to this otherwise plot-busting feature of modern physics is Planet of the Apes. In that plot, astronauts leaving Earth around 1973 travel at near the speed of light for 18 months and end up back on Earth some 2,000 years later. Of course, it is unlikely that we'd experience the 'damn dirty ape paradox' if we attempted such a trip.
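That Planet of the Apes arithmetic follows directly from the Lorentz factor of Special Relativity. Here's a rough sketch in Python; the specific speed is an assumption on our part, chosen so that 18 months of ship time corresponds to roughly 2,000 Earth years:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v: float) -> float:
    """Time dilation factor gamma = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def home_elapsed_years(ship_years: float, v: float) -> float:
    """Years that pass at home while the traveler experiences ship_years at speed v."""
    return ship_years * lorentz_factor(v)

# 18 months (1.5 years) of ship time at ~0.99999972c, an assumed figure:
v = 0.99999972 * C
print(round(home_elapsed_years(1.5, v)))  # roughly 2,000 years pass on Earth
```

Notice how steep the curve is: gamma stays close to 1 for everyday speeds and only produces a factor in the thousands within a whisker of C, which is why the effect never troubles ordinary travelers.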

Apes don't kill apes but they will take over if we leave for a few thousand years...
So, let's recap again what this all means:

  1. Several incredible and somewhat counter-intuitive insights transformed modern physics, starting just over a century ago with Mach's Principle (this isn't to say that other folks like Maxwell didn't provide brilliant insights, but for our investigation Mach's was particularly important).
  2. This led to the determination that the universe behaves according to certain common laws, yet those laws support relative perspectives of outcomes.
  3. The notion of space-time as both a coordinate system and as a curved (multi-dimensional) geometry emerged. 
  4. Because of all of this (and more), it was determined that our perception of time differs based upon which reference frame we inhabit. The differences in relative temporal perception are mostly due to the nature of the motion involved (for one or many of the participants).
  5. Gravity, which was previously a mysterious force, now becomes part of space-time itself under General Relativity and, interestingly, also seems to exhibit wave behavior.

It is often said that General Relativity is the last classical theory of Physics as it is the last major one that doesn't involve Quantum mechanics. Relativity takes us much further down the path towards explaining time - but it stops short of doing so adequately.

In our next post, we'll point out why Relativity falls short and introduce some of the foundational concepts of Quantum Mechanics.

Copyright 2013, Stephen Lahanas