
Wednesday, June 19, 2013

Joomla versus Wordpress

Deploying and maintaining websites used to be quite the ordeal, especially if you had content-intensive requirements or clients who wanted to be able to update key design features, such as navigation, on a regular basis. While approaches for managing that process improved from the early 90’s through the early 2000’s, a great deal of manual coordination and updating was still required. Changes to scripts deployed inline had to be made in every deployed page. Page rendering was also sometimes inconsistent from one section of a site to another, and this too required polishing to get just right. The addition of CSS (cascading style sheets) and external scripts helped, but pre-CMS website development was still a significant challenge – and this was for sites with relatively simple expectations for application or database support.

Old Timey Web development has a strange connection to Married with Children - there may be a similar link between Mobile development and Modern Family...

In the late 1990s and early 2000’s, SharePoint and a whole slew of commercial portal products hit the market, targeted primarily at enterprise users. These products offered the chance to build sites without quite so much manual design, but they soon became fairly specialized in their support for integration with larger back-end systems. That specialization, and the cost involved with this class of system, made them unappealing or unrealistic for the majority of website developers. While all of this was occurring, the web development community had amassed an ever greater array of applications that could be plugged into site development. Many of these apps were at first built using CGI; later, PHP became the standard (although many languages have been used, including .NET). By 2005, several groups had effectively combined a number of these apps into coherent platforms – platforms that shared the same core characteristics:
  1. Holistic control panel interfaces (not unlike those used by web hosting providers).
  2. Content management features – this was focused around page publication and abstraction of content development from page deployment.
  3. Style blocks – the ability to segment off panels or blocks to position specific content or application features.
  4. Application support – development of core code and APIs for interaction.
  5. Shared data structure – single instance DB to support platform and all applications.
Perhaps most importantly, this new breed of Web Content Management Systems (WCMS) became community-based and open source. This had two immediate and powerful effects:
  • First, it made these platforms universally available to any and all web developers, for any purpose they could imagine.
  • Second, it ensured that a growing set of application plugins designed to run on each platform would be available. For anyone who had worked with the more expensive portal software, or even SharePoint, this difference is striking. The lack of a community tends to translate into limited capability and a high overall cost of ownership, as most added features end up being custom coded.
So, what is a WCMS anyway? At first glance the title seems a bit inaccurate – in 95% of cases, a WCMS is not used as a Content Management System. A WCMS is more like a web publishing platform that happens to support CMS features (to facilitate content development and deployment). Not too long ago there was another class of software referred to as Web Publishing – it dealt primarily with management of complex, magazine-like sites (software such as Interspire). Most of those packages have come and gone, and the definition of what constitutes "Web Publishing" has now more or less merged with social media (Twitter and blogs are considered web publishing as well, although these are hosted in the Cloud and far less complex to manage).

For our purposes though, Web Publishing within the WCMS involves very comprehensive features (some might say almost limitless) for design control and application extension. So, for all intents and purposes, the platforms have become ubiquitous for web development – supporting just about any Use Case that you can think of – everything from a simple site, to a community forum, to a magazine or an online store. You can do all that and more, and in a fraction of the time it previously required.

Value Proposition of the WCMS

Why is all of that important? Well, adoption of a WCMS translates into the following value proposition for most web developers:
  1. These platforms support rapid prototyping – getting the site up and running actually becomes easier than the style-sheet work, and with the superstructure in place it becomes feasible to showcase many more design options simultaneously.
  2. Using these platforms allows smaller shops or teams to handle more simultaneous work (in most cases).
  3. Using a WCMS allows for plug and play application features (and potentially this could extend to all application features of a site). Custom configuration may still be required but the level of effort is an order of magnitude smaller. 
  4. Setting up training and maintenance processes for any sites developed using these platforms can be consolidated to one effort (e.g. the first manual and training class you prepare becomes the template for all others with relatively little rework required).
  5. The mobile strategy and main site can be planned and rolled out in tandem. 
Now, that we've talked ourselves into a WCMS, what next? Well, the obvious next question is which one ought to be used. There are quite a few WCMSs out there, but the vast majority of all users have adopted one of four (open source) tools right now:
  • WordPress (most popular)
  • Joomla (second most popular)
  • Drupal (probably third)
  • DotNetNuke (in the running).
So, we've already narrowed down the options based on the first criterion: popularity. Popularity is important when considering open source software for the reasons we mentioned previously:
  1. It implies a more dynamic community which means the software will keep getting better and more comprehensive.
  2. It also implies that we will have an ever growing pool of new (plugins or extension) apps to choose from.  
For the purposes of this article, we’re going to make the first-round selection based entirely on that criterion, and now the choice is narrowed to the top two: Joomla versus WordPress.

WordPress – This WCMS began life as a blog platform in 2003. The first CMS features were deployed across two releases in 2005, and by 2008 WordPress had shifted from being primarily blog-focused to being a full-featured CMS. One of the key differences between WordPress and Joomla historically had been that many WordPress blogs / sites began life within WordPress's hosting environment (what we’d now call the Cloud). WordPress is used on some 60 million websites (although a good percentage of those are purely blogs with little added functionality).

An example of a Joomla-based solution (circa 2011)
Joomla – Joomla began life as a commercial product called Mambo in 2000. Shortly thereafter, an open source version of Mambo was released, and eventually the Mambo project forked into other projects, including Joomla, which launched in 2005. Mambo / Joomla was built to be a CMS from day 1. There have been approximately 30 million downloads of Joomla since 2005.

Both platforms support very large feature sets, although evaluation based just on the competing lists doesn't tell the whole story – in other words, they both look equally impressive on paper. Before we can rate them, however, we’ll need to add our remaining criteria; those are:

  • Ease of Administration – this is generally typified by the admin console but can extend to other features or considerations.
  • Design Flexibility – This relates more to what is possible from a design perspective.
  • Design Ease of Use – This relates more to how hard it is to execute those designs.
  • Content Management – Each CMS has its own idiosyncrasies.
  • Application Functionality (extensibility) and integration of that within the platform – Sometimes the same apps developed for WordPress and Joomla behave differently in either platform. Also, there are some apps you can find for one but not on the other.

So, given that popularity is already factored out, we could assign a 20% weight to each of the criteria listed above and we might rate each area from 1 to 5 (1 lowest, 5 highest). Here’s our score card:

  • Ease of Administration
  • Design Flexibility
  • Design Ease of Use
  • Content Management
  • Application Functionality

Joomla: 22 (of 25) – 88%
WordPress: 15 (of 25) – 60%
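The weighted scoring above is simple enough to reproduce in a short script. Note that the article only reports the two totals (22 and 15 out of 25); the per-criterion scores below are illustrative assumptions chosen to sum to those totals, not figures from the review itself.

```python
# Weighted score card for the WCMS comparison (each criterion worth 20%,
# rated 1-5). Per-criterion numbers are illustrative guesses; only the
# totals (22/25 and 15/25) come from the article.
CRITERIA = [
    "Ease of Administration",
    "Design Flexibility",
    "Design Ease of Use",
    "Content Management",
    "Application Functionality",
]

scores = {
    "Joomla":    [5, 4, 4, 4, 5],   # sums to 22
    "WordPress": [4, 3, 3, 3, 2],   # sums to 15
}

def rate(platform_scores, max_per_criterion=5):
    """Return (total, percentage) for a list of 1-5 criterion scores."""
    total = sum(platform_scores)
    best = max_per_criterion * len(platform_scores)
    return total, round(100 * total / best)

for name, s in scores.items():
    total, pct = rate(s)
    print(f"{name}: {total} (of 25) {pct}%")   # Joomla: 22 (of 25) 88%
```

With equal 20% weights the percentage is just total/25; unequal weights would only require replacing `sum()` with a weighted sum.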

There are those who might say ratings are determined in part by which tool you used first. In our case this doesn't apply, as over the past 15 years we've worked with more than a dozen different CMS and portal products, and hundreds of the similar scripts that were eventually pulled together to build these types of WCMS platforms. For us, the true measurement is how long it takes to build using one platform versus another, and then how much of that is due to the platform itself (as opposed to unique requirements). Those of us who work in IT and have had to evaluate dozens or even hundreds of different tools over our careers are able to develop a certain level of objectivity when contrasting technology.

So what do these ratings really mean? Here are some associated observations:

  • In the most important metric – time to develop / complete / deploy a site – Joomla on average took at least 25% less time than WordPress.
  • The WordPress multiple blog (Network feature) is still not quite ready for prime time.
  • WordPress was built first for blogs and it shows: the design metaphor is a little clunky, and the widgets (design blocks) can take on a life of their own – making design much more time-consuming than it should be.
  • You’re more likely to run into application / widget compatibility issues on WordPress.
  • WordPress does, though, seem to be getting some apps that aren't being made available on Joomla; some of these are more enterprise-focused, which is a serious long-term issue for Joomla (if it is to remain competitive).
  • In Joomla, management of page (CMS) taxonomy is definitely more complicated than in WordPress; however, this is a relatively small part of the overall web development effort and isn't too hard to get used to.
  • Both WCMSs have suffered major security flaws in the past, and this will likely continue to be the case (but isn't nearly as complex as dealing with the Microsoft stack and SharePoint).
  • One of the biggest issues we ran into with WordPress was the embedded editing capability (and errors associated with it). The editors (which can be added as extensions to the administration consoles) seem to perform better on Joomla – this is a big deal from a content management standpoint.
  • Overall, we found it harder to achieve some of the core design functionality in WordPress than in Joomla (everything from placement of blocks to the ability to run content sliders etc.)
  • The application update notifications in WordPress are nifty, but don’t completely make up for other failings (getting them to work properly, for example).
  • The WordPress blog moderation console is pretty cool and shows where WordPress really excels. However, for many web Use Cases, this is unnecessary.

Bottom line – in the WordPress v. Joomla battle, Joomla wins in most categories (in situations where an organization hasn't already invested a lot of time in one or the other). How long this will remain the case is hard to tell, but based on our review it would probably require several major architectural changes in the core WordPress platform to complete the shift in its evolution from blog platform to WCMS.

Copyright 2013, Stephen Lahanas

Tuesday, June 18, 2013

Deconstructing Time part 7 - Mach's Principle

Before we introduce Mach or Relativity, let's step back for a moment and recap what we've covered so far. We began with a Use Case: how do we find coordinates in space-time? This is a Use Case that could be applied to Astronomy, to GPS scenarios, to writing Science Fiction stories or movies, and potentially to many other situations. We explained how we would take an outsider's perspective, using what is essentially an IT methodology for problem solving. Then we began introducing key concepts:

  1. Time is inherently connected to motion.
  2. Time seems to be constructed of dynamic events.
  3. Those events exist within frames of reference - those frames define the parameters for simultaneity for the events within. 
  4. Motion occurs (and affects time) not just in our frames of reference but in a cascading chain of ever smaller and ever larger frames - from the subatomic to the universal scale.
  5. The universe seems to be comprised of matter, energy and the space in between these elements in motion. Light is not matter but shares wave behavior with matter. Energy, such as light, can also be broken into individual particles just as matter can.

All of this is a preface to the more complex theories and concepts we're about to explore.  We still perhaps don't understand what time is other than through the abstract event / frame paradigm that's been presented. This is a good point to ask some questions about time again:

  • Is it a physical construct or a behavioral manifestation resulting from physical structures?
  • Is it merely a human-perceived tool for measuring experience and the physical world around us?
  • Is it an absolute, fixed constant of some sort, or is it flexible? (and if it is a constant, is it tied to other constants?)
  • Lastly, if we were to view events as "time particles," can we say that time exhibits wave behavior?

The final question is interesting in that we might then be able to visualize time in a more robust sense - as spreading ripples or waves rather than straight linear progression (the metaphor Stephen Hawking used was 'Time's Arrows'). We'll come back to this later.

Ernst Mach - better known for defining the speed limit for your jet...
What is Mach's Principle? Well, it was just a handful of pages explaining what was more an observation or question than a theory. The principle goes something like this:
In his book The Science of Mechanics (1893), Ernst Mach put forth the idea that it did not make sense to speak of the acceleration of a mass relative to absolute space. Rather, one would do better to speak of acceleration relative to the distant stars. What this implies is that the inertia of a body here is influenced by matter far distant. A very general statement of Mach's principle is "Local physical laws are determined by the large-scale structure of the universe." 
Here's another description:
Mach’s Principle assumes, among other things, that “a particle’s inertia is due to some (unfortunately unspecified) interaction of that particle with all the other masses in the universe; the local standards of nonacceleration are determined by some average of the motions of all the masses in the universe, [and] all that matters in mechanics is the relative motion of all the masses.” Furthermore, “Mach’s Principle actually implies that not only gravity but all physics should be formulated without reference to preferred inertial frames (artificially defined motion contexts).  It advocates nothing less than the total relativity of physics.  As a result, it even implies interactions between inertia and electromagnetism.”
At this point we also need to introduce another new concept - "Action at a Distance" (sometimes referred to as 'spooky action').
In physics, action at a distance is the nonlocal interaction of objects that are separated in space. This means, potentially, that particles here on Earth could interact with other particles across the galaxy or universe without being constrained by the speed of light. This is also what drives Quantum Entanglement.
Confused yet? No worries - so is everyone else...
So, Mach's Principle effectively opened the door both to Relativity and to Quantum physics. How does it relate to our discussion, though? As we will see shortly, it introduced the notion of relativity to time as well as to motion. More importantly, it begins to highlight how, in a complex systems view of the universe, there are multiple competing variables that must be considered when calculating exact measures of space-time. Let's say we're writing a Science Fiction story that needs a reasonable explanation for how teleportation works. Anytime a character is beamed up to a ship or down to a planet, there would need to be a vastly superior ability to locate space-time. With today's military-grade version of GPS, we can get to within inches or centimeters of a coordinate. However, that's not good enough: two inches too low might mean our character's feet "materialize" within solid rock. More exact measurements, down to the micron (one millionth of a meter) level, would be needed to make the technology safe (and it would have to compensate for scenarios where the target ground is uneven, etc.).

Now it's time to define Relativity; we'll start with Special Relativity:
Einstein's theory of special relativity is fundamentally a theory of measurement. He qualified the theory as "special" because it refers only to uniform velocities (meaning to objects either at rest or moving at a constant speed). In formulating his theory, Einstein dismissed the concept of the "ether," and with it the "idea of absolute rest." Prior to the generation of Einstein's theory of special relativity, physicists had understood motion to occur against a backdrop of absolute rest (the "ether"), with this backdrop acting as a reference point for all motion. In dismissing the concept of this backdrop, Einstein called for a reconsideration of all motion. According to his theory, all motion is relative and every concept that incorporates space and time must be considered in relative terms. This means that there is no constant point of reference against which to measure motion. Measurement of motion is never absolute, but relative to a given position in space and time. 
Special Relativity is based on two key principles:

  • The principle of relativity: The laws of physics don’t change, even for objects moving in inertial (constant speed) frames of reference.
  • The principle of the speed of light: The speed of light is the same for all observers, regardless of their motion relative to the light source. (Physicists write this speed using the symbol C.)

Contrary to popular belief, the formula E = mc² was not published as part of Einstein's Theory of Special Relativity, but was instead published the same year (1905) in a separate paper (Einstein published 4 papers that year). Of particular interest to Einstein in his development of Special Relativity was the ability to apply a coordinate system to space-time (this will sound familiar, as we've already introduced these concepts, though outside the discussion of Relativity):
An event is a given place at a given time. Einstein, and others, suggested that we should think of space and time as a single entity called space-time. An event is a point p in space-time. To keep track of events we label each by four numbers: p = (t,x,y,z), where t represents the time coordinate and x, y and z represent the space coordinates (assuming a Cartesian coordinate system).
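The event bookkeeping above can be made concrete with a small sketch. One standard construction (not spelled out in the quote, but implied by it) is the invariant interval s² = −(ct)² + x² + y² + z²: every uniformly moving observer, despite disagreeing about t, x, y and z individually, computes the same s² for a given event. Here is a minimal check in natural units (c = 1), with an illustrative event and boost velocity:

```python
import math

def interval_sq(t, x, y, z, c=1.0):
    """Spacetime interval squared: s^2 = -(c*t)^2 + x^2 + y^2 + z^2."""
    return -(c * t) ** 2 + x**2 + y**2 + z**2

def boost_x(t, x, y, z, v, c=1.0):
    """Lorentz boost along x: the same event as labelled by an observer
    moving at velocity v relative to the original frame."""
    gamma = 1.0 / math.sqrt(1 - (v / c) ** 2)
    t2 = gamma * (t - v * x / c**2)
    x2 = gamma * (x - v * t)
    return t2, x2, y, z        # y and z are unchanged by an x-boost

event = (2.0, 1.0, 0.5, -0.3)      # p = (t, x, y, z), arbitrary example
moved = boost_x(*event, v=0.6)     # as labelled by an observer at 0.6c

print(interval_sq(*event))         # ~ -2.66
print(interval_sq(*moved))         # same value: the interval is invariant
```

The two observers assign different coordinates to the same event, yet agree on s²; that agreement is what makes space-time a single entity rather than separate space and time.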
Even physicists have a sense of humor 
Now let's define General Relativity...
There was a serious problem with Special Relativity, however: it was artificially constrained on several levels, the most important of which is the fact that it doesn't address acceleration (it handles reference frames at rest or at constant speeds only). The other constraint was the speed of light, C. Einstein had initially sought to unify his theory with Maxwell's theory of electromagnetism (wave theory) - Maxwell had set C as a constant (that couldn't be surpassed) and Einstein carried that through. However, there was already evidence that this wasn't entirely accurate. The force responsible for the most obvious violations of C, and the force responsible in many cases for acceleration, was gravity...
It (General Relativity) states that all physical laws can be formulated so as to be valid for any observer, regardless of the observer's motion. Consequently, due to the equivalence of acceleration and gravitation, in an accelerated reference frame, observations are equivalent to those in a uniform gravitational field.
This led Einstein to redefine the concept of space itself. In contrast to the Euclidean space in which Newton’s laws apply, he proposed that space itself might be curved. The curvature of space, or better space-time, is due to massive objects in it, such as the sun, which warp space around their gravitational centre. In such a space, the motion of objects can be described in terms of geometry rather than in terms of external forces. For example, a planet orbiting the Sun can be thought of as moving along a "straight" trajectory in a curved space that is bent around the Sun.
The most important concept from our perspective in both of these theories is time dilation (which is referred to as Gravitational time dilation in General Relativity):
Gravitational time dilation is an actual difference of elapsed time between two events as measured by observers differently situated from gravitational masses, in regions of different gravitational potential. The lower the gravitational potential (the closer the clock is to the source of gravitation), the more slowly time passes. Albert Einstein originally predicted this effect in his theory of Special Relativity and it has since been confirmed by tests of general relativity. 
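This effect ties directly back to our GPS Use Case: satellite clocks must be corrected for both kinds of dilation. The back-of-the-envelope sketch below uses standard published values for Earth's gravitational parameter and the GPS orbit; the well-known net result is roughly +38 microseconds per day, which left uncorrected would accumulate kilometers of ranging error daily.

```python
import math

# Standard published values (assumed here, not taken from this post)
GM = 3.986004e14        # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8        # speed of light, m/s
r_earth = 6.371e6       # mean Earth radius, m
r_sat = 2.6571e7        # GPS orbital radius (~20,200 km altitude), m
day = 86400.0           # seconds per day

# Gravitational time dilation: satellite clocks sit higher in Earth's
# gravity well, so they run FAST relative to ground clocks (~ +45.7 us/day).
grav_shift = GM * (1 / r_earth - 1 / r_sat) / c**2 * day

# Velocity time dilation (Special Relativity): orbital motion makes the
# satellite clocks run SLOW relative to ground clocks (~ -7.2 us/day).
v = math.sqrt(GM / r_sat)                  # circular orbital speed, ~3.87 km/s
vel_shift = -(v**2 / (2 * c**2)) * day

net = grav_shift + vel_shift               # ~ +38.5 us/day
print(f"net clock drift: {net * 1e6:.1f} microseconds/day")
print(f"uncorrected range error: {net * c / 1000:.1f} km/day")
```

The two effects pull in opposite directions, but gravity wins at GPS altitude; the satellite clocks are deliberately tuned slow before launch to compensate.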
Another way to think of time dilation is this: the faster you travel (now we're dealing with acceleration), the slower your time progresses compared to those you left behind at home. For science fiction fans, one of the few movies that remains true to this otherwise plot-busting feature of modern physics is Planet of the Apes. In that plot, astronauts leaving Earth around 1973 travel at near the speed of light for 18 months and end up back on Earth 2,000 years later. Of course, it is unlikely that we'd experience the 'damn dirty ape paradox' if we attempted such a trip.
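The Planet of the Apes numbers are easy to check with the Lorentz factor γ = 1/√(1 − v²/c²): for 18 months of ship time to correspond to roughly 2,000 Earth years, γ must be about 1,333, which pins down just how close to the speed of light the ship would have to travel.

```python
import math

proper_time = 1.5        # years experienced aboard the ship (18 months)
earth_time = 2000.0      # years elapsed back on Earth

# Time dilation relation: earth_time = gamma * proper_time
gamma = earth_time / proper_time                  # ~1333
beta = math.sqrt(1 - 1 / gamma**2)                # v/c required

print(f"gamma = {gamma:.0f}")
print(f"v/c   = {beta:.9f}")                      # ~0.999999719
print(f"shortfall from c: {1 - beta:.2e}")        # ~2.8e-07
```

In other words, the ship must fall short of light speed by only about three parts in ten million for the movie's timeline to work out.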

Apes don't kill apes but they will take over if we leave for a few thousand years...
So, let's recap again what this all means:

  1. Several incredible and somewhat intuitive insights transformed modern physics, starting just over a century ago with Mach's Principle. (This isn't to say that other folks like Maxwell didn't provide brilliant insights, but for our investigation Mach's insight was particularly important.)
  2. This led to the determination that the universe behaves according to certain common laws, yet those laws support relative perspectives of outcomes.
  3. The notion of space-time as both a coordinate system and as a curved (multi-dimensional) geometry emerged. 
  4. Because of all of this (and more) it was determined that our perception of time is different based upon what reference frame we inhabit. The differences in relative temporal perception are mostly due to the nature of the motion involved (for one or many participants involved).   
  5. Gravity, which was a mysterious force before, now becomes part of space-time itself under General Relativity – and, interestingly, it also seems to exhibit wave behavior.

It is often said that General Relativity is the last classical theory of Physics as it is the last major one that doesn't involve Quantum mechanics. Relativity takes us much further down the path towards explaining time - but it stops short of doing so adequately.

In our next post, we'll point out why Relativity falls short and introduce some of the foundational concepts of Quantum Mechanics.

Copyright 2013, Stephen Lahanas

Thursday, June 13, 2013

Can Enterprise Architecture be Agile?

We're taking another break today from Physics to talk about IT innovation again. IT practice changes quite a bit and certain practices come and go. About 10 years ago, Enterprise Architecture (EA) surged to the forefront of the IT community primarily because it was adopted as part of the key acquisition processes connected to the Federal Government. The law that brought this about was called the Clinger-Cohen Act and it provided portfolio management expectations for IT which were designed to help avoid large-scale IT project failures (ones costing the taxpayers more than $100 million) but also applied to all IT investments. Not too long after this law was passed in 1996, the Department of Defense and other agencies redesigned their IT processes / methodologies to become compliant with Clinger-Cohen. Soon after that the practice of EA became very much in demand.

EA was around before this of course, namely through the introduction of the Zachman framework in the late 80's, early 90's, but most organizations hadn't adopted it yet. The federal mandate provided an excellent incentive both for federal and commercial adoption. It also created some issues however, namely that much of the early work in EA became tied to a more procedurally intense form of practice - in other words, to many it seemed to be a bit bureaucratic in nature. The perception quickly became that EA was not an easy undertaking or something that could be used to produce quick results. This is similar to what happened to E-learning at roughly the same time for about the same reasons - government adoption and stewardship was both a blessing and curse to the industry.

So the following question is now often posed within IT in regards to Enterprise Architecture: is it possible for EA to be Agile? In most quarters, the immediate reaction would be a resounding no. But perhaps we're being too harsh in our initial judgement. First, though, we need to define what Agile would mean in an EA context:
  1. It would require rapid turnarounds for designs and decisions.
  2. It would require the ability to integrate both with traditional (waterfall) methodology and Agile development methodology.
  3. It would need to support and promote collaborative problem-solving within the organization using it. In other words, an EA group doesn't become another island or silo, it is the mechanism that bridges all of the silos.
Based on these definitions, EA can be, and definitely is, Agile if practiced correctly. The following articles provide examples of how various aspects of EA can be integrated within an organization and remain Agile:

EA provides the superstructure within which all other enterprise capability resides and interacts.

Many of the articles included in this blog were produced from projects that followed an Agile EA approach, including the Intelligent Healthcare framework, Semantic COP and Governance as a Service. I've applied Agile EA in the following domains thus far:
  • Defense
  • Aerospace 
  • Healthcare
  • Retail
  • Manufacturing 
  • Finance
I've also applied it to the following IT focus areas:
  • Cyber-security
  • Data Management
  • Services / application design
  • Portal / CMS
So returning to the original question - can EA be Agile - my answer is yes. The key to making that happen involves the following assumptions:
  1. Clearly defined expectations for what the EA group or architect must accomplish.
  2. A willingness to improvise rather than following industry patterns or methodologies too closely.
  3. The ability to work to deadline - to engineer a design around the constraints.
  4. A willingness by the organization to allow the silos to actually work with one another (with an architect as mediator) - this is far less common than you might imagine.
  5. The ability to develop architecture that is easily translatable and traceable to implementation level designs. In other words, alignment of all design related efforts must be built into the Agile approach.
  6. The ability to make decisions quickly. 
  7. A willingness to experiment. Many architects and organizations make the mistake of separating EA into a logical or somewhat abstract exercise and not allowing it to be involved in actual prototypes or POCs. The best way to ensure that key design decisions make sense is through an understanding of the technology in question.
We will talk more about how EA is practiced or ought to be practiced in future posts...

Copyright 2013, Stephen Lahanas

Wednesday, June 12, 2013

Deconstructing Time part 6 - Particle v. Wave

In our last post, we promised to tackle some of the more important questions in modern physics. Why are we doing this? Well, each question has specific relevance to the main Use Case we began with regarding how to find coordinates in Time. We'll begin where Einstein began, by asking about Light (in fact, what many people don't know is that Einstein earned his Nobel prize not for Relativity but for his short paper on the photoelectric effect, which addressed the question we're about to pose).

Light exists within a spectrum of the larger EM field - of course that's only part of the story

What is light? Is it matter, is it energy, is it a particle, is it a wave, or is it some unique physical construct that has no parallel? These were questions that inspired Newton, Descartes, Einstein, Maxwell, Lorentz and many others. In the 5th century BC, the philosopher Empedocles postulated that light emanated as rays or beams from the human eye outwards (more of a wave view). Five hundred years later, Lucretius described light as atomic particles shooting here and there in straight lines. René Descartes proposed that light was more like a wave than an atomic particle. Newton came back and said no - light really was particles. In the 1800's, a number of scientists working with electricity discredited the Newtonian view of light as particles, and the Electromagnetic Wave theory became the dominant view. Then came Einstein, Relativity and Quantum physics - and once again the notion that light was made of particles came back, but this time as part of a "duality." The current view of what light is may sound a bit confusing then - according to modern Physics, it is both a particle and a wave, and the two aren't necessarily related.

In the photoelectric effect - more energetic light knocked electrons out of the metal while less energetic  light didn't

At the heart of this dispute is what appears to be conflicting evidence - some illustrating wave-like behavior for Light, and other evidence showing particle-like behavior. The folks who believe it is a wave view light as radiation (e.g. radiated energy in wave-form patterns); yet starting with Einstein, and then with the Quantum school of Physics, Light is viewed as packets (of particles) called Quanta (or Photons) that when combined exhibit wave-like behavior or form waves. Here are the current definitions:
The "classical" view of light is as a wave. The wave involves perturbations in both the electric and magnetic fields as light travels through space at the speed of light (299800 km/s). 
The quantum view of light is as a particle-like wave packet. Each wave packet is called a photon. Each photon of a certain wavelength has the same amplitude and shape, so that the energy of each photon is the same. The energy of a photon is given by Planck's equation: E = hν = hc/λ. Higher intensity of light corresponds to a greater number of photons passing by per unit time.
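Planck's equation above is easy to check numerically. Here is a minimal sketch (the constants are the standard defined values; the 550 nm wavelength is just an illustrative choice for green visible light):

```python
# Photon energy from Planck's equation: E = h*nu = h*c / lambda
h = 6.62607015e-34   # Planck's constant, J*s
c = 2.99792458e8     # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy (in joules) of a single photon of the given wavelength."""
    return h * c / wavelength_m

# A green photon (~550 nm) carries a few tenths of an attojoule:
E = photon_energy(550e-9)
print(E)  # ~3.6e-19 J
```

Note that shorter wavelengths (higher frequencies) give higher energies per photon - the same relationship that drives the photoelectric effect discussed above.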
Light waves and particles behave differently - how can they be the same?

So is light an electromagnetic wave, or is it Photons and Quanta? Is it energy or matter, or both? This is a tough question. Light can't be described in the same manner as other elemental matter in Physics or Chemistry. It is not something that has a nucleus with electrons spinning around it, nor does it seem to be related to subatomic particles of matter. Light also exhibits certain special characteristics: 1 - it can be used to transmit information, and 2 - it seems to be the fastest moving thing in the Universe. This is where Light and Time begin to intersect, as the speed of Light, or C, is the measure of both distance and time at the galactic and universal scale. Here is the current, commonly accepted definition of what an electromagnetic field is:
An electromagnetic field (also EMF or EM field) is a physical field produced by electrically charged objects. It affects the behavior of charged objects in the vicinity of the field. The electromagnetic field extends indefinitely throughout space and describes the electromagnetic interaction. It is one of the four fundamental forces of nature (the others are gravitation, the weak interaction, and the strong interaction).
The field can be viewed as the combination of an electric field and a magnetic field. The electric field is produced by stationary charges, and the magnetic field by moving charges (currents); these two are often described as the sources of the field. The way in which charges and currents interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law. From a classical perspective, the electromagnetic field can be regarded as a smooth, continuous field, propagated in a wavelike manner; whereas from the perspective of quantum field theory, the field is seen as quantized, being composed of individual particles.
We could spend a lot more time trying to understand both the particle view and the wave view theories, but ultimately we end up in the same place. Our current understanding of what light is seems contradictory and incomplete.

At this point, I'm going to turn back to what I described as Intuitive Physics in our last post. Can an intuitive approach help us reconcile the two worldviews on light? I think it can. In any field of endeavor there are times when we confuse the issue - this happens in IT, in politics and so on. In the case of light, the real issue as I see it is defining what it consists of, not how it behaves. We could use any number of analogies to highlight this distinction - for example, how does one explain the collective movement of fish in a large school or birds in a flock? What seems apparent when looking at nature is how these 'collective' forms express themselves over and over again. A flock of birds or a school of fish exhibits group behaviors which are wholly unique and different from the behaviors of individual birds or fish - in fact they are behaviors that simply aren't possible at the individual level.

Schools of fish exhibit behaviors which seem to be reflective of a larger natural Geometry...

The collective nature of the group of individuals transforms that group into something more than the sum of its parts (or particles). When we study birds or fish, however, there is never any question that a flock or a school is made up of individual birds or fish, because we can see and measure them directly. And of course, birds and fish are clearly constructed from known matter (elements) - we know this because all aspects of this analogy are subject to examination at our immediate scale of reference.

Light is different - in fact all subatomic particles are different to some extent. They exist at a scale that is difficult for us to comprehend and impossible to measure accurately. A quark is perhaps 10 to the -24 meters in length (a septillionth - 1/1,000,000,000,000,000,000,000,000 - of a meter). That's pretty small. Photons are much larger than that (visible light waves have wavelengths of roughly 400 to 700 nanometers; light particles, however, can't really be measured yet, so their size remains more mysterious). This question is perhaps the single most obvious example in Science of how the perspective applied to a problem can change the theoretical outcomes or solutions related to it. In this case, our current perspective on light doesn't allow us to reconcile our theories regarding it.

The aftermath of the 2004 Tsunami

Analogy 2 - The Tsunami
We'll use a real-world event to illustrate this next example. On Sunday, December 26, 2004, a massive underwater earthquake triggered one of the largest, most destructive waves recorded in human history. The wave spread across the entire Indian Ocean and somewhere around 230,000 people in 14 countries lost their lives. This event represents an extreme example of wave behavior. The Tsunami (such waves used to be called Tidal Waves, but that term has fallen out of favor since the waves have little to do with tidal forces) was generated by what is estimated as a Magnitude 9.0 earthquake.

To give that perspective, we need to understand the measurement systems for earthquakes: the Richter Scale and the Moment Magnitude Scale. The Richter Scale is logarithmic, meaning that each level represents ten times greater shaking amplitude than the previous one, and an even higher multiple (roughly 32 times) of energy. The Moment Magnitude Scale does a better job of assessing the total energy released - on that scale the earthquake that launched the Indonesian Tsunami was roughly equivalent in energy output to 616 million Hiroshima atom bombs.
I looked elsewhere and saw some radically smaller figures of energy release - I used this calculator to come up with my number (and some scientists contend that the true size of the quake was 9.3, which leads to an even larger value - 1 billion 700 million Hiroshima bombs).
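For readers who want to sanity-check these figures themselves, the standard Gutenberg-Richter energy relation, log10(E) = 1.5*M + 4.8 (E in joules), can be computed directly. Note that this relation yields a figure in the tens of thousands of Hiroshima bombs - consistent with the "radically smaller figures" mentioned above; the bomb-equivalent count depends heavily on which energy relation and bomb yield one assumes:

```python
# Gutenberg-Richter energy relation: log10(E) = 1.5*M + 4.8  (E in joules)
HIROSHIMA_J = 6.3e13  # ~15 kilotons of TNT, a commonly quoted yield (assumed)

def quake_energy_joules(magnitude):
    """Seismic energy released by an earthquake of the given moment magnitude."""
    return 10 ** (1.5 * magnitude + 4.8)

e90 = quake_energy_joules(9.0)
e93 = quake_energy_joules(9.3)
print(e90 / HIROSHIMA_J)  # ~32,000 Hiroshima bombs for M9.0
print(e93 / e90)          # ~2.8x more energy for M9.3
```

Each whole magnitude step multiplies the energy by 10**1.5, or about 31.6 times - which is why the choice between 9.0 and 9.3 changes the answer so dramatically.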

So how does any of this relate to the Particle v. Wave issue?

Let's break it down. Water exists as molecules composed of 2 Hydrogen atoms and 1 Oxygen atom. Water combines to form droplets, then pools, then ever-larger bodies of collectively contiguous molecules. An ocean is nothing more than an extremely large collection of water molecules. Molecules are by nature particles. However, as we all know from personal experience, water when viewed collectively exhibits wave-like behavior. In fact, all other descriptions of wave behavior began with our initial experiences or observations of water. So, the intuitive physicist (e.g. you and me) would say something like this: water particles exhibit wave-like behaviors as energy is applied to them (the collective particles). The more energy applied within a specific frame of reference, the more powerful the waves become - however, different types of energy may affect the waves differently. The tidal motions of the Moon around the Earth express themselves in globally manifested yet relatively low-energy waves. In other words, the daily tides are powerful enough to occur every day across the planet but end up producing low-amplitude (read small) waves. However, a massive, abrupt release of energy such as that which occurred in the Indian Ocean on December 26, 2004 produced a relatively brief series of massively powerful, high-energy (read large) waves. The waves achieve their highest amplitude just as they reach the shore.

Tsunami waves are different from all other ocean waveforms.

This description actually closely parallels Einstein's, Bohr's and Planck's discussion of the Photoelectric Effect, in that higher-frequency (more energetic) light displaced more electrons just as the more energetic Tsunami waves caused more destruction (more energy applied causes more energy released). The world of subatomic and atomic physics seems to correlate to the realm of macroscopic physics in this case, doesn't it?

Let's take this a bit further though. The intuitive physicist looking at the Tsunami waves never really questions the fact that water is made up of particles, because we already understand the water molecule and the elements it is made of, and can manage them at our scale of existence (from our immediate perspective). However, the theoretical physicist has a much more daunting challenge with light. Light travels at almost impossible speeds, is hard to capture, and doesn't seem to have any real physical characteristics upon initial examination. So the theoretical physicists were forced to focus more on the behavior of light as opposed to light itself, since many of those behaviors and their effects were measurable. They were left to speculate that light might be particles, or it might be waves, and that both represented some sort of energy that simply hasn't been defined in the same manner as the rest of the physical universe.

Let's extend that back out to our Tsunami example - this behavioral approach would be like focusing primarily on the wave propagation of the Tsunami and the resulting destructive impacts to the coastline, rather than looking at water dynamics at a molecular level, or even trying to account for discrete/specific damage to the environment such as the injuries the rushing water caused to the hundreds of thousands of victims. In any scientific endeavor, we make pragmatic trade-offs about what is feasible or reasonable to measure and what isn't. For a Tsunami, which occurs at macro scales, we can realistically measure everything from water dynamics at smaller scales to the global wave propagation emanating from the quake epicenter.

Yin and Yang - wave or particle; or perhaps either / or is the wrong perspective

So what are we trying to say? Here it is - the wave/particle duality is not a complicated or bizarre aspect of nature. It is due entirely to the perspective within which we view the problem. In both the water and light scenarios we are dealing with an entity that exists as particles in its smallest form (this has been proven conclusively for light). For both light and water we are able to witness unique behaviors that arise when particles are grouped together and subjected to different types of energy. In both cases those behaviors manifest themselves in the form of waves - in other words, when grouped light (quanta) or grouped water (just water) move together with certain energies, they form waves with characteristic frequencies (the wavelength being the distance between successive wave crests). The amount of energy, or the nature of how the energy is applied, will change those frequencies. High-energy application will cause shorter, more rapid waves - in electromagnetism this becomes x-rays; in water it becomes waves that travel 600 miles an hour underwater until reaching the coast, where the energy not already spent is released (in slower, yet much higher waves).
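The speed figure quoted above can be put in context with the standard shallow-water wave formula v = sqrt(g*d), which governs tsunamis because their wavelengths dwarf even the deepest ocean. A small sketch (the 4 km depth is an assumed, illustrative value for the open Indian Ocean) gives speeds of the same order as the one quoted, with deeper water producing faster waves:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_mph(depth_m):
    """Shallow-water wave speed v = sqrt(g * d), converted from m/s to mph."""
    v_ms = math.sqrt(g * depth_m)
    return v_ms * 2.23694  # m/s -> mph

print(tsunami_speed_mph(4000))  # ~443 mph over 4 km deep open ocean
```

This also explains why the waves slow down and pile up as they approach the shore: as the depth d shrinks, the speed drops and the energy goes into amplitude instead.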

In current scientific theory, we've made a simple logical error when viewing the wave/particle duality question - we confused our initial trade-off decision for how to study light with the actual nature of light itself. In other words, we have attributed to light a structure that in fact only exists as part of a collective behavior. So, just as a Tsunami cannot exist without trillions upon trillions of water molecules, neither can a light wave exist without trillions upon trillions of light particles (of an indeterminate nature). Experiments on light are able to discern both the structure of light as well as its collective behavior, which gives rise to our current view that a duality exists. Yet that duality is no more complex than the duality between water molecules and Tsunamis. One fits neatly within the other (the particle within the wave). It's all a matter of perspective and our ability to view the problem.

This is precisely why de Broglie was able to extend Einstein's photoelectric effect to all matter and have it work out. This is known as the concept of "matter waves" or "de Broglie waves" in physics, which earned him the Nobel Prize in 1929.
In quantum mechanics, the concept of matter waves or de Broglie waves reflects the wave–particle duality of matter. The theory was proposed by Louis de Broglie in 1924 in his PhD thesis. The de Broglie relations show that the wavelength is inversely proportional to the momentum of a particle and is also called de Broglie wavelength. Also the frequency of matter waves, as deduced by de Broglie, is directly proportional to the total energy E (sum of its rest energy and the kinetic energy) of a particle.
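The de Broglie relation, lambda = h/p, applies to anything with momentum. A minimal sketch comparing an electron to a baseball (the masses and speeds here are illustrative choices, not values from the source) shows why matter waves only become noticeable at tiny scales:

```python
h = 6.62607015e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, velocity_ms):
    """Matter wavelength: lambda = h / p = h / (m * v)."""
    return h / (mass_kg * velocity_ms)

electron = de_broglie_wavelength(9.109e-31, 1.0e6)  # electron at 10^6 m/s
baseball = de_broglie_wavelength(0.145, 40.0)       # 145 g ball at 40 m/s
print(electron)  # ~7.3e-10 m - around atomic scale, measurable in experiments
print(baseball)  # ~1.1e-34 m - far too small to ever detect
```

The inverse relationship between wavelength and momentum is exactly what the quoted definition describes: heavier or faster particles have shorter matter wavelengths.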
We know matter consists of particles - having wave-like behavior doesn't then make it a wave only.

So light, like all matter in the physical universe, can be described as individual particles that when grouped collectively exhibit uniform properties more profound than the sum of the parts (in the case of water and the Tsunami, water through its waves becomes an energy storage and translation medium). This wave behavior is a fundamental feature of the universe itself and is intimately connected to motion, gravity and all other field-like characteristics defined by modern physics.

So back to our question and the matter of perspective. Is light a wave or a particle or both? It is both, but one is required in order to satisfy the other. In other words, light waves without particles cannot exist, any more than a Tsunami can exist without water molecules. This is an important distinction because if we accept it, then we can bridge Classical Physics (electromagnetism) with Relativity and Quantum theory without any concerns about weird, unexplained dualities. Also, there is another critical consideration in regard to viewing light as a particle - it is not necessarily limited by Locality. In other words, particles may not be constrained to the speed of light, C, that restricts (electromagnetic) wave movement.

This has some pretty interesting implications both for Quantum Computing and Time.
In our next post we begin to explore Relativity and introduce Mach's Principle.

copyright 2013, Stephen Lahanas

Sunday, June 9, 2013

Deconstructing Time part 5 - Intuitive Physics

Before we dive into modern Physics theory I think it's probably appropriate to introduce a concept and provide a critique on the current way Physics is practiced. This is not meant to be critical per se - it's simply another way to illustrate the method I described in the first post on Deconstructing Time. Any time an IT Architect such as myself accepts an assignment, we run the risk of setting foot within an entirely new domain - at least it is new for us. Over the years, I have worked in the military, in Healthcare, in manufacturing, in telecommunications and so on. Until now though I had never immersed myself in the field of Physics. Like many folks, I have followed Physics as a news item and studied key concepts as part of my education - but I was never asked to create or validate any solutions for the field.

Intuition isn't as painful as it looks...

The term "Intuitive Physics" then is not meant to represent something wholly unique to that field - it could apply equally to Healthcare, where it might be called "Intuitive Medicine." What I'm referring to with this concept is the ability to take a generic (cross-domain) methodology for problem-solving and direct it at an entirely new field without being or becoming an expert in that field. In fact, not being an expert in the target field is one of its major value propositions, in that one can avoid the Group-think traps that tend to snare many of the more formally accredited members of the group. The reason I use the word "intuit" is that the ultimate goal of the methodology is the ability to assimilate enough knowledge from the specific field, quickly enough, to instinctively or intuitively parallel the major lines of logical thought that have developed or will develop. Again, this isn't because of any unique skills or abilities per se, other than the ability to keep one's mind completely attuned to problem solving instead of getting bogged down in the internal politics and distractions of any given group.

This is a process or experience that has worked more or less the same way for me as an architect for more than 15 years. In every field where I was asked to provide IT solutions, I was able to anticipate major issues and parallel the thought processes that went into the current solutions and those that were driving the next-generation requirements. If I weren't able to do this, then instead of needing weeks or months to get ramped up in a new field, it would take me years - Intuitive Solution design, then, is a very pragmatic skill for IT practitioners who "jump domains."

The irony here is that intuition lost its place in physics for many decades after the 1930's

Getting back to Physics: while I'm still early in the process of immersion, and perhaps can't really consider myself immersed in the sense I usually achieve on most assignments (as this investigation is entirely virtual in nature and part time), I have still begun to notice certain characteristics of the field. I always make observations like these while on assignment in order to better understand why something is or isn't working well within a particular environment - this helps drive solution options as well as helping to define constraints, dependencies and risks. So, for Physics thus far I've noticed the following things:

  • In general, there is a lack of community support for questioning core assumptions. This seems to be changing, but appears to have been very rigid for about 75 years. I should point out that this situation changes over time, as evolutionary 'bursts of dissent' often drive major advances, but these have historically occurred far apart. The pace of these bursts may be on the verge of serious acceleration.  
  • In Physics, there seems to be some difficulty viewing concepts from holistic perspectives. This is a common phenomenon and occurs whenever the subject matter becomes sufficiently complex - which is also why it is very widespread in IT, where we refer to it as "Silos." In Physics though the problem seems worse - I've yet to see visualizations that link all of the core concepts of modern physics together within shared contexts. I'm still looking, but if I don't find them I will provide a few myself in order to demonstrate what's missing.
  • Physics is overly dependent on the proof. Now this may sound counter-intuitive at first, but it all depends upon how "proof" is defined. Arbitrarily narrow proofs tend to stifle the ability to voice dissent and thus lock group thinking into static modes. At times, physics has managed to escape these boxes by diverging in different directions, but at the expense of being able to link them back together holistically (which is why Quantum Mechanics, Special Relativity and General Relativity aren't entirely lined up - and why many other possible theories are completely dismissed). 
  • To expand a little on the previous point - non-physicists will note that in nearly all published explanations of any key aspect of Physics theory, the proofs are tossed into the middle of the narrative. For outsiders, this almost appears compulsive in nature (as you rarely find examples where it doesn't occur). It has the effect of potentially excluding 90% of the world's population from the debate, as it ensures that only those who have studied some physics are likely to brave all these proof-filled narratives. So the math becomes a barrier - and perhaps more of one when the formal trade groups that control Physics (well, that's what we might call them) determine which proofs are allowed, and how they will be defined, without much input from the majority of the members. This is justified by stating that any wider results are invalid - despite the volumes of conflicting and inexplicable data accrued over the past century - but more on that later. This proof-centric situation has the chilling effect of preventing the field from challenging its own core assumptions.
  • Physics has been, for the past century, more of an experimental than an applied science. This is changing, and changing primarily due to IT. But as in R & D for IT, the experimentalist view of the world often becomes divorced from reality, and the balance between research and application should always include a healthy portion of practice. 
  • Like most experimental science, Physics is severely constrained by the trade-offs associated with producing reproducible, manageable results. What does this mean? It means that we too often build arbitrarily simplistic experiments that inevitably tell us only what we want to hear, or worse yet give us a false impression of the problem. The best way to illustrate this issue is perhaps to describe a typical medical experiment: a test group of patients is given a new drug, say for hair growth, while another test group is given a placebo (let's say an M & M). The whole focus of the experiment then revolves around just two variables and the other variables are factored out. The problem with this is that we know the real world doesn't work that way, so we can never be sure of the dynamics behind the test results. 

I will insert more observations regarding the practice of Physics as we proceed in our series on Time, in the context of actual decisions and events regarding current theory. Our following articles will focus on some of the key questions that inspired modern Physical theory - the first being: "What is light?" and its corollary, "Is light a particle or a wave?" These in turn will help us to understand the question most closely associated with our Use Case - "What is Time?"

copyright 2013, Stephen Lahanas

Saturday, June 8, 2013

Integrating Design & Architecture

We're going to take a short break from Physics and dive back into IT - actually, pretty soon we will see them intersect in our discussions - but for now we're going to expand on a topic I introduced last year: Integrated Design.

Many people view design work and architecture within IT as separate disciplines - and perhaps as many different disciplines. It's my firm belief that all design-related efforts (and architecture most definitely falls into that bucket) are related and contextual. The challenge of course is finding a way to integrate these activities - first from a process perspective and eventually through some sort of automation. One of the best ways to achieve this is through the development of an Integrated Solutions Framework.

One of the interesting characteristics of most Enterprise Architecture frameworks is that they are built atop meta-models. Those meta-models are in fact usually a 3NF (3rd normal form) relational data model (or ERD – entity relationship diagram). Having the models allows for the ability to deploy design artifacts to architecture repositories – it also provides a metadata framework that allows for reporting based upon the information resident within the designs. But most importantly perhaps, this illustrates how important it is to have a logical “mapping framework” for all information related to design. While using one of those frameworks within an actual software solution dedicated to architecture management is nice, it isn't necessary. One can recreate the logical framework as an organizational construct and still receive quite a number of benefits from it.

Here is a visual representation of what that standard framework might look like...

A prototypical framework for integrating design and architecture

In the following sections, I will provide potential benefits and explain the framework in more detail.


  • This framework provides a clear linkage between specific elements of the architecture/design and requirements.
  • This framework allows for consistent and logical descriptions of all solution elements across and within a given organization (and is scalable across organizations as well).
  • This framework allows for management of all non-architecture designs alongside formal architecture deliverables. This is extremely important as in most organizations the majority of IT design is not captured as architecture (and in some organizations, none is).
  • This framework can be used with no automation, some automation or total automation – it is also easily translatable to most EA frameworks.
  • This framework can be integrated with / into any solution methodology and any standards management paradigm.

Levels (vertical):

  1. Enterprise
  2. Segment
  3. Capability
  4. Detail

Concepts (horizontal):

  1. Design Activity or Description
  2. Design Deliverable
  3. Example/s


  • Core / Holistic Architecture – The Holistic Architecture can exist potentially at several levels – it is roughly equivalent to an Enterprise Architecture (EA). Like an EA, the Holistic Architecture can encompass one or many domains and organizations. The main idea is that within some defined boundary the Holistic Architecture covers the comprehensive solution in question.
  • Architecture Tier - An Architecture Tier tends to represent a logical layer within a holistic solution – for example this could be the shared Cloud Hosting environment that all other elements of the solution reside within. 
  • Reference Architecture – A Reference Architecture can be a tier (more horizontal) or it could be a vertical segment (crossing tiers) within the larger Holistic solution. For example, a BPM workflow capability may stretch from the infrastructure level to the User Interface and be considered the standard approach towards providing workflow capability to that organization (thus it is a vertical segment).  
  • Core Capability – A capability is often associated with a slightly lower level of detail than the Reference Architecture. An example of this might be a BPMN design tool for business users to create their own workflow. Within the example Reference Architecture listed above, the design tool is located on one of several servers and provides its own unique interface that may not be generally available to the rest of the users in the presentation layer (for the BPM / workflow tool). Capability definition is used to facilitate requirements development and Use Case modeling. A Capability in this framework approach is roughly equivalent to a complex pattern.
  • Complex Pattern – A Complex Pattern simply means that more than one pattern is being employed. Although allocation of scope to patterns is a somewhat subjective exercise, the goal within the paradigm presented here is to provide as narrow a pattern as possible in order to help drive specific decisions regarding solution allocation and configuration standards. The above Capability example represents a complex pattern, as there are several distinct elements to the workflow mapping solution (each of which may involve different options to realize). 
  • Simple Pattern – A Simple Pattern is the lowest level of architecture or design control; in other words this is where we specify the details for how the solution ought to work. For example, using the ongoing example, we might assign separate Simple Patterns for the Design UI approach for the modeling tool based upon whether it is being delivered to a PC or a mobile device. 
  • An important consideration here is the notion that an organization can have more than one methodology; in fact many organizations now have at least two - a traditional Waterfall approach and an Agile approach of some type. More complex organizations may have any number of methodologies to deal with: for management of Data Centers, ITIL could be considered a methodology; for government acquisition, DoD 5000 is used; for data-centric projects, many groups create approaches based upon DAMA's DMBOK (Data Management Body of Knowledge); and so on. The design mapping described and illustrated above could be applied to any or all of those situations. 

copyright 2013, Stephen Lahanas

Deconstructing Time part 4 - Introducing Motion

In our last post on Time, we began providing some context to the discussion of time as well as context to where it is that Time actually occurs – e.g. within events and reference frames. For this discussion to really make sense though we need to step back and look at the big picture (actually this is something that will need to occur several times over the course of this series). At some point soon we’ll take a closer look at what Special Relativity, General Relativity and Quantum theory mean and how they relate to time, but for now we're going to spend a few moments discussing Motion. One key element that intersects all of those core theories in physics is the notion of Motion (and of course this applies to Newton's Classic theories as well). Motion and Time are inextricably linked. All of our current perception related to time, and all of our current measurement systems connected to it are “motion-based.”

What does 'Motion-based' mean? Well, let’s list a few examples below:

  1. A day is one rotation of the earth.
  2. An SI second (the current standard for measuring seconds) is based on the radiation emitted by a Cesium-133 atom as it transitions between two hyperfine energy states - one second is defined as 9,192,631,770 periods of that radiation's regular pulsations.
  3. A year is a standard orbit of Earth around the Sun.
  4. A month is a standard orbit of the Moon around the Earth.
  5. A light year is the distance a photon of light travels in one year - strictly a measure of distance, but one defined entirely by motion over time. 
  6. We travel at roughly 70 miles per hour on the highway on our way to work (e.g. the velocity expressed equals time to cover a distance in continuous motion). 
  7. Time flies when you're having fun. I've added this here because there is an important perception consideration for time that appears, at least, to be related to the intensity of activities (this, as we will see, is also motion-related, although at first it doesn't seem to be intuitively). 
  8. The Galactic Year is the time it takes the solar system to orbit around the center of the Milky Way (estimated to be 250 million years or so).
  9. Geocentric Coordinate Time (also known as TCG) - This is defined as “a coordinate time standard intended to be used as the independent variable of time for all calculations pertaining to precession, nutation, the Moon, and artificial satellites of the Earth. It is equivalent to the proper time experienced by a clock at rest in a coordinate frame co-moving with the center of the Earth: that is, a clock that performs exactly the same movements as the Earth but is outside the Earth's gravity well. It is therefore not influenced by the gravitational time dilation caused by the Earth.” We will list all formally defined "Time Measurement Systems" 2 posts from now.

Now this last example I included is particularly interesting because it is an international standard already being used to help assign coordinates in 4 dimensional spacetime – it was defined specifically to conform to current theory relating to relativistic space. However, as one can tell from reading the description, it is largely limited to the study of coordinates within our solar system. Another interesting fact is that it is only one of several competing coordinate time systems that have arisen to help astronomers deal with the same Use Case we described in our first post on Time – the ability to locate events within spacetime.
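To underline just how uniformly motion-based the units listed above are, here is a small sketch expressing several of them in SI seconds (the galactic year uses the rough 250-million-year estimate quoted above; all other values are standard definitions):

```python
# Motion-based time units expressed in SI seconds.
# The SI second itself is defined by motion: 9,192,631,770 periods
# of the radiation from a Cesium-133 hyperfine transition.
CESIUM_PERIODS_PER_SECOND = 9_192_631_770

DAY = 86_400                   # one rotation of the Earth, in seconds
YEAR = 365.25 * DAY            # one orbit of the Earth around the Sun (Julian year)
GALACTIC_YEAR = 250e6 * YEAR   # one solar orbit of the Milky Way (rough estimate)

print(GALACTIC_YEAR / DAY)     # Earth rotations per galactic year
```

Every constant in this chain is a count of some repeating motion - there is no time unit here that is defined without reference to something moving.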

One of the main problems with all of the coordinate systems and even with our current theories on spacetime is the earth-centric perspective we tend to toss in. What does that mean? Well, for the above example we are taking into account planetary motion but not the overall motion of the solar system, the galaxy etc. We touched upon the “taxonomy” of motion context in an earlier post and here it is again (and it actually extends to much lower measurements):

  • The earth rotates – motion vector 0
  • The earth orbits the sun - motion vector 1
  • The solar system travels within its neighborhood in the Milky Way - motion vector 2
  • The Milky Way itself is rotating – motion vector 3
  • The Milky Way is also hurtling toward the Andromeda galaxy and will collide with it in roughly 4 billion years - motion vector 4
  • The universe itself is expanding outward at incredible speeds - motion vector 5. In fact, according to the latest data the expansion is speeding up, and some of the most distant galaxies even appear to be moving away from us faster than light (1.4 C)!

So, each of these motion contexts listed above has its own velocity, trajectory and mass/gravitational considerations.

There's a whole lot of motion going on around us – do we continue to ignore it?

This taxonomy doesn't include any of the motion that might be occurring on the surface of the planet. That could include walking, riding in a plane, the motion occurring beneath us in tectonic plates or even in the Earth’s core (which is what generates our magnetosphere), the motion within our bodies, or even motion at the sub-atomic level. There is motion everywhere; it exists at every scale imaginable. More than that, all of this ‘Contextual Motion’ is occurring along different trajectories. So let’s talk about this in relation to a coordinate system for finding an event in spacetime. The current coordinate systems use four vectors (three for 3D space, one for time t) to locate planetary bodies in space. A scientist might ask: where will Saturn’s moon Titan be ten years from now, on January 10th at 2:00 pm Eastern time? Using this type of coordinate system and extrapolating the known orbital behavior of Saturn around the Sun and of Titan around Saturn, we can plot where it will be at the designated time and date from the perspective of Earth. This works to some degree of precision for the following reasons:

  1. It’s still very close to us (from both the galactic and universal perspectives).
  2. We are deliberately pegging it to Earth time, so using standards based on Earth motion is OK.
  3. We assume all other contextual motion will continue along the same trajectories it has been following, so we never bother to factor it in – it becomes a more or less “absolute” background. The only motion we concern ourselves with is within our own Solar system.
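The kind of extrapolation described above can be sketched very roughly in code. This is a toy circular-orbit model – real ephemerides use far more sophisticated numerical integration – and the periods and distances are approximate published values, with the starting angles chosen arbitrarily for illustration:

```python
import math

def orbital_angle(period_days: float, elapsed_days: float, initial_angle: float = 0.0) -> float:
    """Extrapolate the angular position (radians) of a body on an
    idealized circular orbit after elapsed_days have passed."""
    return (initial_angle + 2 * math.pi * elapsed_days / period_days) % (2 * math.pi)

# Approximate periods: Saturn ~29.46 years around the Sun,
# Titan ~15.945 days around Saturn.
ten_years_days = 10 * 365.25
saturn_angle = orbital_angle(29.46 * 365.25, ten_years_days)
titan_angle = orbital_angle(15.945, ten_years_days)

# Combine the two motion contexts: Saturn's heliocentric position plus
# Titan's Saturn-centric position (distances in AU, circular approximation).
saturn_x = 9.58 * math.cos(saturn_angle)
saturn_y = 9.58 * math.sin(saturn_angle)
titan_x = saturn_x + 0.00817 * math.cos(titan_angle)
titan_y = saturn_y + 0.00817 * math.sin(titan_angle)
```

Note that even this toy model already nests one motion context inside another – which is exactly the point the taxonomy above is making, just carried only two levels deep.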

So, if we try to extend this out to a galactic scale we see where the problems will arise almost immediately:

  • We’re dealing with massive distances.
  • We can’t – or shouldn't – use Earth-focused time scales for managing coordinates.
  • Contextual motion becomes very important once we move away from a localized perspective. We can no longer treat external motion as an absolute background, because in reality it isn't one. At the galactic scale it becomes dynamic (and in fact it always was, even at our scale, although that dynamism isn't regularly perceptible to us).

The problem becomes more immediately noticeable if we decide we need to locate more discrete events – e.g. something smaller than a moon’s orbit – something, let’s say, like the “Peach Orchard” action that occurred on the second day of the Battle of Gettysburg (see figure below). This particular event also has the distinction of having occurred in the past rather than the present or future. The diagram illustrates rough geospatial coordinates and major arcs of movement, but without topography (altitude). It wouldn't be too much of a stretch to add topography and clock readings (taken by observers of the battle) next to the movements in the diagram, although we cannot be sure how precise those clock readings were. The real question we need to ask ourselves is this: would the 4-vector coordinates for this reference frame and set of events (the location in Gettysburg, plus the times the action started and finished) be sufficient to actually locate the timespace where the battle occurred?

We know where it occurred and when it occurred, but would this be enough to find the actual event?
The timespace described by the 4-vector coordinates could be logically correct once we add the missing information – we have represented the location and duration within the context of Earth's movement. But what really happened since the battle took place – did the Earth travel around and around in the same timespace ‘groove’ for 150 years, or did it travel in its orbital groove across multiple layers of contextual motion – most of which was not moving in exact alignment with our movement? I’m afraid it was the latter. The Earth we’re standing on now isn't moving across an absolute background (what Newton called Absolute Space) – we have moved a phenomenal distance across space in multiple directions simultaneously. Isn't that impossible? Well, maybe not. If we were to express each layer of contextual motion as a distinct dimension, then it becomes feasible to track motion across as many layers as may be involved by simply adding more vectors. The level of accuracy we wish to achieve in determining the coordinates will be dictated largely by how many vectors we choose to apply to the calculations. Of course, there will be a point where adding more vectors becomes counter-productive (at some point we would run up against issues associated with quantum physics).
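A back-of-the-envelope calculation makes the “phenomenal distance” claim concrete. The speeds below are rough published figures for a few layers of contextual motion (Earth's orbital speed, the Sun's speed around the galactic center, and the solar system's speed relative to the cosmic microwave background rest frame); each is reported separately since each layer follows its own trajectory:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
KM_PER_LIGHT_YEAR = 9.4607e12

# Rough published speeds (km/s), treated as constant magnitudes.
motions_km_s = {
    "Earth around the Sun": 30.0,
    "Sun around the galactic center": 230.0,
    "Solar system relative to the CMB rest frame": 370.0,
}

years_since_gettysburg = 150
distances_ly = {
    label: speed * years_since_gettysburg * SECONDS_PER_YEAR / KM_PER_LIGHT_YEAR
    for label, speed in motions_km_s.items()
}

for label, d in distances_ly.items():
    print(f"{label}: ~{d:.3f} light years")
```

Even the smallest of these layers has carried the Gettysburg battlefield more than a hundred billion kilometers from where it sat in 1863 – none of which appears in a purely Earth-pegged 4-vector.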

You might be asking yourselves why folks in the scientific community haven’t been viewing these issues in this manner; here’s why:

  1. For most of the past century we were still assimilating and validating the core premises of Relativity and Quantum Mechanics – it was a lot to digest. We've also become obsessed with why those theories (Special Relativity, General Relativity and Quantum Mechanics) don't match up, rather than challenging the underlying premises of all three. The theories have become "institutionalized" – we experienced 30 years or so of radical conceptual expansion that ended in the 1930's, and then we settled in around those core theories rather than taking them to the next level.
  2. It’s only recently that we've begun looking further into space and decided to look for tiny objects like planets.
  3. It’s hard for people residing on Earth not to view the Universe from Earth’s perspective – and it wasn't that long ago that most people thought that Earth was the center of the Universe.
  4. It’s a lot easier to make the math work if 1) the formulas are simpler and 2) there is a fixed reference point to compare against. And science currently rewards the least complicated theories and formulas. The problem with that, of course, is that some aspects of nature are complicated.

This last point is particularly interesting. Those of you who have read anything about Special Relativity will probably recall thought experiments such as the observers on the train and platform watching the lightning bolts hit from different perspectives. We tend to construct all of these scenarios with two observers in relative motion to one another, or with one observer stationary in relation to another. The thing is, when we construct them that way we leave out all of the potential contextual motion that may be occurring (or complex event scenarios). Perhaps the limited set of variables is OK for determining whether time dilation or other relativistic phenomena are occurring, but is it sufficient to predict truly accurate event coordinates? Maybe not...

We will explore that – along with Relativity, Quantum Mechanics and Mach's Principle – in our next post. The thing is, in order to understand Time, we have to tackle almost all of Modern Physics.

copyright 2013, Stephen Lahanas

Monday, June 3, 2013

Deconstructing Time - part 3: Reference Frames

In our past two posts on time, we began to outline some key concepts, most notably the notion of an “event.” Today we’re going to try to add some more context to the discussion and then talk about “Frames.”

A Contextual Dilemma
Let’s say NASA made a breakthrough in propulsion and managed to secure some funding (the real miracle, perhaps). A mission is being planned to the nearest star believed to have an Earth-like planet: Kepler 10b. This star system is approximately 500 light years from Earth. In basic terms, this means that the light we see today emanating from Kepler 10b is not from that star’s present, but from 500 years in its past. We can somewhat accurately plot where Kepler 10b is now, as well as all of its locations along its journey between 500 years ago and its “now,” because it has been travelling on a stable course – at least up to the point we’ve been able to witness, 500 years in its past. (Who knows – aliens from Alpha Centauri could have blown up the whole star system 200 years ago in its time, and there would be no way for us to know yet.)

NASA artist conception of Kepler 10b
So, you might be thinking it’s obvious that we would be travelling to the “Now” version of Kepler 10b, right? Well, maybe it’s not so obvious. Let’s assume that a propulsion approach better than today’s has been discovered, providing us speeds up to 20,000 km/s. This still doesn't come anywhere close to light speed (it’s about 1/15th of it). At this better yet still sluggish speed, it would take our astronauts roughly 7,500 years to arrive at the Kepler 10b star system. This obviously doesn't equate to the ‘Now’ we just extrapolated for Kepler 10b a moment ago – the ‘Now’ that is equivalent to the distance it has traveled since the light we see in Earth's now was generated (500 years ago).

What if we were able to travel at or close to the speed of light (or faster than light)? According to Special Relativity, if we are travelling very close to the speed of light (say 99.97% of it), then the trip to Kepler 10b takes about 12 years as felt by the astronauts. For folks back on Earth, just over 500 years will have passed. So, which Kepler 10b do we arrive at if we're travelling that fast (important to know, so we’re actually pointed at where/when it will be)? Let’s see if we can guess:

  1. It’s not the Kepler 10b from which the original light we were looking at came from.
  2. It’s not the Kepler 10b that is in the same instance of Now as Earth is. (this is a complex topic and we will explore it in more depth soon).
  3. It’s not the Kepler 10b that the slower, millennia-long journey would have brought us to (because we are travelling close to light speed).
  4. It may be the Kepler 10b roughly 12 years after the point at which we measured the 500-year-old light from Earth and determined Kepler's Now (in other words, 512 years on – 12 years into Kepler 10b's future). Or, it may not be.
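The relativistic figures above can be checked with a few lines of code. This is a minimal sketch of the standard Special Relativity time-dilation formula, assuming a constant cruising speed and ignoring acceleration and deceleration phases:

```python
import math

def travel_times(distance_ly: float, beta: float):
    """Return (earth_years, ship_years) for a trip of distance_ly
    light years at a constant fraction beta of light speed."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)  # Lorentz factor
    earth_years = distance_ly / beta          # as measured on Earth
    ship_years = earth_years / gamma          # as felt by the astronauts
    return earth_years, ship_years

earth, ship = travel_times(500.0, 0.9997)
print(f"Earth clock: ~{earth:.0f} years, ship clock: ~{ship:.1f} years")
```

At 99.97% of c the Lorentz factor is about 41, which is what compresses five centuries of Earth time into roughly a dozen years of shipboard time.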

The reason there is some uncertainty is that all of our measurements are being generated based on time as defined in Earth or near-Earth contexts. We are also assuming (perhaps incorrectly) that Earth and Kepler 10b are moving away from one another at a constant speed. There are some galaxies that, despite universal expansion, are moving towards one another.

Why would any of this be important? Well, say we were dealing with a typical Science Fiction scenario in which people from Earth were preparing to make first contact with a race of intelligent beings who were on the verge of discovering atomic energy. In that situation, showing up even five years too late might be devastating, but it would also be somewhat embarrassing to show up several hundred years early (to witness the invention of the rifled musket instead).

Events occur within Reference Frames (a two dimensional visualization of a multi-dimensional phenomenon)

The invention of a weapon might be viewed as an event, as would any number of activities involved with traveling between Earth and Kepler 10b. Finding events – across both time and space, and across significant distances in space – is a tricky business. One way to better understand events and how to locate them is to place them in the context of a Reference Frame.

Here is the Wikipedia definition:
In physics, a frame of reference (or reference frame) may refer to a coordinate system used to represent and measure properties of objects such as their position and orientation. It may also refer to a set of axes used for such representation. 
Alternatively, in relativity, the phrase can be used to refer to the relationship between a moving observer and the phenomenon or phenomena under observation. In this context, the phrase often becomes "observational frame of reference" (or "observational reference frame"). The context may itself include a coordinate system used to represent the observer and phenomenon or phenomena.
The Reference Frame is the timespace box within which an event or events occur. The box, though, is not limited to three-dimensional geometry, as it exists across both time and space. In fact, the box doesn't really exist at all except in the sense of providing a conceptual grid that allows us to distinguish events from one another. The other key aspect of a Frame is that it allows us to define the boundary for Simultaneity.

Frames don't always move in the same direction...
Sorry for tossing in another new term so quickly – but it’s a key one that helps us cement events within frames. Simultaneity is often viewed or described in purely relativistic terms, but here we’re going to view it a bit more broadly. Simultaneity is the Now that corresponds to an event within a frame or frames. Simultaneity is usually restricted by the ability to pass information back and forth between observers at the speed of light, C. In the old days, this represented information that could be seen directly – nowadays it can be a live internet video conference between Dayton, Ohio and Tokyo, Japan. This live conference may be occurring 13 hours apart, with the folks in one city sitting in a different calendar day than the participants in the other. Yet the event is synchronized through a global agreement on the tracking of terrestrial time (Coordinated Universal Time, the successor to Greenwich Mean Time) and communications passing across satellites and phone lines at roughly the speed of light. So, even though it may be 6:00 pm in Dayton and 7:00 am tomorrow in Tokyo, the meeting is a simultaneous event which occurs in the same reference frame.
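The Dayton/Tokyo example is easy to verify: both wall-clock readings name the same instant, i.e. the same point in the shared reference frame. Here's a minimal sketch using Python's standard zoneinfo module (the specific date is just an illustrative summer evening):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# 6:00 pm on a June evening in Dayton, Ohio (US Eastern time)
dayton_meeting = datetime(2013, 6, 19, 18, 0, tzinfo=ZoneInfo("America/New_York"))

# The same instant expressed on a Tokyo wall clock
tokyo_meeting = dayton_meeting.astimezone(ZoneInfo("Asia/Tokyo"))

print(tokyo_meeting.isoformat())  # 7:00 am the next calendar day in Tokyo
```

Two different dates on two different clocks, yet one and the same event – which is exactly the broader sense of simultaneity being described here.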

This is curious even at our own limited scale here on Earth, isn’t it? The people in Dayton are in effect travelling into tomorrow to attend the meeting, and the folks in Tokyo are going back in time – strictly speaking. In a larger sense, all of us – in Tokyo, in Ohio and across the rest of the planet – are travelling forward through time in a sequential, linear manner. So, we have already proved that time travel does indeed exist, sort of.

But let’s get back to Reference Frames; here’s our definition of what they are:
A Reference Frame is the logical boundary within which an event or events may occur. The Frame exhibits (approximate) Simultaneity for the events and participants within and can be used to provide a coordinate system for tracking events. 
The biggest difference between my definition and the one associated with Relativity Theory surrounds the use of the word “logical.” By adding this we allow much more flexibility in the concept. Let’s say, for example, someone invented a quantum communications system between Earth and Mars. Instead of each transmission taking anywhere from roughly 3 to 22 minutes to arrive (depending on the planets' relative positions), the communication would be instantaneous (or stranger still, a reply might be received before a message was sent – we’ll talk about that eventually).

Thus a news conference with our Mars exploration team could become a simultaneous event occurring within one very large (for us) Reference Frame. Let’s add some other characteristics for Frames while we’re at it:

  • Frames can last for indefinite duration.
  • Frames can overlap.
  • Frames generally exist in the same time phase or phase space (more on that later).
  • Frames expand and collapse – there isn’t a fixed grid in timespace – only the potential for one, when potential is realized, frames appear (or more accurately can be discovered).
  • Frames are supported under both classical and relativistic physics – they may also be supported by Quantum physics but we may never be able to assign coordinates at that scale.
  • Frames are by their nature, multi-dimensional.
  • Events can be linked within and across frames through “Continuity Paths.”
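To make those characteristics a bit more concrete, here is a toy data-structure sketch. It is entirely illustrative – nothing in physics mandates this representation – but it captures two of the properties listed above: frames can last indefinitely, and frames can overlap:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Event:
    label: str
    coordinates: Tuple[float, ...]  # (x, y, z, t) plus any extra motion vectors

@dataclass
class ReferenceFrame:
    label: str
    start: float                    # frames can last for indefinite duration...
    end: float = float("inf")       # ...so the default end is open
    events: List[Event] = field(default_factory=list)

    def overlaps(self, other: "ReferenceFrame") -> bool:
        # Frames can overlap: here, simply whether their time extents intersect.
        return self.start <= other.end and other.start <= self.end

# The Earth-Mars news conference from above, as one large frame
earth_mars = ReferenceFrame("Earth-Mars news conference", start=0.0, end=2.0)
mission = ReferenceFrame("Mars mission", start=1.0)  # still open-ended
earth_mars.events.append(Event("first question", (0.0, 0.0, 0.0, 0.5)))
print(earth_mars.overlaps(mission))  # True
```

The "Continuity Paths" linking events across frames could then be modeled as references between Event objects in different frames, though that is left out of this sketch.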

In conclusion for today’s post, I’d like to leave you all with one key thought: the context of the trip I described earlier makes it clear that any significant travel into Space is in fact an exercise in both Space and Time travel. Why that’s the case is the topic of our next post.

copyright 2013, Stephen Lahanas