The federal acquisition community is well aware of these issues and has been struggling for the past two decades with new approaches to solve the problem. A lot of positive developments have come about, pieces of the puzzle, yet the puzzle remains incomplete. The most significant advance in federal acquisition theory emerged about seven years ago: "Performance-Based Acquisition." The best way to describe what that means is by reviewing its process elements, or steps (taken from an executive summary):
- Step 1 - Establish an integrated solutions team
- Step 2 - Describe the problem that needs solving
- Step 3 - Examine private-sector and public-sector solutions
- Step 4 - Develop a performance work statement (PWS) or statement of objectives (SOO)
- Step 5 - Decide how to measure and manage performance
- Step 6 - Select the right contractor
- Step 7 - Manage performance
Background
Government acquisition didn’t begin in the United States. Many ancient civilizations had highly complex bureaucracies: Egypt, China, the Roman Empire. All of these societies had governments charged with managing the public wealth, and they exercised that power to build roads, temples, ships and armies. And in every case, there was always the possibility of mismanagement and corruption. Some of this country’s most famous scandals involved misappropriation of public funds. Each such event led to reform initiatives, and those reforms, combined with the general evolution of procurement practices and technology, gradually built up to the acquisition laws and workforce we have in place now. But something else has been happening recently that has fundamentally changed the nature of acquisition: the advent of information technology.
Dealing with the procurement of relatively unsophisticated materials or services on an enterprise scale can be challenging enough in its own right; however, in the great majority of instances both buyers and sellers shared a significant advantage: the nature of what was being transacted was thoroughly understood. A hammer, an engine, a sack of potatoes all share a certain lack of ambiguity that is familiar and comforting. It is very easy to determine whether the hammer is faulty, whether an engine runs or whether the potato is fresh or rotten. We also have an easily identifiable cultural context for how much each of these should cost, so that when someone discovers that the government paid $150 for a hammer the discontinuity is obvious.
When the procurement world was largely concerned with this type of acquisition, the process of buying was focused mainly on materials management. The exception, of course, for the past 75 years or so has been research and development, which has always been considered a high-risk proposition. The focus on materials management translated into a contract-centric, bureaucratic approach to acquisition. There was an implied assumption that if all of the steps built into the process (designed to protect the government from fraud) were followed, all would be well; there was little need or incentive to actively oversee or otherwise examine procurements. This assumption began to break down during the Cold War as the United States increased the number and complexity of its R&D projects and began deploying more complex technologies. The early manifestations of the problem mostly involved the failure, delay or cost overruns of a variety of complex weapons systems. At that point, the failures were still the exception, not the rule.
Then, beginning in the mid-1970s, two things happened: first, those complex weapons systems became dependent on software, and second, the federal government became the largest systems development and integration entity in the world. In many ways, the commercial IT industry owes its very existence to the massive investments and fundamental breakthroughs provided by federal programs in the late sixties and throughout the 1970s. The federal acquisition community was being hurtled from a post-Civil War paradigm to the space age in little more than a decade. Materials management is an objective discipline; it depends upon clearly defined, deterministic principles. Software, systems, integration and their related services, however, define their own realities in entirely new contexts; assessment of their value is nearly always an exercise in subjectivity.
The larger the acquisition, the more complex and subjective it generally becomes...
Performance-Based Contracting
By the 1990s, it became apparent that the previous approaches to acquisition were no longer suited to the new environment. A series of reforms was undertaken by the Clinton administration to streamline processes, streamline the workforce and rethink the basic assumptions. Yet problems remained. Then, at the end of the decade, a new philosophy was posited, one designed to deal with the subjectivity of modern-day acquisition. When one examines the seven basic steps of Performance-Based Contracting, a number of realizations become apparent, including:
- It is not materials focused
- It represents a lifecycle process; in a sense, it advocates that acquisition and development are parts of the same lifecycle.
- It forces the government to ask the question: what is it that we’re really buying, and why? This may sound like an obvious question, but believe it or not, sometimes it is never answered properly, or even asked.
- It attempts to superimpose some level of objective measurement atop areas that are notorious for their subjectivity. The Performance Work Statement is the most obvious manifestation of that.
- It acknowledges that from this point forward, the acquisition community must become an active participant in the programs it funds; it is no longer limited to managing the dollars, contracts or paperwork. Acquisition is now a vested partner in the process.
Capability-Driven Acquisition
So what’s the answer? How do we get from here to where we need to be? Like most things, the answer begins at a philosophical level, and all good philosophies begin with a question; ours is this: if we are measuring performance, what exactly is it performance of? Are we measuring the ability to solve a problem (as perhaps hinted by Step 2), or is the problem’s resolution designed to provide something else? Put another way, do we want to judge a contractor by their effort, by their knowledge at a point in time, or by what they produce for us? If they do produce something, will it be relevant, or address what we had actually requested?
Quite a lot of questions, and quite a bit of subjectivity, if our primary focus is only performance, isn’t there? The truth is that a performance-only paradigm will never work, precisely because it stimulates an even greater level of uncertainty than currently exists when all things are measured in this dimension alone. There needs to be a stable foundation to build upon before we can tackle and manage the uncertainty and complexity associated with modern acquisition. That foundation is an understanding of the expectations surrounding anticipated capability. By focusing on capability we completely redirect the attention and efforts of the acquisition community, and we provide them with hard targets for defining and managing performance.
Capability implies an outcome. Capability can apply equally to hardware, software, integration or services; all of these elements imply use that will lead to practical application. This means that all performance must necessarily be directed toward the delivery of capability. The capability becomes the primary criterion for acceptance: not knowledge, not effort, not pieces of work which cannot on their own accomplish a function, but only the resulting outcome-based test demonstrating a capability. So, for our earlier materials management example with the hammer, that hammer would be stress-tested with however many blows were needed to show that it wouldn’t fall apart. For something more subjective, like software, the software must meet the exact capability and performance expectations specified in the requirements. If it must manage 247 data sources, totaling as many as 400 million records, with reports or queries returned in less than 2.5 seconds, then that capability must be demonstrated and validated.
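As a minimal sketch of what such an outcome-based acceptance check might look like in practice (the `system` interface, its method names and the sample queries below are illustrative assumptions, not anything prescribed by this article), an automated test would exercise the delivered system against the stated requirement and withhold acceptance if the measured scale or latency falls short:

```python
import time

# Thresholds taken from the example requirement above.
REQUIRED_DATA_SOURCES = 247
REQUIRED_RECORD_CAPACITY = 400_000_000
MAX_QUERY_SECONDS = 2.5


def run_capability_acceptance_test(system, sample_queries):
    """Outcome-based acceptance: pass only if the delivered capability
    meets the stated requirement, not merely because work was performed.
    `system` is a hypothetical interface to the delivered solution."""
    failures = []

    # Scale checks: the system must actually manage the required volume.
    if system.count_data_sources() < REQUIRED_DATA_SOURCES:
        failures.append("insufficient data sources connected")
    if system.count_records() < REQUIRED_RECORD_CAPACITY:
        failures.append("record capacity below requirement")

    # Performance check: every representative query must return in time.
    for query in sample_queries:
        start = time.perf_counter()
        system.run_query(query)
        elapsed = time.perf_counter() - start
        if elapsed > MAX_QUERY_SECONDS:
            failures.append(f"query '{query}' took {elapsed:.2f}s")

    return (len(failures) == 0), failures
```

The point of the sketch is simply that acceptance hinges on the demonstrated outcome: if any check fails, the capability has not been delivered, regardless of how much effort was expended.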
Now, this leads to some obvious questions. How does one know whether the program is working towards that effective capability or simply burning money? This is where performance measures are supposed to help, but again, if those measures only track project progress in its own context (i.e., milestones A and B are complete), what does that really tell us about capability? Does it solve our problem? What is a milestone: is it a marker for how many resources were expended or how much code was developed, or does it perhaps represent how much of the capability has been achieved? It has to be the latter to matter, and this is precisely the premise for most approaches to what is referred to as ‘agile’ development. For this to succeed, projects or programs cannot simply be divided across work breakdown units tracking progression across time and space. The project must determine up front how elements of capability can be subdivided, and build the development structure around those elements so that each can be released and demonstrated in turn. Progress must be tangible to ensure project success and reduce the risk typically associated with complex endeavors.
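As a sketch of what capability-oriented progress tracking could look like (the class and field names here are illustrative assumptions, not a prescribed structure), progress is reported as the share of capability elements that have actually been demonstrated, rather than as milestones reached or hours expended:

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityElement:
    """One independently releasable, demonstrable slice of the overall capability."""
    name: str
    requirement_ids: list[str] = field(default_factory=list)
    demonstrated: bool = False  # set True only after an outcome-based test passes

def capability_progress(elements: list[CapabilityElement]) -> float:
    """Progress as the fraction of capability demonstrated, not effort expended."""
    if not elements:
        return 0.0
    return sum(1 for e in elements if e.demonstrated) / len(elements)

# Example: three elements defined up front, one demonstrated so far.
elements = [
    CapabilityElement("ingest 247 data sources", ["REQ-001"], demonstrated=True),
    CapabilityElement("store 400 million records", ["REQ-002"]),
    CapabilityElement("sub-2.5-second query response", ["REQ-003"]),
]
print(f"Capability demonstrated: {capability_progress(elements):.0%}")
```

The design choice this illustrates is that each element maps back to requirements and is only counted when it has been demonstrated, which keeps the measure anchored to capability rather than to the project's own internal schedule.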
So how does one define a capability, let alone subdivide it? Requirements, requirements, requirements. Requirements engineering is the most underestimated, undervalued activity in IT. Acquisition professionals have some idea of the importance of requirements engineering and management, but still have not really grasped the most fundamental truth of acquisition: an acquisition is only as effective, and will only ever be as effective, as the requirements associated with it. And capability is expressed chiefly through requirements, both functional and technical. This implies that the most important stage of any project or acquisition process is the beginning. The requirements provide the basis for all performance measurement, for capability demonstration and for the entire scope of the acquisition.
If we were to develop seven steps for capability-driven acquisition, they might look like this:
- Step 1 - Define a scenario in terms of the problem or challenge and the capability needed to resolve it
- Step 2 - Validate the scenario
- Step 3 - Develop an acquisition strategy and determine capabilities
- Step 4 - Expand the scenario; build detailed requirements
- Step 5 - Build a scenario-based RFP; let the contractors describe how they will solve the problem and how they will meet the desired capability test
- Step 6 - Build performance measures and project oversight based upon the detailed requirements; continue to validate requirements and performance throughout the lifecycle
- Step 7 - Continually demonstrate capability through the final iteration to achieve acceptance
Copyright 2012, Semantech Inc. All Rights Reserved