The Agile EVMS Desk Guide sounds like an oxymoron. EVMS [earned value management system] by its nature follows the waterfall methodology; agile is inherently iterative. Trying to reconcile the two amounts to saying that agile in project management is just a bunch of successive waterfall approaches, so we do the same thing on shorter timescales for sequential “chunks” of capability. But as we shall see, the two are really inconsistent, and EVMS has to be thrown out the window for an effective agile process to take root.
I want to start this argument by addressing a crucial aspect of EVMS implementation: the necessity of a freeze period, during which actual performance is compared to the baseline prediction:
Freeze period: The EVMSIG states that, “In order to solidify the PMB [performance management baseline] for accurate performance measurement, it is necessary to establish a freeze period. During the freeze period, changes to the PMB are limited to maintain its integrity. At a minimum, detail planning of planning packages must occur prior to the commencement of that work within the freeze period.” The definition of a contractor’s freeze period, including the mechanics and rules for controlling baseline changes during that timeframe, should be documented in the contractor’s EVM System Description.
If EVMS is to provide any useful measurement and control, it is critical that technical features be fully described and locked down so that allocations of time and resources can be attributed and tracked. If the project participants run into unexpected difficulties, find new opportunities, or pivot in any way, then the PMB is no good as a measure of cost and schedule performance. It has to be frozen, or it’s useless.
So while the EVMS agile guide is more flexible than, say, requiring fully detailed planning packages for the entire contract effort – it allows that detailing to occur in “rolling waves,” which was already standard practice – it still requires the lock-in of features at some interval. That doesn’t sound so bad, right? Let’s say we “open” a planning package every month or every two weeks and lock something down. Surely we should be able to forecast well enough at that range.
The obvious problem is that assigning cost and schedule to the out-period planning packages is simply arbitrary. Why should performance of the first or second iteration have any bearing on out-periods that are as yet undefined? The roadmap is just that, a vague roadmap that will morph over time with the growth of knowledge. And so current performance has no relation to total project performance. The program is, in a sense, never finished. It is continuous capability delivery, and it is up to leadership to decide when enough requirements have been met or enough money has been spent.
But we have another problem. There is a misalignment between the period in which plans are detailed and when the relevant knowledge to create those plans becomes available. Let me break that down a bit.
When we talk about adding detail to the EVMS in a rolling wave, we are talking about a significant process. First, the set of sprints resulting in features has to be defined. What new features of the system will be added? What activities will result in their achievement? Those are technical questions. Not only does this necessitate prioritizing across the range of user requirements – which, by the way, is only available after user testing of the latest feature – it also implies some known technical solution, one that can be broken down into logical steps and given resource and time estimates.
The logical steps then need to be sequenced and interrelated in the Integrated Master Schedule. What are the steps of requirements, design, code & unit test, QA, etc., required to deliver these features? How do they connect to one another such that a slip in design affects the start and end dates of, say, code & unit test? Without such information, progress cannot be tracked and the implications of slips or cost growth cannot be calculated.
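To make the dependency point concrete, here is a minimal sketch of how a finish-to-start schedule network propagates a slip. The task names, durations, and dates are made up for illustration and are not drawn from any real Integrated Master Schedule:

```python
from datetime import date, timedelta

# Hypothetical finish-to-start network: each task has a duration (days)
# and a list of predecessors. Dict order here is already topologically sorted.
tasks = {
    "requirements":   {"duration": 5,  "preds": []},
    "design":         {"duration": 10, "preds": ["requirements"]},
    "code_unit_test": {"duration": 15, "preds": ["design"]},
    "qa":             {"duration": 5,  "preds": ["code_unit_test"]},
}

def schedule(tasks, start, slips=None):
    """Compute (start, finish) dates; `slips` adds extra days to a task."""
    slips = slips or {}
    dates = {}
    for name, t in tasks.items():
        # A task begins when its latest predecessor finishes.
        begin = max((dates[p][1] for p in t["preds"]), default=start)
        finish = begin + timedelta(days=t["duration"] + slips.get(name, 0))
        dates[name] = (begin, finish)
    return dates

baseline = schedule(tasks, date(2024, 1, 1))
slipped = schedule(tasks, date(2024, 1, 1), slips={"design": 4})

# A 4-day slip in design pushes code & unit test and QA out by the same 4 days.
delay = (slipped["qa"][1] - baseline["qa"][1]).days
```

This is the kind of linkage the IMS encodes: without it, a slip in one activity cannot be translated into a revised finish date for the whole increment.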
Of course, all this planning has to occur before the next increment of agile development starts. But that is not all of it.
The accounting system then has to be set up to match that plan. The Agile EVMS desk guide asks for a new control account to be set up for each new product feature. That’s where cost actuals will be accumulated and compared to baseline predictions. The control account manager then takes the baseline cost and allocates it downward to work packages, each of which is a collection of discrete, interrelated work activities found in the Integrated Master Schedule. These work packages require further detailing by hours of various labor categories and material costs, with built-in mechanisms for allocating indirect costs. This requires integrating with the financial system, HR system, material inventory system, and so forth.
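A rough sketch of that downward allocation, using hypothetical work packages, labor hours, rates, and an assumed flat indirect burden (real systems apply far more elaborate rate structures):

```python
# Hypothetical work-package detail under one control account.
# Names, hours, and rates are illustrative only.
work_packages = {
    "WP-01 requirements": {"budget_hours": 120, "labor_rate": 95.0},
    "WP-02 design":       {"budget_hours": 300, "labor_rate": 110.0},
    "WP-03 code & test":  {"budget_hours": 480, "labor_rate": 105.0},
}

INDIRECT_RATE = 0.35  # assumed flat overhead burden on direct labor

def baseline_cost(wp):
    """Direct labor cost plus an allocated share of indirect cost."""
    direct = wp["budget_hours"] * wp["labor_rate"]
    return direct * (1 + INDIRECT_RATE)

# Work-package budgets roll up to the control account baseline.
control_account_budget = sum(baseline_cost(wp) for wp in work_packages.values())

# Cost actuals arrive from the accounting system at the control-account
# level only; the comparison to baseline happens here, not per work package.
actuals_to_date = 48_500.00
percent_of_budget_spent = actuals_to_date / control_account_budget
```

The point is the direction of flow: budgets are detailed downward into work packages, while actuals are compared upward at the control account, and both must reconcile with the financial system of record.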
Oftentimes it is assumed that the cost accounting system must identify actual costs at the work package level (DCMA surveillance will sometimes lead you to believe that), but in reality cost actuals need only come in at the higher control account level, while detailed planning and forecasting happen at the work packages. Ultimately, however, all these processes must be consistent.
Now, all this takes a great deal of time from project managers. The abbreviated description above does not mention many other aspects of EVMS upkeep, including variance analysis reports, risk and opportunity reports, allocating costs to various reporting structures, updating estimates at completion (EACs), providing three-point estimates, answering questions from the customer, and so forth. All of this is a huge distraction for managers, forcing their attention toward matters like EVMS and how the government will receive it. It puts pressure on project labor to provide managers with information and to stick to the plans that have been created.
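For reference, the variance figures and EACs mentioned above come from the standard earned value formulas. The numbers below are illustrative only; EVM practice also offers several alternative EAC formulas beyond the simple one shown here:

```python
# Standard EVM quantities, with made-up numbers for illustration.
bcws = 100_000.0  # budgeted cost of work scheduled (planned value)
bcwp = 85_000.0   # budgeted cost of work performed (earned value)
acwp = 95_000.0   # actual cost of work performed
bac = 400_000.0   # budget at completion

cv = bcwp - acwp   # cost variance: negative means overrunning
sv = bcwp - bcws   # schedule variance: negative means behind plan
cpi = bcwp / acwp  # cost performance index (< 1.0 is unfavorable)
spi = bcwp / bcws  # schedule performance index (< 1.0 is unfavorable)

# One common EAC formula: assume the cost efficiency to date continues.
eac = bac / cpi
```

Producing and explaining these numbers every reporting period, across every control account, is where much of the management time goes.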
All that upfront work, which can take several months or even more than a year on an average defense contract, is supposed to happen before the freeze period of the next increment. Let’s say, optimistically, that all the planning for a one-month increment can be done in one week: I know precisely what the next increment’s features will be, and I can get all my CAMs to plan the increment into my EVMS for measurement by the start of next week.
That means that three weeks into my monthly increment, managers will be distracted from delivery and test to focus on planning the next increment. But the features haven’t been delivered and user feedback hasn’t been generated, so the planners have little basis other than the product roadmap to plan from. If new information is learned between week three and week four of the increment (a full quarter of the time), then the team will either execute on a suboptimal plan or suffer a delay until an updated plan and freeze period can commence.
This is what I mean by a misalignment of planning periods and the time when the knowledge to plan becomes available. But it isn’t just that. EVMS necessitates the lock-down of technical features, cost accounting, scheduling, and so forth. It creates a different type of culture, one where the needs of the plan outweigh what the project participants themselves potentially think is right. It creates a system of relative inflexibility.
And so we shouldn’t be surprised that the Defense Innovation Board has recommended ridding all software programs of EVMS. They advise: “… remove earned value management (EVM) for software programs.”
But the DIB was focused on software. In reality, all development programs have software-like aspects – they are creating intangible assets. EVMS should be rescinded from all development efforts in the Department of Defense. Such rigid planning and tracking of accounting costs might make sense in early production activities when standard manufacturing processes are set up, but even then you can say that production tooling is a learning activity as well.
It is not clear what role industrial-era planning processes like EVMS have in the 21st century. What EVMS seems to accomplish is giving government officials the “warm and fuzzy” feeling that comes with an avalanche of data, one that distracts oversight agencies like the GAO from providing real accountability.
I mostly agree, but I am frustrated by how everyone wants to talk about ongoing agile development once things are up and running, but nobody wants to talk about what it takes to get to the point where you can be agile. For major defense systems, the effort to get to the Minimum Viable Product (MVP), including all of the architecture, design, data rights, cybersecurity, etc. required for an effective and suitable minimum capability, is enough to justify EVM to track its progress. It’s after that MVP is in place that EVM no longer makes any sense, as you are now essentially in a realm of best-effort time-and-materials contracting.
Yes, I think the architecture question and so forth is a good point for new platforms, but perhaps less so for applications on existing platforms. In any case, I’m not against waterfall-style planning in the abstract. Planning has to be done to create some coherence. But EVMS becomes a ritual that distracts from the learning process, and in fact from the true goals of a project, which are only imperfectly represented by an explicit sequence of tasks.