How should cost estimators respond to rapid acquisition?

As the Department of Defense keeps up the pressure on rapid acquisition, all functional areas are wondering how they can best support the policies. Cost estimation has for many years been associated with long timelines, extensive planning, and numerous data calls. What changes are required to make the cost community responsive to rapid acquisition, while remaining committed to realism and oversight?

Background.

To understand where the cost community must go in the future, we must first understand the past. Up through the 1950s, the Harvard researchers Merton Peck and Frederic Scherer found that government officials rarely if ever performed penetrating cost analysis ahead of contract negotiations. Only in one “trifling case” involving a million dollars was cost analysis performed. They criticized how time and performance were favored at the expense of cost control.

During the 1950s, RAND systems analysts had been conjuring up a rational method of weapons choice that relied on quantitative evidence rather than judgment. David Novick, who was called the “father of cost analysis,” had been working on methods of statistical analysis of historical data. He devised accounting systems to collect costs by Work Breakdown Structure – a hierarchy of systems, subsystems, and components – in order to help facilitate technical tradeoffs. Costs, after all, were the other side of technical performance.

RAND analysts argued that weapon system alternatives needed to be fully defined and costed ahead of work start in order to facilitate an economic analysis. The analysis required that all future costs be incorporated, or else it would be incomplete and potentially dangerous. So cost analysis was broadened to consider not just the next contract but the total program, including all elements that aggregated into a logical, organic system. That increased the complexity of the task, and indeed required an extensive planning stage that took years.

When Robert McNamara took the helm in 1961, cost analysis was emphasized ahead of all program decisions. Systems like the Economic and Contract Information System (ECIS, forerunner of today’s CCDR) and the Program Evaluation and Review Technique (PERT, forerunner of today’s EVMS) had at the end of the 1950s started to provide cost data. The Air Force created a cost analysis school at AFIT in 1962.

The rise of cost analysis accompanied RAND’s systems approach to weapons choice. It was noticed early on by Congress that the number of new program starts had fallen dramatically. Atlas ICBM program manager Bernard Schriever said that even though there were fewer program starts, a more penetrating analysis of costs and effectiveness made it probable that the programs which did get underway were very much better programs. He pointed to the TFX, later the F-111, as a program that he “completely agreed” with, which had a long planning stage before it started.

Rapid Acquisition.

By the end of the 1960s, it became clear that there was a crisis in weapon systems. The F-111, for example, was an unmitigated disaster. Melvin Laird replaced Robert McNamara as SecDef, and his deputy, David Packard, began to press for prototyping and decentralizing decisions to the services. Instead of waiting for a thorough cost analysis of a detailed plan, competitive prototype projects would be pursued and evaluated.

The Lightweight Fighter competition, for example, provided two firms a fixed amount of money for a demonstrator with little definition. As chief designer of the YF-16, Harry Hillaker later recalled that they were not required to deliver anything functional – it was a best-effort contract. Decisions toward a full program were incremental.

In these years, we find the adaptation of the cost community to defense policies. Whereas the systems analysis approach had first sought to fully define technical requirements, then sent that information to the cost analyst to put a dollar figure on it, the rapid acquisition efforts of David Packard went the opposite way. Design-to-cost (DTC) set a fixed unit cost for the program, and asked for the most capability within that cost. Here, the estimator simply fixes a cost deemed affordable, and the engineers work within that constraint. It very much diminished the role of the cost estimator.

Quickly, there was backlash against Packard’s rapid acquisition approach. The Lightweight Fighter, for example, began to proceed without oversight from Congress – or indeed without the approval of many in Air Force or Navy leadership. Program opponents complained that even years later, there were no defined requirements. Prototyping and design-to-cost programs did not provide sufficient insight to leadership to justify that the approach was indeed cost-effective – and therefore legitimate.

The requirements-pull approach, which then would initiate a full-program cost analysis, was re-established with the rise of Milestone Zero (1977) and OMB Circular A-109 (1976). Jimmy Carter’s zero-based budgeting again put a premium on fully costing programs based on the approved requirements involved. It slowed down acquisition again.

In the 1990s, a new cost analysis method arose: cost as an independent variable (CAIV). The interesting aspect of CAIV is that it didn’t emasculate the cost community as design-to-cost might have. CAIV continued to require a detailed model linking costs to performance requirements, but it allowed cost to be an input into the model rather than an output. Tell me how much you’re willing to spend and I can tell you how much performance you’ll get. Fundamentally, cost analysis remained time-consuming and detailed under CAIV.
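The flip from output to input can be shown in a few lines. This is only a sketch: the power-law form and both coefficients are invented for illustration, not drawn from any real CAIV model.

```python
# CAIV-style inversion of a cost-performance model (all numbers hypothetical).
# Suppose a fitted model links unit cost ($M) to a performance parameter
# (say, payload in kg) via a power law: cost = a * payload**b.
a, b = 0.8, 1.3  # illustrative fitted coefficients

def cost_from_payload(payload_kg):
    """Traditional direction: performance in, cost out."""
    return a * payload_kg ** b

def payload_from_cost(cost_m):
    """CAIV direction: cost is the independent variable."""
    return (cost_m / a) ** (1 / b)

# "Tell me how much you're willing to spend..."
affordable = 50.0  # $M unit-cost target
print(f"achievable payload: {payload_from_cost(affordable):.1f} kg")
```

The model itself is unchanged; only the direction of the question is reversed, which is why CAIV still demands the same detailed cost-performance linkage.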

What’s Next?

It would seem that the return of rapid acquisition programs would also imply a return to design-to-cost techniques, or at least incremental funding and constant updating. For example, a fixed cost would be provided for, say, a prototype contract, and the contractor would do the best that they could within that constraint. That prototype would then also inform the full-scale development cost and production unit cost. This then looks something like: pick an “affordable” fixed cost, then extrapolate from actuals onto new phases of acquisition.
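One common way to extrapolate from prototype actuals to production is a cost-improvement (learning) curve. The sketch below assumes a notional 90% slope and an invented prototype cost; both are placeholders, not estimates for any real program.

```python
import math

# Extrapolating from a prototype actual to production unit costs using a
# unit-theory learning curve: cost(n) = T1 * n**b, where b = log2(slope).
prototype_actual = 120.0   # $M, observed cost of unit 1 (hypothetical)
slope = 0.90               # assumed 90% cost-improvement slope
b = math.log(slope, 2)     # unit-curve exponent (negative)

def unit_cost(n):
    """Projected cost of the n-th production unit."""
    return prototype_actual * n ** b

lot_total = sum(unit_cost(n) for n in range(1, 101))  # first 100 units
print(f"unit 100: ${unit_cost(100):.1f}M, first 100 units: ${lot_total:.0f}M")
```

The point of the sketch is the direction of inference: the fixed-cost prototype generates the actuals, and the actuals anchor the later phases, rather than a full-program estimate preceding any work.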

That might be a bit premature. After all, Section 804 rapid acquisition programs require budget justification. That may take one or two years, in which time the requirements are developed and used to support a cost estimate through either an engineering build-up or a statistical analysis of historical data (a cost estimating relationship, or CER). That makes it look much more like how things are done today, just on an accelerated timeline. And the benefit is that this is the comfortable path for policymakers.
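The statistical approach mentioned above can be sketched briefly. A common CER form is a power law fitted as a line in log-log space; the weight-based form and every data point below are invented for illustration.

```python
import math

# Fitting a hypothetical weight-based CER: cost = a * weight**b,
# estimated by ordinary least squares on log-transformed historical actuals.
history = [  # (empty weight kg, first-unit cost $M) -- invented data
    (4000, 30.0), (6500, 48.0), (9000, 62.0), (12000, 85.0),
]
xs = [math.log(w) for w, _ in history]
ys = [math.log(c) for _, c in history]
n = len(history)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = math.exp(mean_y - b * mean_x)

def estimate(weight_kg):
    """Apply the fitted CER to a new system's weight."""
    return a * weight_kg ** b

print(f"fitted CER: cost = {a:.4f} * weight^{b:.3f}")
print(f"estimate for an 8,000 kg system: ${estimate(8000):.1f}M")
```

Note what the method presumes: a stock of comparable, normalized historical data and a stable relationship between the technical parameter and cost. Both take time to build, which is the tension with rapid timelines discussed below.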

But to some degree it defeats the purpose of rapid acquisition – a year or more is still a long time. Moreover, rapid acquisition isn’t just about starting new programs fast. It is about being agile, iterating between technology and requirements, and pivoting as new information is gained. Agile means that there is some vague sense of where you’re going, but no fully defined plan. That means a “cost estimate” is premature. All that is known is what will happen in the first iteration, or sprint. What will be accomplished in future iterations will be defined by what is learned in the first. And so agile development programs tend to look more like a level-of-effort program in which capability is fungible across fixed, but arbitrary, pieces of time.

Two points pop up. First, the level-of-effort cannot be arbitrary. The level-of-effort for communications software will be different from that for a new airframe. Second, the level-of-effort will not be constant over time. For example, the prototype phase may run at one level-of-effort, development at another, testing at a third, and so forth. Indeed, startups usually operate at minimal staffing, but then if they’re successful they need to start scaling up rapidly. Usually these transitions are event-driven: reaching CDR, or finding product-market fit. They cannot be predicted well enough to write a financial plan of when that scaling will come. Imagine a startup having “rocket fuel” poured on it before getting close to product-market fit. It would be wasteful, or you’d have to spend time reprogramming the money.

After this discussion, what is the cost estimator to do? Well, it depends on leadership’s intent. If leadership still wants to pursue a requirements-pull approach – as seen perhaps with Next-Gen OPIR, where we have a good idea of the requirements, we are going to commit to production and operations, and that requires a fully costed plan extending five years or beyond – then the cost estimator must satisfy that need. The product is a fully defended cost estimate based on credible data and tied to approved requirements. That means the cost estimator has to do this faster. And generally that means investing in standard data collection, standard CERs, and so forth – whether at the engineering build-up level or at the parametric level. But this often conflicts with rapid acquisition: it takes time and money to set up cost accounting systems, and there is pushback from program managers and contractors alike. So leadership must also provide serious support to cost data collection. It can’t be done otherwise.

(Note: cost data collection is still required for Section 804 middle-tier programs — this post was drafted in Aug. 2019 before those policies were released.)

Now, if leadership’s intent is different – especially if it is what I hear Air Force acquisition executive William Roper saying – then rapid acquisition means something different. It means implementing agile development, incremental decision-making, and trial-and-error discovery procedures. Agile development, where the requirements emerge as development goes along and there is continuous operational testing, means that there cannot be a firm baseline of expectations that extends for the duration of a program. Further, cost data collection would change along with the program’s direction, making standard CERs difficult or impossible. Most cost products would be infeasible or of little utility to decision-makers.

But the cost estimator has a role here. Documentation is always important for purposes of oversight. How much was spent? What was obtained? How much did similar outputs cost? How much will we need to scale operations? Government funding will not be released without such information, even if leadership wants to go “fast.” And so to my mind, the cost estimator looks something like the program office’s functional historian. Let me explain.

The contractors will certainly collect cost and technical data. But forcing a standard across the DOD could be infeasible. So the cost estimator should be there to receive what is collected, organize and normalize it, link it to other sources of data (e.g., budgetary), and provide important technical context. This effort substantially helps oversight look back on what actually happened and provides accountability.

The cost estimator’s role today is predominantly forward-looking – setting future financial plans. That could still work in an agile world, because history will support justifications for ramping up financial requirements based on what was actually achieved, and what is now expected to occur in the next period. These expectations for the future are also based on history. When similar programs were ready to ramp up, what were their expenditures? This means that the network of cost histories must be interrelated, and because of the ad hoc nature of the data collection, it requires humans in the loop rather than automated processes. The CER, from this perspective, is something like a taxonomy: what funding requirements are needed for this phase of this class of system?

A last point. The risk-adjusted cost estimate seems to have no role in an agile development environment. If funding decisions are incremental rather than telic, then what is the point of funding to a 50th or 70th percentile cost estimate? This was a big criticism from Ernest Fitzgerald, who saw that programs were funded to higher levels than the requirement because of risk adjustment, and because the contractors had access to budget data they knew how much more money could be squeezed out of the program over the coming years. Similarly, government officials were incentivized to obligate the entire funding or risk losing it in current and future years. So the risk-adjusted estimate becomes self-fulfilling.
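For readers unfamiliar with the mechanics: a risk-adjusted estimate is typically built by simulating uncertain element costs and reading funding levels off the resulting distribution (the “S-curve”). The sketch below uses lognormal draws with invented parameters purely to show where the 50th and 70th percentiles come from.

```python
import random

random.seed(1)
# Monte Carlo sketch of a risk-adjusted ("S-curve") estimate.
# Each WBS element's cost is drawn from a lognormal distribution whose
# parameters (mu, sigma of ln cost in $M) are notional, then summed per trial.
elements = {
    "airframe":   (3.0, 0.25),
    "propulsion": (2.5, 0.35),
    "avionics":   (2.2, 0.45),
}
trials = sorted(
    sum(random.lognormvariate(mu, sigma) for mu, sigma in elements.values())
    for _ in range(20_000)
)
p50 = trials[len(trials) // 2]          # median program cost
p70 = trials[int(len(trials) * 0.70)]   # 70th percentile funding level
print(f"50th percentile: ${p50:.1f}M   70th percentile: ${p70:.1f}M")
```

The gap between the two percentiles is the extra headroom that, per the criticism above, contractors and officials alike are incentivized to consume – which is exactly why the number tends to be self-fulfilling.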
