New DoD cost estimating guide fails to address the problems facing the cost community

“No one can predict the future” is an often-used cliché, and yet this is what the DoD asks its cost estimating community to do every day, albeit in a highly structured and disciplined way.

That is from the preface to the DoD Cost Estimating Guide, v1.0. I wish the guide dwelled on that problem a little longer, but it instead goes on to describe the traditional waterfall method of cost estimation. While I'm glad it includes the distinction between inflation and escalation, the guide seems to skip entirely the most pressing problems facing the cost community.

The need for rapid reaction cost estimates.

DoDI 5000.73 allots 210 days for a software acquisition pathway cost estimate, the same as for major capability acquisitions. That's a long time, and perhaps it is fine so long as costing isn't on the acquisition critical path. However, Middle Tier programs have a 60-day timeline after the acquisition decision memorandum. The cost estimating guide says nothing about how that should be addressed, beyond foregoing the CARD. Supporting faster-moving programs is central to the skills cost estimators will need under the Adaptive Acquisition Framework. Yet in one place the guide casts doubt on accelerated schedules for MTA, indicating that they can result in rework and higher cost.

Costing in support of agile development.

The only real mention of agile is a reference to a DAU course, Introduction to Agile Acquisition. The guide does, however, include a new graphic that shows cost estimation as an iterative process. It's nice to say that, but on what cycle time? The cost estimation process flows as one big-bang effort that takes months; after that, you're really just updating it with new data or parameters for a block upgrade some years later. There's no discussion of how cost estimation can support contracts that run in six-month increments, how it can build an estimate for a program that needs the flexibility to pivot direction, or how it can handle a program that doesn't yet have a fixed set of requirements.

One idea is to use an analogy to create a waterfall cost estimate that bounds the program, but then use modular contract tasks and quicker cost/pricing support for specific actions within that. It would be a terrible burden to repeatedly go through a costing and budgeting process for every modular effort.
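For illustration only, here is a minimal sketch in Python, with notional numbers I made up, of how an analogy-based bounding estimate might be tracked against quicker cost/pricing positions for modular contract tasks, rather than re-running a full estimate for each one.

```python
# Notional sketch: track modular task commitments against an
# analogy-based bounding estimate, instead of re-estimating each task.

ANALOGY_BOUND = 120.0  # $M, notional program bound drawn from an analogous effort

# Quick cost/pricing positions for individual modular contract tasks ($M, notional)
modular_tasks = {
    "increment_1_ui": 8.5,
    "increment_2_data_pipeline": 14.0,
    "increment_3_ml_prototype": 11.2,
}

committed = sum(modular_tasks.values())
remaining_headroom = ANALOGY_BOUND - committed

print(f"Committed so far: ${committed:.1f}M")
print(f"Headroom under the bounding estimate: ${remaining_headroom:.1f}M")

# A pivot adds or swaps tasks; the bound is only revisited if headroom
# runs out, not for every modular action.
```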

Costing software and data projects.

The guide talks about source lines of code, function points, and story points as the basis for estimating software. I will simply say that these methods try to treat software as hardware. They want to create work units for software in the same way that machine-time is a proxy for fabrication output. The problem is that software is non-routine knowledge work while machining is routine physical work, and that distinction invalidates such a method of cost estimation. Nowhere is this challenge addressed. Some people will claim they can get fairly accurate cost estimates out of story points, but that only works for specific teams in specific contexts. The estimates are not generalizable.
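To make the "specific teams, specific contexts" point concrete, here is a hedged sketch in Python, using notional sprint data, of a velocity-based forecast from story points. The forecast is only as good as the sprint history it resamples; run the same backlog against another team's history and you get a different answer.

```python
import random

# Notional sprint history (story points completed per sprint) for ONE team.
# These numbers are made up; the point is that they are team-specific.
team_a_velocity_history = [23, 31, 18, 27, 25, 22, 30, 19]

backlog_points = 400  # remaining story points in the backlog (notional)

def forecast_sprints(history, backlog, trials=10_000):
    """Monte Carlo forecast: resample past velocities until the backlog burns down."""
    results = []
    for _ in range(trials):
        remaining, sprints = backlog, 0
        while remaining > 0:
            remaining -= random.choice(history)
            sprints += 1
        results.append(sprints)
    results.sort()
    return results[int(0.5 * trials)], results[int(0.85 * trials)]

p50, p85 = forecast_sprints(team_a_velocity_history, backlog_points)
print(f"P50: {p50} sprints, P85: {p85} sprints")
# Swap in another team's history and the forecast changes: the estimate
# does not generalize beyond the team that produced the data.
```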

Data projects make the problem even worse. Hardware or software product design can potentially be costed if you fully specify the design and have cost data from similar past projects. But the whole point of an AI/ML project is to collect, organize, tag, and otherwise prepare the data necessary to create an algorithm that produces an outcome. You cannot know ahead of time how much data, and of what kind, is necessary, so you cannot cost it out. If you already knew you had all the necessary data, as we plausibly might in a hardware program, then the AI/ML project would basically be over. It is a paradox to cost what is unknown.

This is a general form of the problem of costing anything innovative. When innovation follows an incremental pattern, like the increasing size and speed of jet aircraft, past data may inform future costs fairly well. But cost accounting data can only tell you what happened, not what will be. When moving to new styles of innovation and new technological paradigms, costing becomes problematic. Indeed, the very methods of costing push defense program decisions toward legacy solutions, because those are the only ones with sufficient data to serve as a basis for cost estimation.
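As a toy illustration of why past data supports incremental change but not paradigm shifts, here is a sketch in Python with NumPy, using notional data points, of a log-linear cost estimating relationship fit to historical aircraft weights and costs. A modestly heavier aircraft falls near the data; a program whose cost is driven by something other than weight gets no useful answer from the fit, however precise it looks.

```python
import numpy as np

# Notional historical data: empty weight (klb) vs. unit cost ($M). Made up.
weights = np.array([20, 25, 32, 40, 55])
costs   = np.array([45, 58, 70, 88, 115])

# Log-linear CER: cost = a * weight^b, fit by least squares in log space.
b, log_a = np.polyfit(np.log(weights), np.log(costs), 1)
a = np.exp(log_a)

def cer(weight_klb):
    """Estimated unit cost ($M) from the fitted relationship."""
    return a * weight_klb ** b

# Incremental change: 45 klb sits inside the historical range, so the
# estimate is at least defensible.
print(f"Estimated cost at 45 klb: ${cer(45):.0f}M")

# Paradigm shift: if cost is driven by, say, software and autonomy rather
# than weight, the regression has nothing to say, no matter how well it
# fits the legacy fleet.
```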

Requiring a life-cycle cost estimate only compounds these problems of prediction. Moreover, what does a life-cycle cost estimate mean in the context of software when “software is never done”?

Ultimately, the DoD missed an opportunity with this cost estimating guide to address the methods that will be required of the cost community in the 21st century.

2 Comments

  1. You can’t really blame the writers of the guide for failing to tell people how to do something that nobody yet knows how to do. The tension between the desire to go fast and the desire to fill specific capability gaps puts software programs in an impossible position. Agile development explicitly avoids promising any particular capability. Until the Joint Staff and the COCOMs are OK with that, they will be unwilling to permit actual agile development.

    There was a CAPE seminar yesterday that featured a presentation about cost estimating under the new Software Pathway. It said all the right things about agile development, but it did not offer any actionable advice on the key practical issues, such as how to decide the content of the Minimum Viable Product, and how to predict spending after that point.

    • Right. But there needs to be some thought leadership there. Either life-cycle costing can do what its proponents say it can do, or it cannot. Since they can't describe how it works with modern management techniques, we're left in open water. Do commercial companies life-cycle cost digital developments? Because if not, I'd like an explanation for why they haven't crashed and burned as acquisition folklore would have us believe.