I showed that for 10 projects for which a cost risk analysis had been conducted, the actual cost was greater than the 90th percentile for 8 of the 10. If cost risk analysis were realistic, the actual cost should exceed the 90th-percentile estimate only 10% of the time, so what we see in practice is the opposite of what we should expect. I have seen the same phenomenon occur in schedule risk analysis, but there is even less of a track record there than for cost.
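To see just how damning 8 busts out of 10 is, here is a quick back-of-the-envelope check. It assumes the ten projects are independent (a simplification) and that a calibrated 90th percentile is exceeded with 10% probability per project:

```python
from math import comb

n, k, p = 10, 8, 0.10  # 10 projects, 8 exceedances, 10% expected exceedance rate

# P(X >= 8) under a binomial model, if the 90th percentile were calibrated
p_tail = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
print(f"P(8 or more busts) = {p_tail:.2e}")  # ≈ 3.7e-07
```

Under those assumptions, 8 or more exceedances is roughly a one-in-2.7-million event. This is not bad luck; the percentiles themselves are miscalibrated.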
… The problem has been going on for a long time, and is not getting any better. Norm Augustine, who wrote about cost growth and schedule delays for defense and aerospace projects, found the average cost growth for a development project was 50% and the average schedule delay was 33%. If you look at the NASA/DoD column, you can see little has changed in the last 40 years…
That was from the excellent Christian Smart, “Are Analysts Complicit in Cost and Schedule Growth?” He has a new book coming out Nov 3, 2020, which you can preorder on Amazon or Barnes and Noble.
Government agencies demand cost estimates before a program starts. But those estimates have very little to do with reality beyond an order-of-magnitude ballpark. Estimates have been consistently bad for decades. And yet the DoD clings to them institutionally because they give the appearance of knowledge and risk reduction. The problem is that very few people besides Christian Smart, Andy Prince, Bent Flyvbjerg, and some others actually track those estimates over time. Here is Christian Smart again:
An analyst who conducts risk analysis for a project can often claim innocence by stating that they provided a risk analysis that was consistent with the project assumptions. If the project manager is optimistic, they claim it is not their fault. However, as we need to help project managers make better decisions, it is also incumbent on us to measure our track record over time (which we do not systematically do), widen our risk ranges to be more realistic, and use this information to defend our position with project managers.
I would claim that providing even fatter risk ranges for cost estimates doesn’t do much good. OK, a program may cost anywhere between $1 billion and $10 billion, with the 50th percentile perhaps at $3 billion. What do you do with that? You have to operate with one number. Should you pad the budget to an even fatter estimate? Of course the program and contractor will eat through any budgeted amount. Why leave funds on the table, especially when they are publicly available figures?
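To make that hypothetical range concrete, here is a sketch that fits a lognormal distribution to the illustrative figures above (treating $1 billion as roughly the 10th percentile and $3 billion as the median; the lognormal form is my assumption, not anything from the original analysis):

```python
from math import log, exp

z10 = -1.2815515655446004      # standard-normal z-score at the 10th percentile
mu = log(3.0)                  # log-median cost, in $B (median fixes mu)
sigma = log(1.0 / 3.0) / z10   # solve exp(mu + z10*sigma) = 1 for sigma

# The lognormal is symmetric in log space, so P90/P50 = P50/P10
p90 = exp(mu - z10 * sigma)
print(f"90th percentile ≈ ${p90:.1f}B")  # ≈ $9B
```

A distribution consistent with those two points puts the 90th percentile near $9 billion, close to an order of magnitude above the 10th. A range that wide is honest, but it gives a budgeter almost nothing to act on.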
Even if the self-fulfilling prophecy of higher costs were not a real thing, you’d need more administrative structures in place. Say the DoD did what it wanted with Better Buying Power: budget to a risk-adjusted will-cost estimate, but hold the contractors’ feet to the fire on a lower should-cost estimate. Systematically, too much funding will be budgeted relative to the need. And so the expectation must be that vast quantities of funds expire. Or, that there will be some black swans of extreme cost growth that eat through those reserves. But that requires major Above Threshold Reprogrammings with prior congressional approval.
I believe NASA had something of a reserve account into which a portion of risk-adjusted budget estimates went, so funds could flex to whichever program needed them. That seems to make sense. But it also makes it easier to funnel more money into bad programs that should have been cut early and rethought. No sense throwing good money after bad.
The real problem here is that cost estimates are (usually) performed for the entire development and production program. That’s really absurd. After all, budgets are reformulated every year, and most programs see extreme budget volatility relative to plan, for political if not programmatic reasons. If the DoD really wanted lifecycle cost estimates, why not do what Australia does: set aside the entire program funding up front and skip the annual machinations?
This whole construct of cost estimation needs to be rethought for a paradigm of iterative, incremental, agile programs. Estimators are complicit in cost and schedule growth not by failing to make risk-adjusted estimates fat enough, but because they operate under the pretense that they actually know what will happen years into the future on a very complex technical program.