Congressional view of oversight and transparency needs an overhaul

I think the passage below provides some color as to why the Air Force’s top priority programs are being cut by congressional appropriators: from $302 million to $158 million for ABMS (-48%) and from $1.04 billion to $904 million for NGAD (-13%).

In accordance with 10 U.S.C. 2334(a), the Director, Cost Assessment and Program Evaluation (CAPE), is directed to conduct or approve Independent Cost Estimates (ICEs) in advance of certain milestones for all major defense acquisition programs and major subprograms. In addition, Department of Defense Instruction 5000.73 outlines the responsibilities of the Director, CAPE in providing estimates and expresses the policy of the Department for its use. It is noted that Instruction 5000.73 defines an ICE as “a full life-cycle cost estimate of a program and includes: All costs of development, procurement, military construction, operations and support, disposal, and trained manpower to operate, maintain, and support the program or subprogram upon full operational deployment, without regard to funding source or management control.”

The congressional defense committees view ICEs as a critical source of information about programs consuming billions of taxpayer dollars. For the purpose of recommending appropriations, the committees routinely review ICEs along with program requirements information and cost, schedule and performance data, to include acquisition decision memoranda and test and evaluation master plans. Timely and complete submission of all documents to the congressional defense committees is necessary to conduct oversight and should be done as a routine matter. The Deputy Secretary of Defense is directed to provide ICEs to the congressional defense committees for all major defense acquisition programs and major subprograms included in the President’s budget request and accompanying future years defense program, as well as those directed by the congressional defense committees.

That was the language accompanying the FY 2021 Defense Appropriations Bill. It’s just not clear to me why these waterfall planning artifacts should be the only fair measure of oversight. Roper clearly articulated that he wanted to take a 21st-century approach that mirrors commercial industry best practices, described by buzzwords like modularity, agile, DevSecOps, and so forth. Those processes produce tons of information in the form of improving capability demos, but they do not output lifecycle cost estimates, which, by the way, are only really feasible for mature technologies, like an incremental advance of the JSTARS platform where there are tons of existing data.

Ask a cost estimator how they’d approach ABMS, and they’d have a tough time giving you a straight answer. It seems that each of the 28 capability areas in ABMS will receive its own unique estimate. I think that number is greater than the total number of cost estimators in OSD CAPE!

Let’s presume that there are 28 cost analysts ready to work on ABMS. First, they’ll need a squared-away technical baseline. Then it takes 210 days to perform a cost estimate. Finally, you can feed that into the PPBE process. Put it all together and you’re looking at a 3-5 year delay!
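To make that back-of-the-envelope timeline explicit, here’s a minimal sketch. Only the 210-day estimate duration and the 28 capability areas come from the paragraph above; the baseline-development time, the PPBE lead time, and the number of available analysts are illustrative assumptions of mine.

```python
# Rough timeline sketch for producing ICEs across the ABMS capability areas.
# Only the 210-day estimate and the 28 capability areas come from the post;
# every other duration here is an illustrative assumption.

DAYS_PER_YEAR = 365

capability_areas = 28                 # ABMS capability areas, each needing its own estimate
baseline_days = 180                   # assumed time to build a squared-away technical baseline
estimate_days = 210                   # time to perform one cost estimate (figure cited above)
ppbe_lead_days = 2 * DAYS_PER_YEAR    # assumed lead time before the PPBE cycle funds the result

# Best case: enough analysts to run all capability areas in parallel.
parallel_days = baseline_days + estimate_days + ppbe_lead_days

# More realistic case: estimates queue up behind a smaller pool of analysts.
analysts_available = 10               # assumed pool, smaller than the number of capability areas
waves = -(-capability_areas // analysts_available)   # ceiling division
queued_days = baseline_days + waves * estimate_days + ppbe_lead_days

print(f"All in parallel: ~{parallel_days / DAYS_PER_YEAR:.1f} years")
print(f"Queued:          ~{queued_days / DAYS_PER_YEAR:.1f} years")
```

Under those assumptions the totals land squarely inside the 3-5 year window, before any rework or re-baselining pushes them further out.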

My tagline is Insight and Investigate rather than Predict and Control. Prediction takes a long time, and then often turns out to be riddled with error. A better way is to provide permission to innovate while requiring reports on spending allocation and frequent demonstrations of advancing capabilities. All the while, build up a more detailed fielding plan and cost estimate, not at the very start but when the time is right and data are available to support such decisions. As a smart man told me, sequence matters.

Anyway, here’s the appropriators’ breakdown of the ABMS request. It’s funny how these massive program-by-program adjustments give the illusion that there’s some exquisite technical reasoning behind them.

2 Comments

  1. Thanks for the sympathy. I give my cost estimator free rein to go to any meeting and collect any insight he needs to build the most accurate estimate… yet I also remind him that his updates can be program-killers if not messaged carefully and at the right time. The System doesn’t reward him for getting smarter 🙂

    • Yeah definitely. Cost estimators are given an impossible task: they only have historical cost data but are expected to cost out *new* ways of doing things *before* those things have been tinkered with and developed.

      Oh, well just use weight and other measures in your multiple regression! But that’s nonsense too! One summer at the Pentagon I had some interns hand-jam all the PDF contractor cost data reports from every fighter contract, from the very first F-4 Phantom II starting in 1955 through the JSF. Could that be used to estimate the cost of a new fighter built with the new methods Roper describes in There Is No Spoon?

      For example, there are variables for 5th generation and percent advanced materials. In IDA’s results using SAR figures (we had contractor cost data at the WBS level), those two variables amounted to a 41% premium on the F-35A’s T1 cost compared to what it would have been as a 4th-gen aircraft with no composites. Perhaps that’s true, and leaders can debate whether 5th-gen capabilities are worth that. But then what’s the cost impact of 5th gen *and* new development methods like digital engineering, or advanced manufacturing, or whatever else is happening?
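      To make the shape of that kind of model concrete, here is a minimal sketch of a log-linear parametric CER with dummy variables for generation and advanced-materials share. The functional form and every coefficient are hypothetical placeholders chosen so the premium lands near the 41% figure above; they are not IDA’s specification or estimated values.

```python
import math

# Hypothetical log-linear cost estimating relationship (CER):
#   T1 = a * weight^b * exp(c_gen5 * is_gen5 + c_mat * pct_adv_materials)
# All coefficients below are illustrative placeholders, not IDA's estimates.
a, b = 0.12, 0.85        # assumed scale constant and weight exponent
c_gen5 = 0.24            # assumed premium for a 5th-generation design
c_mat = 0.30             # assumed premium per unit fraction of advanced materials

def t1_cost(weight_lb: float, is_gen5: int, pct_adv_materials: float) -> float:
    """First-unit (T1) cost from the hypothetical parametric CER."""
    return a * weight_lb ** b * math.exp(c_gen5 * is_gen5 + c_mat * pct_adv_materials)

# Premium of a 5th-gen airframe with 35% advanced materials over a 4th-gen,
# all-metal airframe of the same (assumed) weight. The weight cancels in the ratio.
baseline = t1_cost(29_000, is_gen5=0, pct_adv_materials=0.0)
gen5 = t1_cost(29_000, is_gen5=1, pct_adv_materials=0.35)
print(f"T1 premium: {gen5 / baseline - 1:.0%}")   # ~41% with these placeholder values
```

      The point is only that once such a regression is fit on historical programs, something genuinely new, like digital engineering, has no column in the data from which to estimate a coefficient.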

      Roper claims that the T1 cost with digital engineering is where the T100 cost would’ve been with old methods. That implies a savings of two-thirds! How is that validated? Non-stealthy T-7A? And then a lot of the value comes from being software-native, and so how do you measure that? Cost models correlate product attributes with cost, not with value. More weight = more cost, not more value. I want an aircraft of infinitesimal size and maximum lethality. So the number of different attributes of systems gets reduced to these ridiculous parameters because we are limited in data and degrees of freedom.
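      To spell out the arithmetic behind that two-thirds figure: under a standard unit-theory learning curve, the cost of unit n is T1 times n^b with b = log2(slope). The ~85% slope below is an illustrative assumption for an airframe program, not a figure from Roper.

```python
import math

# Unit-theory learning curve: cost of unit n is T1 * n**b, where b = log2(slope).
# The 85% slope is an illustrative assumption for a typical airframe program.
slope = 0.85
b = math.log2(slope)

t100_over_t1 = 100 ** b            # T100 as a fraction of T1 on the legacy curve
print(f"T100 / T1 on the old curve: {t100_over_t1:.2f}")     # ~0.34

# If digital engineering really delivers the first unit at the legacy T100 level,
# the implied saving on the first unit is roughly:
print(f"Implied T1 saving: {1 - t100_over_t1:.0%}")           # ~66%, about two-thirds
```

      So the two-thirds saving follows directly from the claim itself, which is exactly why the validation question matters.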

      Anyway, the point of this long story is that cost models abstract away from reality in most cases and can only describe what *was*, never what *can be*. So what’s the role of the cost estimator? His or her job is made much easier when projects attempt just "one miracle at a time." That has the nice effect of helping isolate causality to the degree possible. And if there is continuous experimentation on subsystems and integration, that vastly increases the relevant data for making good inferences about fielding costs.

      I dunno, just spitballing some thoughts here. But the main idea is that cost predictions are more useful for FRP and sustainment when you have well-organized Dev/LRIP data, and they are harmful for Dev/LRIP if they limit speed and experimentation. Instead, the cost analyst’s role there is to collect the ad hoc data that exists and curate it for when it is needed to make fielding analyses.
