The July 24 Defense News article, “David Norquist has one word for you: Analytics,” refers to Mr. Norquist’s emphasis on analytics at the Office of the Secretary of Defense as a means for assessing program performance and return on investment.
In Mr. Norquist’s written response to policy questions provided in advance by the Senate Armed Services Committee, he stated that “the Department should have an active analytical staff to provide it with relevant data and objective perspectives.”
"Data analytics drives budget and resource decisions, and the quality of those decisions depends upon analytics, but analytics depends upon the availability of data. The Department of Defense does not possess the repositories of data it needs to employ such analytics."
That was from "Let's Talk Analytics" by Walt Yates. He was admonishing defense officials for not following through on congressional requirements to collect better data and organize it in the cloud for use in analysis.
DepSecDef David Norquist's ostensible analytical goal is to measure performance and calculate return on investment. Clearly, this has been a leading objective of defense officials since at least the National Security Act Amendments of 1949, and was most vociferously expressed by Robert McNamara starting in 1961.
In order to support this analytical endeavor, the data needed to be created. The entire financial management system shifted from controlling organizations to controlling program outputs useful in such analyses. That's why Herbert Hoover, in a bit of marketing, called the program budget the "performance budget."
So we must ask ourselves: were previous attempts insincere? Was implementation incompetent? Or are there new enabling technologies that now make such a system feasible where it was previously infeasible?
The third view is probably the one most often held. Of course, such claims have been made in the past with the rise of the internet, personal computers, mainframes, and so forth. For example, Admiral Raborn of the Polaris missile program said in 1962 that the PERT cost management system worked through the "magic of computers." It was supposed to collect real-time data on cost and schedule by technical attribute.
And yet, with many billions spent over 60 years, honest analysts still can't do much with these performance management systems. Contractors game them such that problems aren't exposed until a program is 60 or 70 percent complete, by which time it is too late.
One of our problems, however, is that our ability to collect and synthesize data is always outstripped by the increasing complexity of relevant data and causal factors. Consider the cost data collection problem in the Department of Defense. We will leave aside the vastly more difficult benefits side of the equation, which due to incommensurables is basically impossible in most cases. Let's just talk about tracing where the money goes. Sounds simple enough.
Back in the 1960s, in order to support performance measurement and calculating returns on investment, the OSD systems analysts created a Work Breakdown Structure (WBS) of standard systems, subsystems, and components. This would provide data for cost-benefit-type analyses to support decision making. Is this configuration more cost-effective than that configuration?
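To make the idea concrete, here is a minimal sketch of such a structure as a data type: a tree whose costs roll up from components to subsystems to the total system. The element names and dollar figures are hypothetical, and the real MIL-STD-881 defines far more levels and rules than this toy shows.

```python
from dataclasses import dataclass, field

@dataclass
class WBSElement:
    """One node in a Work Breakdown Structure tree."""
    name: str
    direct_cost: float = 0.0  # $M booked directly to this element
    children: list = field(default_factory=list)

    def rolled_up_cost(self) -> float:
        """This element's cost plus everything beneath it."""
        return self.direct_cost + sum(c.rolled_up_cost() for c in self.children)

# Hypothetical Level 1 / Level 2 / Level 3 elements
radar = WBSElement("Fire Control Radar", direct_cost=12.0)
actuators = WBSElement("Flight Control Actuators", direct_cost=4.5)
avionics = WBSElement("Avionics", direct_cost=3.0, children=[radar])
airframe = WBSElement("Airframe", direct_cost=20.0, children=[actuators])
air_vehicle = WBSElement("Air Vehicle", children=[avionics, airframe])

print(f"Air Vehicle total: ${air_vehicle.rolled_up_cost():.1f}M")  # $39.5M
```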
Here are a couple of issues. First, the information is spread across the DOD and its contractors, who share in the integration. For contractors, any data collection has to be made a contractual requirement.
But contractors do not generally collect costs by hardware output on their own. Naturally, they control organizations and objects of expenditure. They have a hard enough time allocating costs to a specific contract, let alone to every switchboard and actuator. So the DOD puts it specifically on contract, and the defense contractors are paid for the expense of tracking costs they otherwise wouldn't.
One major hitch is that contractors are allowed to create their own Work Breakdown Structures according to their own needs. They then map those costs onto the standard reporting structure. And often, they allocate downward, only collecting costs at the highest levels. So those practices would have to stop.
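A toy illustration of why that matters, with entirely made-up numbers: when costs are booked only at the contract level and prorated downward by planned percentages, an overrun in one element never appears in the reported figures.

```python
# The contractor books $100M at the contract level only, then prorates
# it to WBS elements by planned shares. All numbers are hypothetical.
contract_total = 100.0  # $M collected at the top level

planned_shares = {"Airframe": 0.50, "Avionics": 0.30, "Propulsion": 0.20}
reported = {elem: contract_total * share for elem, share in planned_shares.items()}

# What actually happened on the shop floor (never collected or reported):
actuals = {"Airframe": 45.0, "Avionics": 40.0, "Propulsion": 15.0}

for elem in planned_shares:
    print(f"{elem}: reported ${reported[elem]:.0f}M, actual ${actuals[elem]:.0f}M")
# Avionics overran its plan by $10M, but the prorated report cannot show it.
```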
Another problem is that contractors work in non-standard technology development scenarios. It's hard to fit everything into the MIL-STD-881 WBS. Even if they wanted to, people's subjective interpretations of the definitions would naturally lead to diverging allocations.
Each contractor also has different labor and material categories, which have no standard definition. And employees change jobs or classifications constantly. The same engineer may, over the course of a day, do product design, support the manufacturing line, manage others, and write a contract proposal.
The labor and material categories then have significant common overhead costs charged back to them in different ratios. The spreading of overhead costs is often arbitrary, and does not reflect the true absorption of money costs by the component in question.
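A hypothetical sketch of the point: spread the same overhead pool over two components using two common allocation bases, and each component's "cost" swings with the accounting choice rather than with anything physical.

```python
overhead_pool = 50.0  # $M of common costs: facilities, management, IT, etc.

# Hypothetical direct costs for two components, in $M
components = {
    "Radar":    {"labor": 10.0, "material": 30.0},
    "Actuator": {"labor": 20.0, "material": 5.0},
}

for base in ("labor", "material"):
    total_base = sum(c[base] for c in components.values())
    print(f"Overhead allocated by direct {base}:")
    for name, c in components.items():
        allocated = overhead_pool * c[base] / total_base
        total = c["labor"] + c["material"] + allocated
        print(f"  {name}: ${total:.1f}M (of which ${allocated:.1f}M overhead)")
# The radar "costs" $56.7M or $82.9M depending on the allocation base.
```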
Then there is the well-known fact that the prime contractors the DOD deals with are doing less of the work themselves. They are less vertically integrated. They used to perform about 70 percent of a contract; now it is only about 30 percent. The bulk of the actual system is produced by subcontractors and integrated by the prime (and the prime often subcontracts to itself, to its different business units).
So this creates a huge problem: how far down the supply chain must standard data collection go to maintain some constant level of insight? Each of these cost data plans would have to be made a contractual requirement and priced. They would have to be coordinated across all of industry. No wonder the idealism of such perfect data collection was quickly dashed in the 1960s.
This kind of cost idealism perhaps made sense back when we were doing high-rate manufacturing using routine operations. The past was a good basis for the future. Most of the money costs went to touch labor, physical capital, and raw materials. Money costs didn't go much toward knowledge work, such as software, databases, product designs, company training, and so on.
And so while our cost collection systems are getting better, they are becoming increasingly irrelevant. Economic activity has shifted. It is more complex. No longer is production described by the deployment of raw materials and touch labor. It is described by knowledge work, which is non-routine and requires much more communication.
The communication generated by organizational culture depends not just on articulated information that can be quantified and sent along as data. It includes a great deal of tacit knowledge. This is demonstrated by the fact that returns to agglomeration are increasing, meaning it is more important than ever to move into cities and work with people directly.
To get to the point, the money cost spent on intangible assets like software does not describe the value being generated by that asset. A million lines of code does not mean that I have achieved $X million worth of functionality, or provided the consumer more value than ten thousand lines of code.
Earlier I said that we should put aside the problem of measuring benefits and focus on costs. Well, that focus was on the expenditure of money, or dollar outlays. But a cost is simply the thing given up for a benefit. It is the negative aspect of the choice of a product. So it makes no sense to talk about costs without talking about benefits.
Measuring benefits is impossible for most multifunctional outputs, due to the problem of incommensurables. I'll say that again: it is impossible, unless one option is strictly superior in every possible category. But then no tradeoff is really involved. So an objective analysis of data on particular attributes cannot indicate overall value. Value is inherently subjective. Money costs and subjective values may diverge significantly, such as under innovation.
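A minimal sketch of the incommensurables problem, with hypothetical attributes and scores: an alternative can be ranked objectively only if it strictly dominates on every attribute; otherwise any ranking smuggles in subjective weights.

```python
def strictly_dominates(a: dict, b: dict) -> bool:
    """True only if alternative `a` beats `b` on every single attribute."""
    return all(a[k] > b[k] for k in a)

# Hypothetical scores for two fighter designs
fighter_a = {"range_nm": 1200, "payload_lb": 18000, "sorties_per_day": 1.2}
fighter_b = {"range_nm": 900,  "payload_lb": 22000, "sorties_per_day": 1.5}

if strictly_dominates(fighter_a, fighter_b):
    print("A is objectively superior: no tradeoff involved")
elif strictly_dominates(fighter_b, fighter_a):
    print("B is objectively superior: no tradeoff involved")
else:
    # Range trades against payload and sortie rate. No further data
    # collection resolves the ranking without subjective weights.
    print("Incommensurable: the choice requires subjective value judgments")
```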
Even if we could measure all costs and benefits accurately and put them into some cloud database for analysis, and we had some objectively correct valuation scale, how would the analysis improve decisions? Measuring performance and calculating a return on investment does not appear to hold any operational meaning. Does it help us make the next choice about weapons systems?
If we knew that a DOD program, or a portfolio of programs, had a net present value of negative seven percent, what would that mean? When costs outweigh benefits, does that mean instant cancellation? But a negative NPV would mean the plan was not optimally selected from the outset; we ran into problems along the way. How else would we find ourselves at -7 percent? And that means our data analysis wasn't working, which will generate a change in the data for future analyses. How do those changes work? What do they mean for making the next program decision, for deciding where to allocate budgets?
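For concreteness, here is a sketch of the arithmetic presumably behind a figure like negative seven percent. The cash flows and the discount rate are entirely hypothetical, and the benefit stream assumes away the incommensurables problem by pretending value can be monetized.

```python
discount_rate = 0.05  # hypothetical
costs    = [100.0, 80.0, 60.0, 20.0, 20.0]    # $M outlays, years 0-4
benefits = [0.0,   22.0, 61.0, 100.0, 100.0]  # $M of (somehow monetized) value

pv_costs = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
pv_benefits = sum(b / (1 + discount_rate) ** t for t, b in enumerate(benefits))

npv = pv_benefits - pv_costs
print(f"NPV: ${npv:.1f}M ({npv / pv_costs:+.0%} of discounted cost)")
# Prints roughly -7%. A precise-looking number, but it says nothing about
# whether to cancel, restructure, or rebaseline the program.
```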
I honestly don’t know what defense officials would do if they had the data and analysis to measure “performance” and calculate “returns on investment.” Those things only make sense for businesses in the economy because they are in open competition with other firms, with the profit/loss figures representing survival against actual competitors.
But even then, times are changing. Tesla’s stock price is so high not because of fat profits, but because of what people expect in the future. And that means accounting figures like the balance sheet may diverge from stock valuations. Everyone agrees that Tesla’s stock price is subjectively derived. So we must ask, if the market economy can function using subjective valuations, why are defense officials working so hard to make all decisions under the guise of objectivity?