Why are defense programs so big?

Mother Nature does not like anything too big… Mother Nature does not limit the interactions between entities; it just limits the size of its units…

 

But there is another reason for man-made structures not to get too large. The notion of “economies of scale” – that companies save money when they become large, hence more efficient – is often, apparently, behind company expansions and mergers. It is prevalent in the collective consciousness without evidence for it; in fact, the evidence would suggest the opposite. Yet, for obvious reasons, people keep doing these mergers – they are not good for companies, they are good for Wall Street bonuses; a company getting larger is good for the CEO.

 

Well, I realized that as they become larger, companies appear to be more “efficient,” but they are also much more vulnerable to outside contingencies, those contingencies commonly known as “Black Swans” after a book of that name. All that under the illusion of more stability. Add the fact that when companies are large, they need to optimize so as to satisfy Wall Street analysts. Wall Street analysts (MBA types) will pressure companies to sell the [redundancy] and ditch insurance to raise their “earnings per share” and “improve their bottom line” – hence eventually contributing to their bankruptcy.

That was from Nassim Nicholas Taleb’s The Black Swan. I think a very similar proposition applies to defense programs. They are planned from the very beginning with “economies of scale” in mind. The bigger and more all-encompassing they are, the lower the presumed per-unit cost. In the cost-benefit analysis, it always works out that aggregating into larger programs creates cost savings.

Similar to how companies keep doing mergers and acquisitions because Wall Street rewards them for it, defense planners create massive one-to-rule-them-all programs because that is what opens the purse strings and wins approval from oversight agencies in OSD, the GAO, and so forth. It is easy to advocate the benefits of learning-by-doing and bulk-buy discounts on large production quantities, and to justify lower maintenance costs through single supply chains.

What is often neglected in the early analyses that authorize programs is the range of requirements and technologies that must be jammed into a multi-mission platform, which vastly increases complexity. The whole program becomes fragile to unforeseen contingencies that spiral costs and degrade performance. Perhaps worst of all, it creates large monopolist organizations that are intentionally siloed from one another to reduce overlapping responsibility.

In the commercial sector, entrepreneurs can speculate on the fragility of large firms, betting that their prices are too high or their products not innovative, and can bring productive resources into competition. These startups often address niche needs poorly served by monolithic providers, then scale through superior execution. This recognition and correction of errors is virtually absent in the Pentagon. It is nearly impossible. Who will authorize funds for a program to compete with an existing program? That’s “wasteful.” It’s better to bury your head in the sand and sink more good money into existing program interests.

5 Comments

  1. Are they big, outside of a few high-profile programs like JSF or carriers? I’m reminded of a perhaps apocryphal story I heard that the HMMWV PM solicited a few major seat belt suppliers to equip the vehicles with seat belts. Even equipping every single HMMWV in the Army and Marine Corps inventory was not a big enough buy to attract the suppliers. They were used to buys from the civilian vehicle sector that were orders of magnitude larger.

    There’s also a point that I think Sydney Freedberg made on Breaking Defense, related to the then-VCSA GEN McConville’s interest in replicating the FVL prototyping strategy (two major vendors covering most costs through IRAD) across the Army Modernization portfolio. He commented that this works for aerospace vendors, since they can diversify across the civilian sector and in some cases to other military applications. Other areas like ground combat vehicles and tactical vehicles are too small for something like that to work.

    • Thanks for writing, you make a good point. Do we know empirically whether programs have increased in relative size?

      Certainly the Manhattan Project was massive, and the Fleet Ballistic Missile consumed a budget of over $2 billion around 1960, which translates to something like $13-$14 billion today. Not even JSF came close to that on an annual R&D funding basis (though JSF lifecycle costs are far larger). But a major difference is that those pre-1960s programs often ran multiple parallel approaches under them (e.g., several fission paths and weapon-design paths on the Manhattan Project, and two or three contractors per component on FBM).

      Another issue is that DoD spending as % of GDP fell from over 10% down to around 3%, so a simple look at total force structure size doesn’t tell us much. We all know it’s fallen a lot. Air Force aircraft from 3.2K to 1.7K, ships from 500 to 280ish, Army aircraft from 9K helos to 3.5K. All that from end of Cold War until today, even though real dollar budgets are slightly higher.

      RAND/IDA studies find that then-year dollar costs for production units — quality adjusted (whatever that means) — are growing at perhaps twice the rate of inflation in aircraft and in ships. So there are huge startup costs and high unit costs, and the only way out of the death spiral for defense planners is to minimize the number of different programs and gain economies of scale through learning and rate. This thinking started with the TFX/F-111. The F-35 is simply today’s TFX in terms of size and ambition. I don’t think the culture in that regard has changed much since the 1960s/70s — we still have the same acquisition process. But it is my impression that in the 1940s/50s it was difficult to stop duplication and overlap, which indicates that programs must have been smaller; otherwise they would have been consolidated and no one would have complained. Unlike in the 1940s/50s, today it is rare to hear someone complain of too much duplication. In fact, you hear we have too little of it!

      After this non-empirical ramble, I’m not sure what to think. We see large companies in the economy; there have been large companies since the Progressive Era. Both companies and mega-programs churn over time. But I think the difference is that to get big, companies must openly compete against many other companies. The AoA process is not a real competition. And so the size of a program is not justified through experimental evidence generated and tested against various alternatives.

    • Also, I don’t think the reason no one competed on OMFV was that it wasn’t big enough. The same could be said for everything down to spares and repairs. The transaction costs of going after a program are tremendous. The barrier hurts small efforts much more than large ones (except tiny SBIR-type stuff).

  2. In aerospace, it’s also simply because there are only a few major integrators (Boeing, Lockheed, Northrop). Your options are limited when you want to build a state-of-the-art airplane. Furthermore, it’s manpower-intensive on the government side to act as the integrator and have components built by multiple vendors.

    • True, but the defense industry has probably consolidated and grown structures to match what the Pentagon (as monopsonist) is doing. But if a valid belief is that software-native companies (e.g., SpaceX) can outcompete hardware-native incumbents (e.g., ULA), then there must be some path for those new companies to emerge. I don’t think that can happen any way other than manpower intensity on the government side. Ultimately, moving toward competition as an error-detecting mechanism will lead to better outcomes, even if it looks “messy” to outsiders.
