DoD cannot wait for zero risk before scaling important technologies

One of the central ideas of defense acquisition is that a program of record shouldn't begin until requirements and the underlying technology are fully understood. Cost-schedule-technical baselines are established before the first operationally relevant prototype (Milestone A), or at the very latest before full-scale development begins (Milestone B).

Yet history shows that engineering trial-and-error solves scientific problems just as often as scientific discoveries enable engineering solutions. The steam engine, for example, was invented well before the science of thermodynamics emerged:

The steam engine was invented before the science of thermodynamics, and did not depend on it. Thermodynamics is needed to optimize an engine, but not to invent it. To invent it, however, did depend on at least some scientific understanding of atmospheric pressure, which had been demonstrated as early as the 1600s, in particular by Denis Papin.

There is something about anticipation, trial-and-error, and risk tolerance that has been an important source of progress. This process is antithetical to the heavily bureaucratic, overly linear processes mandated by the Department of Defense. But it wasn’t always that way.

Harvard researchers Merton Peck and Frederic Scherer found that the Air Force's Atlas ICBM program, led by General Bernard Schriever, had one of its critical technical problems solved by the Army's Jupiter program. They showed how the Army's engineering method used trial-and-error processes that systematically searched for information without requiring a complete physical understanding in advance:

There remained, as General Schriever noted, one critical problem—re-entry of the warhead into the atmosphere—about which little physical knowledge existed. When ballistic missile warheads re-enter the atmosphere at speeds up to 20,000 mph, shock waves with temperatures of 15,000F or more are generated. But just how these shock waves were formed, how they behaved in contact with various physical shapes, and how the tremendous temperatures would react with materials in a shock wave environment were all unknown.

In this respect Atlas was a “scientific” project. Even then, however, it turned out that the re-entry problem was resolved by [engineering] activities before a complete [scientific] understanding existed. The Jupiter IRBM nose cone problem was solved largely in an empirical manner. It was known from theoretical calculations that the nose cone had to resist certain general heats and shock waves. Guided by test data on rocket throat temperatures, one material after another and one shape after another were tried in the exhaust blast of a rocket engine until the most successful combination was found.

This nose cone illustration reflects a broader set of technical problems typifying advanced weapons developments. Fundamental scientific knowledge about the environments within which new aircraft, guided missiles, and space vehicles must operate has frequently been lacking during many developments of the 1950-1960 era. For example, science has yet to provide sufficient understanding of how objects behave in various supersonic and hypersonic environments to predict fully the problems which will be encountered in flight. All too often, these problems do not become apparent until a prototype vehicle is test-flown unsuccessfully. Then isolating the problem requires lengthy trial-and-error testing in which scientific theory may be of little assistance.

DoD needs to go back to the future and relearn some of the wisdom that existed before the managerial class took over the reins in the 1960s and 1970s.

1 Comment

  1. I agree with everything you say, Eric — but none of it has anything to do with JCIDS or milestones or programs of record. DoD is already set up to fund science projects and prototyping efforts and technology demonstrations, through RDT&E Program Elements. And they do — just not nearly enough, and not for long enough to bridge the Valley of Death effectively. My take is that their problem is that they apply the wrong metric for their overall prototyping portfolio. You shouldn’t care what fraction of prototypes eventually turn into programs of record; you should care what fraction of programs of record are able to leverage an existing prototype. They should just accept that most prototyping efforts won’t ever be used operationally; they’re so much cheaper than programs that it doesn’t matter.

    When you go to the hardware store, you don’t care what fraction of items on the shelves will someday be useful to you — you care whether the one you need is on the shelf. The smart hardware store owner knows this, and doesn’t worry about sell-through.
