GMU Playbook: Make room for opportunities in program requirements

George Mason University’s Center for Government Contracting is actively gathering feedback on its recently released draft playbook. Over the next several weeks, we will provide deep dives into each play and discuss how we are incorporating critical feedback.

Read the Draft Playbook, May 13, 2021

Background

There’s a story out there that the “requirements pull” paradigm of defense acquisition is out of date. Back in the 1950s, DoD was the major source of R&D investment. It controlled its own destiny, and thus tried to “pull” technology along to meet articulated needs. Intelligence would define enemy threats, operations would identify capability gaps and define requirements to close them, programmers would resource the plan, and the acquisition community at the end of this chain would execute the baseline. This model relied on the assumption that the future can be predicted with precision in terms of military and technical feasibility.

As the story goes, the world DoD lives in has changed. Commercial and international R&D investment dwarfs the government’s own, and the cycle time to releasing new technologies is inside the “OODA loop” of DoD’s decision lead time. As a result, DoD must be able to react to new opportunities created by the larger technology ecosystem, and experiment with them to form new operational concepts. DoD no longer has a monopoly on frontier technology, and if it fails to integrate tech equally available to our adversaries, DoD can only fall further behind the state of the art.

As RAND analyst Robert Perry already understood in 1967, the problem with such stories “is that they tend to ignore the reactive influence of innovative technology on requirements, and of requirements on the handling of innovations.” His colleague Thomas Glennan added, “This meeting of technology-push and requirements-pull efforts within a single organization makes for considerable difficulties in setting up organizational objectives.”

Application

The first draft play of George Mason’s acquisition playbook sought to strike this balance. Requirements are still part of regulation and law, but room can be made for opportunities. The first aspect of the play is focusing on statements of outcomes rather than particular features (such as spending four days to decide whether to require a 110-volt or 220-volt power supply). Specific features or user stories should be planned for, but that’s the responsibility of the product owner in execution. These features on the living roadmap should map upward to program requirements, but they are not the same thing.

The second aspect is a method of iterating on requirements when needed. Users, testers, and requirements officers should be included in deciding which features make it to the top of the queue for the next increment of work. Moreover, when new opportunities from the labs, market research, or otherwise become available, this continuing relationship provides an avenue to updating program-level requirements if needed.

What We Learned

The Center invited interview participants back for a roundtable to discuss the draft play for requirements. Fair criticisms were well received. For example, one participant pointed out that there’s little new to the recommendation. The Joint Capabilities Integration and Development System, or JCIDS, was created in 2003 to focus requirements on descriptions of high-level capability rather than extensive lists of features. What went wrong then, and what would be different now?

Another problem is with system testing. How do you test iterative requirements? The test community is required by law to determine whether a system meets its requirements. That means spelling out all the ways it will be tested, in advance of development, in a Test and Evaluation Master Plan. What does a continuous testing strategy look like?

A third problem is that requirements – even statements of outcomes – can conflict with each other. Lethality, reliability, and survivability are all great, but if you maximize all of them, you will likely sacrifice mobility and range. You may then not have enough money left to pursue the networking requirement. Even with just a few non-tradeable requirements, the solution space may still be described by “unobtainium.” A partial set of requirements may be better than asking for everything.

Other recommendations roundtable participants shared can be bucketed into two themes, outcomes-based requirements and stakeholder communication, discussed below:

A Path Forward

Formalize two-tiered requirements sets. Kessel Run shared with us their requirements model that distinguishes between strategic “guardian” level requirements and tactical “user centered” design. The strategic requirements are what would be found in a Capability Development Document, Capability Needs Statement, or similar documents. These high-level outcomes decompose into tactical steps found in a product roadmap that get iterated upon.
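The two-tiered construct can be sketched as a simple data model: strategic outcome statements stay stable at the program level, while tactical roadmap features hang off them and get reprioritized each increment. The class names, fields, and the IVAS-style example below are illustrative assumptions, not Kessel Run’s actual implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RoadmapFeature:
    """Tactical item on the living roadmap (hypothetical structure).
    Priority is set with users, testers, and requirements officers."""
    description: str
    priority: int

@dataclass
class StrategicRequirement:
    """High-level outcome statement, e.g. from a Capability
    Development Document. Features map upward to it but are not
    the requirement itself."""
    outcome: str
    features: List[RoadmapFeature] = field(default_factory=list)

    def next_increment(self, n: int = 3) -> List[RoadmapFeature]:
        """Top-priority features queued for the next increment of work."""
        return sorted(self.features, key=lambda f: f.priority)[:n]

# Illustrative example drawn from the article's IVAS discussion
req = StrategicRequirement(
    "Do not impede the soldier's regular combat movements")
req.features.append(
    RoadmapFeature("Allow rifle cheek weld while wearing headset", priority=1))
req.features.append(
    RoadmapFeature("Survive army crawl on rough terrain", priority=2))
req.features.append(
    RoadmapFeature("Reduce headset snag points on vegetation", priority=3))

for feature in req.next_increment(n=2):
    print(feature.description)
```

The point of the structure is the separation of tiers: iterating on `features` never rewrites `outcome`, which only changes when early development and test feedback show the program-level requirement itself needs updating.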

This basic construct works not just for the DoD Software Acquisition or Middle Tier of Acquisition pathways, but for all federal acquisition programs. Roundtable participants were convinced traditional programs could also benefit from iterative requirements.

Writing good requirements. The Army IVAS program for augmented reality headsets had requirements for shock resistance, water resistance, and ruggedness, but did not consider the need for soldiers to brace a rifle against their cheek, or army crawl on rough terrain. Luckily, through iterative development the Army caught the issues and closed them. Of course, the Army could have tried to predict all of these features and made them into a long specification list. An alternative would be to write a general outcome-oriented requirement like “Do not impede the soldier’s regular combat movements.”

Prioritization is a team effort. This outcome-oriented requirement to not impede soldier movements will likely conflict with other requirements such as those for endurance, compute, and reliability. The implications are not always clear up front, so conversations about priorities must be forced early. Discuss them with a cross-functional team that includes user perspectives, and get agreement on which tactical-level features come first in the next increment of work. The process of development, test, and feedback in the early iterations will make it clear whether program-level requirements need updating.

Use collaborative tools. While a good outcome-oriented requirement provides flexibility to developers, they may not fully know what a soldier’s combat movements entail. Perhaps the army crawl never came to mind. That’s why early and regular testing with users is important. Rather than going up and down hierarchical chains, direct communication with counterparts in the combatant commands, materiel commands, requirements offices, and other organizations is preferred.

Clearly define a regular cadence of meetings and interactions. Provide access to shared folders of important information and status. Use enterprise tools like Office 365 and Slack to communicate horizontally with stakeholders. There’s no replacement, however, for side-by-side testing with users.

Create a map of the stakeholders. This collaborative program process requires formalizing the stakeholders and their roles in the process. Major program decisions may involve numerous functional and leadership positions. It may seem that each of them is a potential veto point and that the program must therefore accommodate all of their additional requirements. These demands should be resisted if program officials and users alike do not prioritize them. Justify these choices and be prepared to defend them to the decision authority that does represent a real veto point.

Want to participate?

Our request to you: Provide us one example of prescriptive requirements and how they could be refactored into a single-sentence statement of outcomes.

Please contact Senior Fellow Eric Lofgren at elofgren@gmu.edu. All feedback is welcome!
