Podcast: Andrew Hunter on software-defined, hardware-based adaptable systems

Andrew Hunter, director of the defense-industrial group at CSIS, joined me on the Acquisition Talk podcast to discuss a set of papers on adaptable systems, or systems that are based on hardware but defined by software. One of the defining features of adaptable systems is that they can be upgraded on shorter time frames than the usual 5+ year block upgrade cycle that traditional platforms have gone through. Yet the acquisition system is strained to accommodate such agile developments. We discuss challenges including budget flexibility, the requirements process, earned value management, cost and pricing data, the program office structure, and more.

The discussion also touches on acquisition reform, including the breakup of the Acquisition, Technology, & Logistics office into two undersecretaries — one for Research & Engineering and another for Acquisition & Sustainment. Andrew discusses why he was skeptical of the reform early on, and how he thinks it can work out. One issue is the messaging about culture. USD(A&S) cannot be thought of as simply cost-conscious because it controls about 70% of technology insertion. Another issue is lines of authority, such as whether USD(R&E) will take over the (optional) Milestone A decision to initiate prototyping. Yet both undersecretaries have been losing control as many decisions are delegated to the services.

The podcast finishes up with a discussion of the industrial base. We talk about innovation hubs and whether they are lowering barriers to entry, the decline and then stabilization of new firm entry into defense, and how small businesses face difficulties graduating into larger firms. Andrew says industrial consolidation hasn’t reached new levels of concern, but he will keep an eye out for further developments.

Podcast annotations.

Here is Andrew on why he doesn’t think the “fail fast” approach to innovation can work in the government:

Government isn’t wired that way. I noted with interest some months ago Mike Griffin, the head of Research & Engineering, decided to cancel the Kill Vehicle program from the Missile Defense Agency. And at the time I said, I see this as a sign of him being committed to the fail fast idea. He identified a failure and he canceled the program. He didn’t say, “let’s double down on it, let’s try a new technical approach.”


I’ve read more recently that Congress is now investigating why that program was terminated for convenience rather than terminated for cause. They want to identify where in the contracting process the failure occurred, whose fault it was, and potentially claw back some of the funding that was allocated to the program and expended in order to punish the failure. Now that’s not fail fast.

It’s especially hard for organizations to terminate large or long-standing programs. Probably less so for small programs where expectations about cancellation were created early. This makes me think of Dan Ward’s advice from his excellent book F.I.R.E., that you should cancel any project with cost growth above 15 percent. Now, some may find this crazy and I won’t adequately explain it (so read the book!), but Ward gives an example of how failing fast worked well for NASA when they were implementing the Faster, Better, Cheaper strategy.

Here’s the abridged story. Despite great successes — reducing science spacecraft schedule by 40 percent and cost by 66 percent while 4x-ing the number of launches — a string of four failures in 1999 led to an IG report that recommended formalizing the process. In 2001 NASA abandoned the FBC process rather than complicate it. As NASA administrator Dan Goldin said, a 100 percent success rate would have indicated NASA wasn’t trying hard enough. Everything old is new again.

In a similar vein, here is Andrew discussing some of the troubles with ratcheting up the speed of acquisition programs trying to take advantage of the adaptable systems approach:

I think the system starts to break when you get to upgrade cycles of two years or less… In many ways it’s because our budget process says, if you want to start a new program you have to program funds to do that. How does one program funds? Well you go into the budget process and request funds. But the budget for FY19 is already approved. FY20 is already over on the Hill. Oh. If I want to do something today I’m looking at the FY21 budget.


So it’s 2019, I’m looking at the FY21 budget, and that is the earliest — the program is screaming. In all likelihood, they’re going to say, “I’m glad you showed up today to talk to me about your interest in the 2021 budget, but let’s think 2023.” That’s realistically when you might hope to get into the POM.

The POM is the Program Objective Memorandum, the official process within the Planning, Programming, Budgeting, and Execution (PPBE) system by which military plans are connected to financial budgets through program elements. This is usually a rigorous analytical exercise involving various parts of the bureaucracy to adjudicate competing desires. Yet the justification process can take many times longer than the development of a new software feature or an iterative upgrade. As the DOD Manager’s Guide to Technology Transition describes:

The PM [Program Management] community cannot always predict the pace of innovation two years in advance, and funding may not be available for fast-moving S&T [Science & Technology] projects that are ready for transition. Therefore, a desirable S&T project may stall for 18 to 24 months, awaiting funding. This gap is sometimes called the “valley of death.”

During this time, the project loses momentum. Members of the team move onto other projects. Outsiders levy new technical requirements. Management reports and controls begin to multiply.
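The budget arithmetic Andrew walks through can be sketched as a toy calculation. This is purely illustrative: the function names and the two-year "locked" window are my assumptions for the sketch, not an official PPBE rule.

```python
# Toy sketch of the PPBE lead time described above: a need identified in
# year Y usually cannot be programmed before roughly Y + 2, because the
# next two budget years are already approved or on the Hill.
# (Function names and the two-year figure are illustrative assumptions.)

def earliest_pom_year(current_year, locked_years=2):
    """Earliest fiscal year a new start could realistically enter the POM."""
    return current_year + locked_years

def funding_gap_months(current_year, locked_years=2):
    """Rough wait, in months, before newly programmed funds could execute."""
    return 12 * locked_years

print(earliest_pom_year(2019))   # 2021 -- FY21 at the earliest
print(funding_gap_months(2019))  # 24 -- consistent with the 18-24 month "valley of death"
```

And as Andrew notes, in practice the realistic answer is often two more years beyond that (FY23 for a need identified in 2019).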

I agree wholeheartedly that adaptable systems are enabled by budget flexibility, which Andrew says is “the ability to allocate dollars in the year of execution for a newly identified capability or need.” It is hard to overstate how much the budget process impacts innovation. It not only prescribes what technologies will be pursued years into the future, it prescribes a linear progression from research to prototyping, then development and testing, and finally procurement and sustainment, without the necessary feedback loops.

I asked about mission-funded budget accounts as one remedy, where funding is tied to general mission statements (e.g., Intelligence, Surveillance, and Reconnaissance in support of a brigade) rather than specific programs (e.g., a camera of particular specification on a predefined platform). With his experience as a staffer on Capitol Hill, Andrew pushes back a little and describes where a happy medium might lie. Listen to the whole thing.

Another topic we discussed was the requirements process. I asked about how we can better implement DevOps in the Department, where systems are simultaneously in development and operations. In my mind, DevOps seems to necessitate delegated authority to the lower levels to make decisions based on their commander’s intent without getting the requirements revalidated before work can continue. Here is part of Andrew’s answer describing the expedited process for rapidly validating a requirement, the Urgent Operational Needs Statement (UONS):

I talked about how the UONS process is relatively streamlined, we’re able to get decisions in a matter of weeks. Well, my boss Dr. [Ashton] Carter, when I briefed him on UONS, would always look at me and say, “how many four-star generals does it take to figure out we need to buy this widget?”


Even the UONS process, which is streamlined, you have the four-star commander in Afghanistan who would approve it, it would go to CENTCOM and the four-star would approve it, it would go to the joint staff and the four-star vice chairman of the joint chiefs would ultimately validate it with recommendations from four three-stars who are giving him the thumbs up or down, and then it’s validated. And that’s the streamlined process.


To your point: yes, delegation is absolutely key. I think there are reasonable ways to do it.

Find out Andrew’s recommendations for how that can be done, informed in part by his time having led the Joint Rapid Acquisition Cell. The topic reminded me of what Air Force Acquisition Executive Dr. William Roper said about delegation. You can’t have a process where the first time an officer is given authority to make decisions and learn from mistakes is when they reach an O-6 (Colonel or Navy Captain).

The theory is push the decision to the lowest level you can within acceptable risk. You don’t want to put people in a position to make decisions they haven’t been trained or equipped to make, but the way acquisition worked before the reform, you had to become a colonel before you had a chance to make your first big mistake. That’s not fair. That’s a disservice to that senior materiel leader who didn’t get a chance to fail at lower levels to become a better strategist in how to approach risk.

This delegation — if it is to be stable or antifragile — might take on what could be called a fractal structure, where there is some regularity in the reduction of decision-making scope as you go down the hierarchy. The parameters set by each official and his or her subordinates, however, must be locally defined based on the context. Such delegation implies an adaptable systems framework of small, incremental steps from which decisions can be partitioned to lower levels. That is very different from the current telic method, where many project outcomes are decided in advance and through consensus, while subordinates under an O-6 are usually expected to execute the standing orders inherent in the plan.

There are tons of other great parts from my discussion with Andrew Hunter which will not be excerpted and riffed on here. But I’ll provide a related quote near to my heart on Earned Value Management, a control system required of contractors in development efforts that might not be wholly compatible with adaptable systems acquisition:

What EVMS does is take as a given that you’ve got a good plan. Your baseline is the correct answer to the problem, whatever the problem is — and is, by the way, an unchangingly correct answer to the problem. What you are measuring is compliance or success in executing the plan, and that then becomes your measure of success in the program. In the real world, where the plan wasn’t perfect to begin with, and changing reality means that it becomes an increasingly less accurate measure over time, your metric isn’t telling you truly what the right outcome is.
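Andrew's point can be made concrete with the standard earned value formulas (cost and schedule variance, and the CPI/SPI performance indices). The sketch below uses invented numbers; the function name and values are my illustration, not anything from the podcast. The catch he describes is that every term is measured against the baseline plan, so if the baseline is stale, "behind schedule" may only mean the plan changed.

```python
# A minimal sketch of the standard earned value management metrics.
# All inputs are measured against the baseline plan:
#   pv: planned value (baseline budget for work scheduled to date)
#   ev: earned value (baseline budget for work actually completed)
#   ac: actual cost of the work performed
# (The function name and the example numbers are illustrative assumptions.)

def evm_metrics(pv, ev, ac):
    """Return the standard EVM variances and indices."""
    return {
        "cost_variance": ev - ac,       # negative => over cost
        "schedule_variance": ev - pv,   # negative => behind the baseline
        "cpi": ev / ac,                 # < 1.0 => over cost
        "spi": ev / pv,                 # < 1.0 => behind the baseline
    }

# Example: a program that completed less work than planned, at higher cost.
m = evm_metrics(pv=100.0, ev=80.0, ac=110.0)
print(m["cost_variance"], m["schedule_variance"])  # -30.0 -20.0
```

Note that every metric is only as good as the baseline: if the plan itself is wrong or outdated, a "healthy" CPI/SPI tells you about compliance with the plan, not about mission outcomes.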

I’d like to thank Andrew for joining me on the Acquisition Talk podcast. Be sure to check out his reports available at the CSIS website, including those on adaptable systems here and here and the excellent report on small business graduation. He also has hosted some episodes of the CSIS podcast, including one with Dr. William Roper and another on Small Businesses. Andrew is frequently sought out for comment at major outlets, as well as at congressional hearings including Shortening the Defense Acquisition Cycle, Contracting and the Industrial Base, and U.S. Ground Force Capability and Modernization Challenges in Eastern Europe. Here is a good video of Andrew discussing Artificial Intelligence. Be sure to give Andrew and his colleagues their due by spreading the terms and concepts surrounding adaptable systems!
