Capt. Mark Vandroff (ret.) on people, process, and delegation

I was pleased to have Mark Vandroff speak with me on the Acquisition Talk podcast. He served 30 years in the US Navy before retiring as a captain, with his most recent positions including Program Manager of the DDG-51 shipbuilding program (2012-2017) and Commander of the Carderock Division of the Naval Surface Warfare Center (2017-2019).

In the episode, we discuss why scientists and engineers should read the classics, how to manage large organizations, why we must start with the end in mind, whether or not the Department of Defense is risk-averse when it comes to acquisition, how a working capital fund really works, and lessons Mark learned from Sean Stackley — including the “Stackley curve.”

The conversation features an analysis of the “valley of death” problem associated with transitioning new technologies from the labs into an official program of record. Mark argues that the principal cause of the “valley of death” is not the contracting process — which can be worked around by using special authorities to skip the Federal Acquisition Regulation (FAR) — but the Planning, Programming, Budgeting, and Execution (PPBE) process.

Mark describes the coordination necessary for a program manager to make changes in the budget. He finds that no one is responsible for taking new technologies from the lab to the program offices, whose managers could not have known which emerging technologies would need funding when their budget requests went in three years earlier. He argues that there needs to be a mission-based appropriation that provides additional flexibility. Mark warns us, however, not to carry that idea too far: major new items like ships should still proceed through regular channels.

Podcast annotations.

Mark provides us with a description of what he calls the “Stackley curve,” which comes from his interactions with former ASN(RDA) Sean Stackley:

One of the things he taught me, and one of the things he was a master at, both as a program manager and as a service acquisition executive, was not just what decisions to make but when to make a decision. And I call this the Stackley curve.


When you think about the quality of a decision, you always make a better decision in the future than you can today, because you’re always gathering information; you’ll always have more information to base your decision on in the future than you have today.


The downside of that is the ability of your decision to affect the outcome of the situation is always going down. That’s because your decision can only affect the future — any decision you make now cannot affect the past.


So you have two curves when it comes to a decision. The quality is always going up and the effectiveness is going down. The skilled decision maker understands the rate of change in both of those curves, and will understand when to make a decision as well as what decision to make.

I think that’s a nice description of decision-making. You learn information along the way about what works and what doesn’t work, and by doing so you create new options. But at the same time, you have burned money and schedule. Your future decisions may have more information, but they are conditioned by choices made in the past, which you cannot affect.

I will attempt to illustrate the two curves with a simple example. The quality of a decision tends to increase over time and is represented in blue. The effectiveness of a decision is decreasing over time because you can’t affect what has already been done; it is represented in orange.

Now, the Stackley curve is derived from some function of the two curves above. Let’s assume for simplicity that the Stackley curve is simply the product of the indexed values of the Quality and Effectiveness functions. If Quality is increasing faster than Effectiveness is decreasing, then you want to delay decisions: the overall impact of your decisions (the Stackley curve) increases over time. This corresponds to incremental decision-making, or perhaps to situations requiring an agile approach.

By contrast, if Effectiveness is falling faster than Quality is increasing, then you want to make decisions sooner rather than later. This corresponds to situations in which you already have pretty good information and foresight, or when most of the costs are in material and routine labor. Planning should be done up front, and the project should be implemented as planned.

In the image above, the combined effect of the two curves perfectly balances out such that the overall Stackley curve is flat — you are indifferent between making a decision today and making a decision in the future.
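To make the toy model concrete, here is a minimal Python sketch of the three regimes. The exponential shapes and the growth/decay rates are my own illustrative assumptions, not anything from the episode; both curves are indexed to 1.00 at the start and multiplied together:

```python
import numpy as np

t = np.linspace(0.0, 5.0, 101)  # years since the project started

def stackley(growth, decay):
    """Product of an indexed Quality curve (rising at rate `growth`)
    and an indexed Effectiveness curve (falling at rate `decay`).
    Both start at 1.00; the exponential shapes are illustrative guesses."""
    quality = np.exp(growth * t)
    effectiveness = np.exp(-decay * t)
    return quality * effectiveness

# Quality rising faster: index ends above 1.00 -> delay, decide incrementally.
print(stackley(0.20, 0.10)[-1])   # ~1.65
# Effectiveness falling faster: index ends below 1.00 -> decide up front.
print(stackley(0.10, 0.20)[-1])   # ~0.61
# Perfectly balanced: the curve is flat at 1.00 -> indifferent.
print(stackley(0.10, 0.10)[-1])   # 1.0
```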

Of course, none of these curves is likely to be a simple, monotonic function. For example, the Department of Defense’s technology development process makes decisions according to “stage gates.” Under this model, the Department doesn’t learn information smoothly; it learns at the end of each major phase of the program. For example, information is compiled and briefed for leaders after materiel solutions analysis (Milestone A), after prototyping (Milestone B), and after full-scale development (Milestone C). This can be represented by a stair-step function in the quality-of-decision curve, represented below.

The nature of the Quality and Effectiveness curves has implications for the overall impact (the Stackley curve), which helps a person decide when to decide. If information only becomes available at the very end of each project phase, after the contract deliverable has in fact been delivered, then the DOD milestone process makes a lot of sense. The Stackley curve tells us that the right time to decide is just after each of these project phases, when its value is greater than 1.00. Any value greater than one means you prefer to delay decisions or make them incrementally; when the value crosses below 1.00, you switch to planning and deciding now.
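Continuing the same toy model, the stair-step variant might look like the following sketch, where the milestone timing, the size of each quality jump, and the decay rate are all invented for illustration:

```python
import numpy as np

t = np.linspace(0.0, 6.0, 121)  # years since the project started

# Invented milestone schedule: MS A at year 1, MS B at year 3, MS C at year 5.
milestones = np.array([1.0, 3.0, 5.0])

# Quality jumps by 0.25 at each completed milestone review instead of
# rising smoothly (the stair-step function described above).
quality = 1.00 + 0.25 * (t[:, None] >= milestones).sum(axis=1)

effectiveness = np.exp(-0.08 * t)  # effectiveness still decays smoothly
stackley = quality * effectiveness

# Print the index at each year; the spikes right after a milestone mark the
# moments when deciding is most attractive under this toy model.
for year, value in zip(t[::20], stackley[::20]):
    print(f"year {year:3.1f}: Stackley index = {value:.2f}")
```

Under this version the index jumps at each review and decays in between, which matches the intuition that the best moment to decide is right after new information arrives.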

For example, you may be learning a great deal after Milestone B, when you start work on full-scale development. You may wish to hold off on detailed designs for final integration until more is known. But you get to a point, for example at Critical Design Review, where decisions have to be locked down if you are to deliver a functioning system on time and on cost. You cannot delay forever. Some things must be decided, and the tests will show whether they worked out.

The next inflection point on the Stackley curve isn’t necessarily at Milestone C, the production decision. There is still a lot of learning during low-rate initial production, when tooling, processes, and even designs may be in flux. By the full-rate production decision, however, you had better have a plan, because you want to start achieving economies of scale.

Most likely, the Stackley curve will look different depending on the technology and the situation. But there is no empirical way of deriving these curves (much like marginal cost and marginal utility curves in economics); they are a way of simplifying and modeling an aspect of the decision process. Ultimately, it takes experience and judgment to estimate where you are on the Stackley curve.

Here is Mark on risk taking:

The conventional wisdom is that the defense acquisition system is very risk averse. I don’t think that’s right and I don’t think we’re risk averse. Regardless of what you think about the DDG-1000 program, how it was conceived or evolved or where it is now, it did not lack for technical risk at any point. If you look at the Ford-class carriers today, they are managing a significant amount of technical risk.


I believe that most defense officials, both military and civilian, are perfectly willing to take on risk in order to advance our capability. That’s not the problem. I think sometimes they don’t always understand the risk they’re taking. And I struggled with that myself as a Program Manager.

I responded that the DOD often takes on risk across a wide range of components and subsystems within a single platform, like the 11 critical new technologies attempted on the DDG-1000. Mark agreed, and illustrated the point with the DDG-51 Flight III: when his team was putting the power-hungry new SPY-6 radar onto the ship, they decided to focus on that challenge and limit the other requirements being tacked on:

We were very careful and had to work hard within the Navy to keep other potentially useful technology out of that initial design, even though a lot of people would advocate that there were things that could have had meaningful impact. I won’t get into each one of them that we had to say “no” to.

It seems that engineering challenges are often more difficult than assumed at the design stage, so it makes sense to sometimes develop components and subsystems independently of platform design — and when integration time comes, to only attempt one or two major new things at a time. While that seems like it would cost more and take longer, I think over the long run such combinatorial innovation would outstrip the weapon system concept. Armen Alchian and Oliver Williamson agreed.

The most interesting part for me was the discussion on the budget process, and how we can solve the “valley of death” problem in technology transition:

I think you need a separate appropriation for — and management of — the technology transfer. Every year, we would tell Congress, “we’re going to spend X dollars to transfer these kinds of technologies,” in general, and then every year you figure that out as technologies are maturing and are ready to transition. That’s how I see it becoming faster.

What he is saying is something like an RDT&E budget activity “6.3-and-a-half”: it would not have line items like “directed energy project X,” but might instead have something like “shipboard systems,” devoted to a general kind of technology that meets a mission, with funding that can be routed to the right technology at the right time without waiting two or three years to program it into the budget. The funding level might be relatively stable — operating to a fixed budget with greater program choice. Here’s more from Mark:

I think the complaining that “hey, by the time I need the money it takes me 2 or 3 years to go through the entire PPB cycle,” — yes, that’s what it is and that is what it should take, but we can plan, program, budget quicker for those things that need to be quicker… you don’t want rapid SCN, that’s ship construction Navy, they take 4 or 5 years to build, you don’t want that overly rapid, but getting that first laser on a ship when ONR [Office of Naval Research] says “yep, I got a laser that really works and this is going to be a game changer,” you want a pre-programmed appropriation that’s yearly.

I think that makes a lot of sense. I personally think that much more of the RDT&E appropriation should be mission funded. In my opinion, Congress could also do a much better job of keeping records of what things actually cost and how they performed in operational environments, and of holding officials accountable for their choices after some evidence of the consequences has become known. Currently, Congress seems fixated on where money will be going, with little interest in what it got for the money already spent.

I’d like to thank Mark for joining me on Acquisition Talk. Be sure to check out some of his great material, including articles discussed in the podcast “Power to the Polymath,” “Reflections on Tailoring Leadership for a Perfect Fit,” and his widely read article “Confessions of a Major Program Manager.” Here is Mark providing a great lecture on acquisition using Star Wars as an analogy. Here he is being interviewed on DefenseNews. And I highly recommend listening to both of his episodes on Commander Salamander’s Midrats podcast (on USN’s Labs, Research Facilities, and Ranges — and on Confessions of a Major Program Manager). He is also quite active on Twitter; follow him at @goatmaster89.
