Podcast: Speed, thrift, and simplicity with Dan Ward

In this episode of Acquisition Talk, I speak with Dan Ward. He spent 20 years in the US Air Force and has just released his third book, Lift: Innovation Lessons from Flying Machines that ALMOST Worked and the People who NEARLY Flew Them. During the conversation, we discuss a wide range of topics related to accelerating innovation, including:

  • How the Wright brothers built the first airplane with 1/73rd the funds of their government-backed rival
  • Why waterfall development is “like gluing feathers on your arms as a way to fly.”
  • Why contracts should include a termination clause that triggers if costs grow by 15 percent
  • The importance of diversity to development efforts
  • Why economies of scale rarely pan out
  • How the Navy used an Xbox controller at less than 1/100th the cost of custom hardware

During the episode, Dan explains why predictions become increasingly fragile the further into the future they reach. Breaking large projects into smaller tasks lets the greatest advances be achieved at lower cost and in less time. While that leaves no static technical baseline to measure performance against, the iterative learning process allows us to count on a positive outcome even if we can’t define it ahead of time.

Dan points to policies that explicitly favor modular projects and contracts. He recommends we all read FAR Part 39, which governs the acquisition of information technology. In practice, however, many government officials continue to favor large monolithic contracts, and we discuss how government can shift toward more modular ones. History has shown that incremental steps emphasizing speed, thrift, and simplicity actually allow us to innovate faster. This pattern comes out clearly in Dan’s new book on the pioneers of aviation.

Podcast annotations

A bias toward speed, thrift, and simplicity often means approaching problems incrementally. Break big problems down into small problems. Only make decisions when the need for them arises. This not only reduces projects to human scales of time and knowledge, it also makes possible many experiments in which only one or two big things change at a time. As Dan points out,

It’s unscientific to introduce many changes at once. It’s more scientific to say, we’re only going to introduce one change or one real breakthrough because then we get cause and effect.

One of the greatest benefits of moving incrementally is that management preserves the option to change funding levels and technical direction as information grows. As William Roper said of ABMS (the Advanced Battle Management System), the program “will emerge” through iterations of 10-15 percent solutions. That’s a good way of thinking about it.

Preserving options, however, presents a problem for oversight. The flexibility to redirect funding and engineering choices defeats any general measure of efficiency, and it makes it hard to rely on baselines for performance, cost, and schedule.

We get tied up in measurements a lot. A while back I was part of a study with the National Academy of Sciences that the Air Force commissioned to look at how we can help foster more innovation in the Air Force acquisition community. The report has been publicly released. One of our findings — and this is the most consistent thing we got from all the interviews we did — is that there is no single metric for measuring innovation. One individual said it’s better to have no metrics than bad metrics, because bad metrics lead to gaming the system, making bad decisions, and giving us the illusion of control, awareness, and predictability.

That doesn’t mean metrics aren’t useful in the world of speed, thrift, and simplicity. Dan offers what seems to be a pragmatic way of using metrics:

The best approach to metrics that I’ve ever come across is to treat metrics experimentally. For a period of time, let’s measure these things and see if those measurements inform good decisions… for the next month, we’re going to measure this. At the end of the month we’ll ask, are they relevant, are they timely, are they informative? If not, then pivot and try some other metrics. Again, that iterative, incremental approach is applying the same scientific method we’ve been talking about the whole interview.

Such a recommendation might sound like a challenge to oversight agencies. There could be dozens or hundreds of relevant metrics, each requiring contextual knowledge to make sense of. In short, there’s no substitute for knowing the particulars of each project, including the qualities of the individuals involved. That’s much more difficult than asking “what’s your CPI?” (cost performance index) for a contract and “what’s the growth in your APUC?” (average procurement unit cost) for a program.

I usually think of oversight as something more like TSA agents: do basic monitoring for fraud or corruption, and then conduct random screenings at a more intimate level. The role of oversight is to catch you when you’ve done something wrong rather than to provide performance ratings, make programmatic recommendations, or determine whether best practices were followed.

Here’s a related snippet from Dan:

We have this belief — completely unsupported by the data — that we will minimize risk by maximizing oversight. We’re going to minimize risk by maximizing redundancy and maximizing control…

And I liked this line:

When you go into those high-risk situations where you’re Wile E. Coyote strapping a rocket to your back, hopefully you’ve done your homework… We want to avoid making the fatal mistakes, the mistakes you can only make once.

I’d like to thank Dan for joining me on the Acquisition Talk podcast. Check out Dan’s website for a lot more, including links to purchase all of Dan’s books: F.I.R.E., The Simplicity Cycle, and his newest release, LIFT. Follow him on Twitter, and listen to his interviews on various podcasts, including here, here, and here. Watch his testimony to the UK’s Parliament and read his testimony to Congress.
