Complexity science, “if-then” propositions, and how to make decisions

I’m interested in complexity science, which, if you’re actually doing complexity science, releases complexity. For example, in real complexity science there’s an issue with causality. You can’t make an “if-then” proposition, so you’re not going to be able to make these propositions, “if this, then that.” It’s funny they call it complexity science, because without “if-then” propositions there are no hypotheses, so there’s no experimental setup, so it’s hardly a science at all.

How do you navigate complex systems? You sense, you respond, you observe — you observe, respond, sense. This is not complex thinking. These are small iterative actions that move you forward. You use intuitive sense; you don’t make big plans and you don’t have big blueprints. In complex situations, you know that isn’t possible.

Those passages come from a fascinating Venture Stories podcast, “Complexity, Collective Intelligence, and New Ways of Thinking with Bonnitta Roy.”
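Purely as a loose illustration of that sense-respond-observe loop (nothing like this appears in the podcast), here is a toy Python sketch contrasting a plan committed up front with small steps that each re-observe a shifting target. The drift model and every number in it are arbitrary assumptions.

```python
# Toy sketch: a "big plan" committed up front vs. small sense-respond-observe
# steps, when the target (the actual need) keeps shifting. All numbers are
# arbitrary; this illustrates an idea, it does not model any real process.
import random

random.seed(1)
STEPS = 20
target = 10.0  # where the need appears to be when planning starts

def drift(t):
    """The need shifts a little every period, with some noise."""
    return t + 0.4 + random.uniform(-0.5, 0.5)

# Strategy 1: plan every move at the start, from the initial snapshot.
planned_moves = [target / STEPS] * STEPS

# Strategy 2: decide each move only after observing where the target is now.
pos_plan = pos_iter = 0.0
for _ in range(STEPS):
    target = drift(target)                 # the environment changes
    pos_plan += planned_moves.pop(0)       # execute the pre-committed plan
    pos_iter += 0.5 * (target - pos_iter)  # small corrective step toward what we see

print(f"target ended at    {target:6.2f}")
print(f"big plan ended at  {pos_plan:6.2f} (error {abs(target - pos_plan):.2f})")
print(f"iterative ended at {pos_iter:6.2f} (error {abs(target - pos_iter):.2f})")
```

Under these assumptions the fixed plan lands near where the need used to be, while the small corrective steps track where it has since moved.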

I think Roy’s viewpoint is valid when we consider systems on the scale of U.S. defense acquisition, which is larger in scale and deeper in complexity than many national economies. Yet the modern system of buying weapons really does assume that we can make “if-then” statements extending over decades. This is why so much process goes into deciding which requirements and technical specifications are the correct ones, and why, by the time a program passes Milestone A (which initiates early prototyping) or Milestone B (full-scale development), the program plan has already been laid out in full and committed to financially.

Ms. Roy makes a good point later on: planners of complex systems, such as defense officials, presume that complex systems have linear causality, and that they only need to study the system long enough to determine what all the coefficients are. In that effort, the manager de-animates parts of the system, as if there were not already a larger engagement between the system and the manager. Here’s Roy:

The manager occupies a position outside of the participation of the people he is going to manage — hence he can make a system of them and move them around like an object. So systems thinking inherently de-animates parts of the system that have agency.
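To make the point about coefficients concrete, here is a minimal sketch of my own (not something Roy offers): fit a linear “if this, then that” model to observations of a simple feedback system, then use it to predict. The logistic map is just a stand-in for a system with feedback, and every parameter is an arbitrary assumption.

```python
# Toy sketch: estimating "the coefficients" of a system with feedback.
# The logistic map stands in for the complex system; parameters are arbitrary.

def system_step(x, r=3.9):
    return r * x * (1 - x)  # a simple feedback rule; chaotic at r = 3.9

# Observe the system for a while.
xs = [0.2]
for _ in range(200):
    xs.append(system_step(xs[-1]))

# Fit the linear model x[t+1] = a * x[t] + b by ordinary least squares.
x, y = xs[:-1], xs[1:]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
b = my - a * mx
print(f"fitted coefficients: a = {a:.3f}, b = {b:.3f}")

# Now use the fitted "if-then" model to predict a few steps ahead.
true_x = pred_x = xs[-1]
for step in range(1, 6):
    true_x = system_step(true_x)  # what the system actually does
    pred_x = a * pred_x + b       # what the linear model says it will do
    print(f"step {step}: actual = {true_x:.3f}   linear prediction = {pred_x:.3f}")
```

The regression happily produces coefficients, but the predictions quickly diverge from what the system actually does, and observing the system longer does not fix that, because the causality was never linear to begin with.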

Roy’s point is important when we consider all the bureaucratic work that goes into planning defense programs and setting baselines. Usually, those responsible for determining the baseline are not the ones who will execute the work. Execution is handed off to a program manager, his team, and contractors. These actors are presumed to carry out the standing orders found in the baseline, which was derived from a rational analysis of the system of defense needs. The arrangement grants no agency to the actors, who may find errors in the plans, have career concerns in mind, disagree with the specifications, or have any number of motivations not anticipated by the planner.

Here is another interesting part:

For 33 years I ran a landscape design-build company, and for some reason I always knew that when you’re negotiating with a supplier or vendor or employee, there are certain moves that, if you make them — you know, pay me now or you’ll pay more later — would just build complexity into the system. So people would come in and try to sell you these big software packages that supposedly controlled all your costs and this and that, and they looked right, but they were so beyond the human scale — beyond what was actually needed. During my whole career in business, which was quite successful, I was always very keen on not making that first move — that first move is just going to escalate complexity… The propensity is to think the solution is up the ladder of complexity.

There’s some wisdom in that as well. For example, contractors will often say to the government, “Hey, if you invest in long-lead items and other production start-up costs, it will save you hundreds of millions — or billions — down the line!” It sounds like a rational plan to commit to long-lead items, or even multi-year procurements, but often the program is already behind schedule and not thoroughly tested. Committing to a large procurement can then escalate complexity and create sunk costs if technical problems are found. It costs much more to resolve such problems when tooling and early production units are already paid for. The actors will be incentivized to paper over the problems and push forward because of the political difficulty of admitting an error, stopping production, absorbing the layoffs, and so on.
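As a back-of-the-envelope illustration of why the “rational” early commitment can backfire, here is a small expected-cost comparison. Every figure is hypothetical and drawn from no real program; the point is only how the math changes once you admit some probability that testing uncovers a major problem after the money is committed.

```python
# Hypothetical expected-cost comparison; none of these figures come from the
# post or from any real program. They only illustrate the structure of the bet.

advertised_savings = 300e6    # claimed savings from funding long-lead items early
p_problem = 0.4               # assumed chance that testing still finds a major defect
rework_if_committed = 1200e6  # assumed cost to retrofit tooling and early units
rework_if_waited = 300e6      # assumed cost to fix the design before production starts

expected_commit_early = -advertised_savings + p_problem * rework_if_committed
expected_wait = p_problem * rework_if_waited

print(f"expected net cost if we commit early: ${expected_commit_early / 1e6:,.0f}M")
print(f"expected net cost if we wait:         ${expected_wait / 1e6:,.0f}M")
```

Under these made-up numbers the advertised savings are real only in the branch where nothing goes wrong; once the chance of reworking paid-for tooling and early units is priced in, waiting is cheaper in expectation.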

Ultimately, however, action has to be taken. We cannot fall into the opposite fallacy, that everything is too complex and so we should do nothing. There are places where cause and effect can be gleaned. The people engaged in the work every day probably have the best knowledge of which causes will lead to which effects, and because we live in a complex system, that knowledge cannot be articulated to higher levels in a standard way. Some method of devolving authority to lower levels is necessary to unlock complexity.

Ms. Roy recommends small iterative steps based on sensing, responding, and observing. But who makes the decisions? How do they concatenate into a successful larger system? I think Michael Polanyi had a good answer to that, which I describe in some detail here.

1 Comment

  1. Roy sounds like a disciple, or at least proponent, of Margaret Wheatley. She has a great book called Leadership and the New Science that delves into complexity and chaos theory and their applications to organizational behavior. Good stuff.
