What do systems engineers wrongly assume?

One of the key mis-assumptions in modern Systems Engineering and Systems Analysis is that the total problem can be, and frequently is, decomposed into subproblems; that the subproblems can be solved more or less independently; and that the total solution can be synthesized by combination of the subsolutions, treating the interactions of the parts as “interfaces.”
The real world is, however, highly non-linear, and unless real attention is paid to this fact, the linear decomposition treatment will fail catastrophically, because the interaction terms may be as large as the subproblem and not reducible to simple interfaces. The result may well remain decomposed…
This brings me to a related antithesis that I describe as prediction versus production. We have come to a time when meeting certain targets seems to have become more important than producing a satisfactory system. The question is not that of the development of a system that performs well and was produced at a reasonable cost, and in a reasonable time, but rather replacement of this sensible desire by the question, “Does the system perform as predicted, did you produce it for the cost you predicted, and on the schedule you predicted, and did you do it in the way you predicted?”
Consequently, looking at what is actually happening in the development has been replaced by measuring it against a simplistic set of predicted milestones. Fulfillment of prediction has been seriously proposed as the criterion for judging system managers.

That was the very wise Robert A. Frosch, then Assistant Secretary of the Navy for Research and Development, in “The Acquisition of Weapon Systems,” Hearings before the Subcommittee on Economy in Government, December 29-31, 1969.

I would say that the non-linear nature of development doesn’t mean we have to build major systems all in one go. It’s that when you decompose a system, you’re making assumptions about how the parts will interface. Some interfaces will be standardized, like APIs. Other interfaces will become the content of new integration projects.

Non-linearity still requires incremental decision making — incremental in that you go one step at a time rather than limiting yourself to minor improvements of existing platforms.

Prediction certainly still dominates our measures of program performance. That’s because it is easy to measure. Prediction is fine when all the knowledge of means, ways, and ends is known up front. But when projects are intended to learn and resolve uncertainty, prediction is a poor measure of success. Instead, intimate knowledge of the outcomes and their worth relative to alternative design choices takes precedence. This, of course, cannot be reduced to a single metric. It requires judgment that comes with deep experience.

4 Comments

  1. I agree that today’s projects, particularly defense projects, are increasingly complex and operate more like complex adaptive systems than linear ones. However, we – as project and program managers – cannot get completely away from prediction. We still have to answer questions like: How long will this project take? When will I have the capability you are developing? How much will this cost, since I need to submit a budget request? This is the rub, I think. Without unlimited budgets, how do we both recognize the inherent unpredictability of developing something new for the first time and still appropriately assign resources to make it happen?

    • Thanks for the comment, Chad. I’m not saying prediction goes away, of course. But we shouldn’t be fixing our predictions at Milestone B for an entire MDAP with a dozen critical untested technologies. A more incremental approach reduces the damage of prediction errors, whereas a 10 or 15 year baseline will force participants to hide errors rather than expose them. Prediction isn’t the goal; good outcomes are. We shouldn’t stand by a prediction when new information points us to higher-value uses.

      • If your MDAP has a dozen critical untested technologies, you are (to paraphrase Frank Kendall) committing acquisition malpractice — that program should never have been allowed through MS B. Burning down the risk through prototyping and demonstration efforts is both faster and more cost-effective in the long run, but protecting such an approach from funding whiplash is very hard.