Has program budgeting failed the world over?

Program budgeting does not work anywhere in the world it has been tried. The reason for this failure can be deduced backward. What would it be like if we had it? Program budgeting is like the simultaneous equation of society in the sky. If program budgeting worked, every program would be connected to every other with full knowledge of the consequences, and all social problems would be solved simultaneously. Program budgeting fails because its cognitive requirements – relating causes to consequences in all important areas of policy – are beyond individual or collective human capacity.


Organizations should be designed, therefore, to make errors visible and correctable, that is, noticeable and reversible, which in turn is to say, cheap and affordable. Program budgeting increases rather than decreases the cost of correcting error.


Line-item budgeting, precisely because its categories (personnel, maintenance, supplies) do not relate directly to programs, is easier to change. Budgeting by programs, precisely because money flows to objectives, makes it difficult to abandon objectives without abandoning simultaneously the organization that gets its money for them.

That was Aaron Wildavsky's 1978 "Policy Analysis is what Information Systems Are Not," found in Accounting, Organizations and Society, Vol. 3, No. 1, pp. 77-88, Pergamon Press Ltd., 1978.

Wildavsky uses something of a Hayekian argument here. Information as to what activities are likely to be successful or valuable is dispersed across participants. Yet program budgeting, now the PPBE process, presumes that all possible outcomes can be articulated and aggregated into a holistic plan.

Wildavsky wisely argues that the political debate over how inputs should be translated into outputs performs a cognitive task that information systems cannot. Uncertainty as to the value of alternative plans means that policy should focus control on individuals and organizations, granting them relative freedom in pursuing defined missions. When a program office is set up to pursue a program line item, it will naturally resist changing the program as knowledge grows, because change threatens its organizational standing.

5 Comments

  1. Eric, this question may not fit this post perfectly, but close enough. You have argued elsewhere for returning to an organizationally-based object of expenditure budget, in part to achieve the benefits discussed here by Wildavsky (I apologize if I misrepresent you; please tell me). Of course, the transition to the performance (program) budget happened for a reason. Mainly, people outside the agency (what Mosher called people "above and outside the military departments") wanted to know how much we spent to achieve specific outputs that were of interest to them. When a single organization achieves multiple different outputs of interest, or when a single output of interest is achieved by multiple organizations, it is difficult to use the organizationally-based object of expenditure budget to satisfy the desires of these outside reviewers.

    So, finally, to my questions: by advocating for a return to that object of expenditure budget, are you simply saying those outside reviewers ought not care about the things they care about? Are you saying they shouldn't have the insight they desire? Or maybe are you saying that if only they knew the negative impacts of their requests, they would no longer want the agency to organize the budget by program? Maybe what I'm really asking is: why do you think an object of expenditure budget would be more acceptable to influential external folks in 2020 than it was in 1948?

    • Pete, thanks for a very well-considered comment, which exposes something of a shocking belief of mine. Short version: reviewers ought not care about what they think they should care about.

      Long version: In my view, the DoD acquisition process is a complex adaptive system. It's not that outside reviewers — as appendages of the central planning process — shouldn't care about programmatics, but that they should recognize the limits of their knowledge, particularly when it comes to optimizing future action across many technologies. Trying to minimize the asymmetry of program information between line and staff can only stifle progress, and then staff grows to compensate.

      Reviewers should know the general intent of future action, but treat programmatics as a historical exercise. Trying to understand what happened is difficult enough, let alone predicting the future. What was actually delivered? How did it test? How much was spent? This will be hard, because ultimately we *want* synergies and spillovers between programs. We do *not* want to pretend each program is completely separable and related through articulated facts held only by staffers.

      Regardless of budget structure, you'll never be able to perfectly align organization and program. We see the problem arising again with the Air Force trying to build enterprise tools. Will that be a working capital fund that allocates costs to programs? There will always be overlapping responsibilities. This is fine. It used to be coordinated by dual-hatted chiefs of line organizations sitting on a coordinating committee.

      Of course, procurement of major systems like ICBMs, carriers, etc., is not exactly detailed and nuanced. So I expect a fair degree of top-level control over the quantities of these systems to continue. But that doesn't require a program budget. The problem was that if you wanted a new carrier, it wasn't just BuShips you'd have to increase funding to, but various other supporting bureaus too. That's the only real challenge I see, and it may be resolved through coordinating committees if the balance were wrong. My real view here is the one the United States took before WWII, and the one the French still abide by today: that politicians shouldn't make specific line-item decisions like that. Indeed, when Woodrow Wilson wrote his famous essay on policy vs. administration, he clearly put Army and Navy procurement into administration! And that was Woodrow Wilson!!!

      (On this note, I also like the idea that each organization, at whatever level, should allocate X% of its funding to other organizations, which would help increase internal accountability and the ability to take advantage of spillovers/synergies.)

      • One other point here is that sometimes Congress and oversight bodies are champions for pushing good programs forward or cancelling dumb ones. A good example is when the inventor of continuous-aim firing had to write directly to President Teddy Roosevelt to get the technique adopted in the Navy.

        Reviewers should have access to pretty timely historical data — obligations, deliverables, expenditures, milestones achieved. This will keep them well informed and able to create policy guidance in a broader context. However, whether the DDG-1000 needed a rail gun is not policy in my view, but detailed administration.

        If a line org is not delivering, or is not supporting general agreements with policymakers, then simply fire or replace the individual. The individual can only blame him/herself. Today, failure is the fault of the bureaucratic plan baked into the program funding. The PM just executes standing orders, turning money into purchases for hire, and cannot fairly be pinned with responsibility.

        • Thanks for the great response. On the issue of forward-looking versus backward-looking control, have you ever read Kenneth R. Mayer and Ann M. Khademian, "Bringing Politics Back in: Defense Policy and the Theoretical Study of Institutions and Processes"? If you haven't, you could probably jump to page 184 (page 6 of the pdf version), the section on "The Politics of Procurement Reform." I'd be interested to hear your take some time, especially on their principal-agent economic approach, where I think your depth is greater than mine.

          • It so happens the wise Richard Shipe pointed me to that excellent piece some months ago. I pushed back on the article a bit and here is part of what I wrote to Richard:

            The authors equate after-the-fact control with outcomes-based accountability, saying they "require that the goals of the policy or process be unambiguous and that an objective standard exists against which outcomes can be compared." To me, this confuses a subtle distinction, even though it is the mantra of New Public Management. [It also completely misunderstands the concepts of *relational contracts* and *mission command*.]

            If programs could easily be evaluated based on defined goals and standard measures, it would be silly to use after-the-fact control. You would use before-the-fact control, represented in output-oriented budgets, detailed contract specifications, etc., because you know ahead of time what the outcome should be and there is little uncertainty as to a measurable/executable plan. Yes, it is "outcomes-based," but all knowledge of the correct outcome is available ahead of time and so is built into control mechanisms before work starts.

            The confusion is exemplified by the fact that the authors point to the C-5A as an instance of after-the-fact control. This is a half-truth at best. The C-5A was programmed from excessively detailed military requirements that resulted in a 1,500-page RFP. Before-the-fact control isn't exemplified by adherence to procedure and monitoring of costs. It is exemplified by attempting to create standards of evaluation for program outputs before funding/work is authorized. After-the-fact control is exemplified by inputs-based budgets, such as organization/object budgets, and by relational contracts. It allows for incremental decisions, where the results of projects are subjectively evaluated in a social process informed by competition among overlapping paths and updated information.

            Consider that commercial entrepreneurs have a road map, but investors do not tightly lock down product specifications and methods of evaluation before providing capital. They don't necessarily invest in the product. They invest in the founder and the team (focusing on inputs before-the-fact). They expect the team to pivot in many cases, and withhold judgment of outputs until after-the-fact, or until after consumers test the product against relevant alternatives. So in high-ambiguity cases, you focus on inputs, techniques, and providing latitude, then hold the team accountable after-the-fact.

            I was surprised the authors did not touch on the Jackson Committee hearings of the late 1960s, which pitted systems analysis types like Enthoven against political process types like Wildavsky. I clearly sit closer to Wildavsky, who attacked the PPBS as a key impediment to proper management. But Mayer and his co-author would seem to equate the PPBS with after-the-fact control, and I cannot accept that. If budgets or contracts outline exactly what outcomes must be accomplished ahead of time, then evaluation is built in before-the-fact. That is simply incompatible with after-the-fact control, which is flexible to the growth of knowledge.
