Here is an excerpt from my recent War on the Rocks article, responding to the call for ideas from the National Security Commission on Artificial Intelligence (AI):
The national security strategy for AI is not like President John F. Kennedy’s call to put a man on the moon. There is not a well-defined goal with a single technical plan. Harnessing narrow AI means that there will be a multitude of applications, each with its own requirements, challenges, and user communities.
The strategy should not designate centralized offices to make project choices on behalf of researchers. Such offices are particularly susceptible to groupthink, prone to neglecting alternatives, costs, and uncertainties. Instead, the ideal strategy empowers researchers by injecting speed and iterative learning into defense innovation.
If speed and iterative learning are the most important aspects of a healthy AI research program, then funds must be available when needed. In other words, a flexible funding mechanism is needed. The proper mechanism is a mission-funded AI account. Particular project line items would not require justification. Rather, the military components could direct the funds to AI projects based on their merit.
Read the whole thing. Naturally, the article does not address many alternatives and tradeoffs. In this blog post, I’d like to tackle some of the issues left over.
First, if the Congress allows for a mission-funded AI budget account, then we must define the boundaries of AI. The mission-funded account allows defense officials to reroute funds to any AI application at the last minute. For example, funding may have been expected to go to a particular AI application for aviation logistics, but then officials realize problems there or opportunities elsewhere, and so move the funding over to an AI app for image recognition. This could be done at any point before the funds are obligated on a contract, even after the appropriation has been enacted. No reprogramming action required.
That sounds great for flexibility and iterative learning. Certainly we cannot predict the outcomes of AI research as if it were a routine construction job for roads or bridges. If we could make such predictions, then a flexible funding mechanism to support agile development wouldn’t be necessary.
But that flexibility runs into definitional problems. What if there is a high-priority AI application, but as part of that project we must first develop a database, or modify a deployed sensor, or upgrade a robotics component, or tighten up cybersecurity, or integrate cloud services? These project activities are in the service of artificial intelligence, but they aren’t exactly the activities related to building and training an algorithm.
How broad is the definition of activities related to an AI project? If it were limited to the algorithm itself, then the mission-funded account may be self-defeating: it may force funds into projects whose enabling technologies never get built. If we take a broad perspective of AI, then the budget account looks more like a slush fund to be used however the official wants.
I recommend a much broader interpretation of AI. However, this brings up the problem of oversight. Congress likes to know where money is going. The mission-funded AI account may really be supporting the development of cloud, sensors, or other technologies that Congress didn’t intend.
This definitional and oversight problem isn’t limited to the mission-funding scheme. It exists even today, as the Congressional Research Service observed:
One impediment to accurately evaluating funding levels for AI is the lack of a stand-alone AI Program Element (PE) in DOD funding tables. As a result, AI R&D appropriations are spread throughout generally titled PEs and incorporated into funding for larger systems with AI components. For example, in the FY2019 National Defense Authorization Act, AI funding is spread throughout the PEs for the High Performance Computing Modernization Program and Dominant Information Sciences and Methods, among others. On the other hand, a dedicated PE for AI may lead to a false precision, as it may be challenging to identify exact investments in enabling technologies like AI [e.g., cloud, sensors]. The lack of an official U.S. government definition of AI could further complicate such an assessment.
We only know that $927 million was made available for AI because the DOD highlighted that fact for the first time in the FY 2020 budget request overview. That figure could not be reproduced by independent budget analysts using only public data. Bloomberg, for example, was able to tally $4.022 billion in total funding in which artificial intelligence research was mentioned somewhere in the justification. But AI is often embedded in larger program elements. Specific line items like DARPA’s next-gen AI are the exception, not the rule.
One of the other issues is that DOD acquisition was built on the premise that there are single, logical programs that do not overlap one another, and that a single program office will be put in charge of all aspects of each program. This is the weapon systems approach. For example, there will be a budgeted program for a fighter aircraft, and a program office in charge of procuring it, including all components like the engine, avionics, landing gear, and so forth. Presumably, AI and other apps integrated into the system should be managed by the program office to avoid overlapping responsibilities.
That organizational structure is simply unrealistic. You can never have a multifunctional organization like the DOD without competing perspectives and overlapping responsibilities. It implies that there is no organization, anywhere, responsible for improving engines and other important components without first having identified a weapons platform and a budgeted program. That is precisely the intent of OMB Circular A-109, and largely what goes on today (though there are exceptions like the ITEP engine program, which nonetheless is pre-selected for the Black Hawks and Apaches).
If we make “artificial intelligence” an organic line item in the budget, then the total cost of the weapon system is no longer captured in a single program element. If other system components are also given dedicated funding, then the system program office starts to lose authority over its program, the total funding for which isn’t under its control.
It seems, however, that insight into total costs through budgets isn’t really useful. The “total cost” of something that shares significant costs elsewhere is an abstract and arbitrary concept. Budgets should provide expense control and help align administrative authority and financial authority. Budgets cannot resolve complex organizational design problems.
I recommend that there should be a diversity of mission-funded accounts that go both horizontally (e.g., AI, sensors, robotics, propulsion) and vertically (e.g., ships, aircraft, satellites). The mission-funded accounts should align exactly with organizations dedicated to that mission, perhaps through the SYSCOMs (e.g., NAVSEA, TACOM). Each military component may have an organization or two dedicated to major horizontal or vertical technologies. The leadership of each organization may route funds to any project furthering their mission, whether that is AI, shipbuilding, or whatever.
This means that an organization like PEO Ships would not spend its “shipbuilding” funds only on ship hulls and then be forced to integrate whatever the component-focused organizations hand it. Instead, PEO Ships may choose to independently develop AI. Ideally, there will be an ecosystem of mature components that PEO Ships could choose from, working around standard interfaces. But inevitably, some components may need significant work for integration. That should not stop PEO Ships from funding its own AI research, or whatever else is needed, if it can’t coax it out of others.
Such an organizational design tolerates overlapping and competing responsibilities for development. It does not prohibit such interdependencies.
Managers of horizontal missions like AI should be rewarded or punished based on their contributions to the managers of vertical platform-based missions like shipbuilding. And those platform managers should be rewarded or punished based on getting their system into production, or keeping it in production because the military user is happy with evolutionary upgrades.
So organic accounts in the budget should map directly to organizations focused on a mission, but that doesn’t mean each organization is single-function. It must be able to remain multifunctional and to migrate into new areas of technology without the Congress first authorizing it. That rigidity is what put the DOD behind the curve in the first place.
The scheme results in the total cost of systems not being apparent in the budget. Nor should it be. That kind of accountability, perhaps, should be performed through accounting. If PEO Ships writes a contract for AI development, the information should be tagged there. If PEO Ships assigns AI development to the prime shipbuilding contractor, then that fact should be apparent through the Contract Line Item Number (or maybe the ACRN), or Work Breakdown Structure.
If the desire to slice and dice costs in every way were really the objective, doing it through the budget documents is possibly the worst way to go about it. Budgets are forward plans of action rather than historical records of what actually happened.
A final note I’ll make is that the essential elements of artificial intelligence are also found in software and database developments of all types, as well as other areas of intangible investment like product design, business processes, training, and so forth. These areas of the economy are becoming more and more important. We are starting to care less and less about industrial era methods of reproducible goods on the assembly line.
An increase in uncertainty, and with it unintended consequences, means that pre-specifying project outcomes in advance of the budget appropriation is not just burdensome, but downright harmful.
Not only that, trying to trace all money outlays to project outputs using cost accounting is becoming more difficult and less relevant. What good is cost accounting when the marginal cost of reproduction is zero? Everything is an allocation. It loses its predictive content.
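To see why allocation loses predictive content, consider a hypothetical sketch (the projects, dollar figures, and allocation bases below are invented for illustration, not drawn from any actual budget): the same shared fixed cost, spread over the same projects, yields wildly different per-project “costs” depending on which allocation base an accountant happens to choose.

```python
# Hypothetical example: a $1,000,000 shared platform cost allocated to
# three made-up projects. The "cost" of each project depends entirely on
# the allocation base chosen, not on any underlying economic reality.
shared_cost = 1_000_000

projects = {
    "ai_logistics":  {"headcount": 10, "compute_hours": 500},
    "image_recog":   {"headcount": 5,  "compute_hours": 4000},
    "cybersecurity": {"headcount": 25, "compute_hours": 500},
}

def allocate(projects, shared_cost, base):
    """Spread shared_cost across projects in proportion to the chosen base."""
    total = sum(p[base] for p in projects.values())
    return {name: shared_cost * p[base] / total for name, p in projects.items()}

by_headcount = allocate(projects, shared_cost, "headcount")
by_compute = allocate(projects, shared_cost, "compute_hours")

# Same projects, same total spend, very different per-project "costs":
print(by_headcount["image_recog"])  # 125,000 under headcount allocation
print(by_compute["image_recog"])    # 800,000 under compute-hours allocation
```

Neither figure is more “true” than the other; both sum to the same total, which is why the total spent, for what, and to whom is the more robust unit of accountability.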
And so cost accounting should focus not on detailed management plans, but on broad categories of how much was spent, for what, and to whom. This, coupled with a continuous testing regime and a willingness to change suppliers, results in accountability.
This makes me think of matrix management, and the challenges corporations face as they grow: balancing between product lines and functional areas.