Podcast: Military uses of AI/ML in China and the United States

In this cross-over episode of the Acquisition Talk and China Talk podcasts, we have Greg Allen on to discuss progress in military AI/ML in China and the United States. Greg Allen is the director of the AI Governance Project at CSIS, and was formerly the director of strategy and policy at the Joint Artificial Intelligence Center (JAIC).

During the episode, we discuss:

  • Military use cases of AI/ML as they are shaping up in Ukraine
  • Bureaucratic challenges in the US to fielding AI/ML systems
  • The definition of lethal autonomous weapons
  • Whether the US is moving fast enough on experimentation
  • Whether China has an advantage due to its quantity of data

Military Applications

Greg argues that commercial technology has become more powerful over the past few decades, while the defense industry has grown more isolated from that source of innovation. AI/ML is one important source of new military power at a much lower cost, which greatly reduces the barriers to entry for state and non-state actors to field capabilities the US once had exclusive access to.

For example, since the start of the war in Ukraine, commercial satellite imagery companies have been using AI/ML as a core part of their business to classify imagery that once required armies of analysts. Another area is signals intelligence. There are tons of signals data being communicated over unencrypted channels. Rather than relying on teams of analysts to capture, transcribe, and database all that information, Ukraine has been using commercial natural language processing technology to do the manual work much faster. It can flag relevant intercepts and get the information to the right place for a timely strike.
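To make the signals-intelligence example concrete, here is a minimal sketch of what such a triage pipeline might look like, assuming the open-source openai-whisper speech-to-text model and a zero-shot text classifier from Hugging Face. The audio file name, candidate labels, and flagging threshold are all hypothetical; this illustrates the general approach, not any system actually in use.

```python
# Minimal sketch of an intercept-triage pipeline: transcribe unencrypted voice
# traffic, then flag transcripts that look operationally relevant.
# Assumes the openai-whisper and transformers packages are installed;
# file names, labels, and the 0.7 threshold are illustrative only.
import whisper
from transformers import pipeline

stt = whisper.load_model("base")  # speech-to-text model
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

CANDIDATE_LABELS = ["artillery fire mission", "logistics",
                    "troop movement", "routine chatter"]

def triage(audio_path: str) -> dict:
    """Transcribe one intercept and score it against the candidate topics."""
    text = stt.transcribe(audio_path)["text"]
    scores = classifier(text, candidate_labels=CANDIDATE_LABELS)
    top_label, top_score = scores["labels"][0], scores["scores"][0]
    return {
        "transcript": text,
        "label": top_label,
        "flag_for_analyst": top_label != "routine chatter" and top_score > 0.7,
    }

# Example: triage("intercept_0415.wav") -> routed to an analyst if flagged.
```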

China’s Rise in AI/ML

Greg says that in 2017, China’s government put out its Next Generation AI Plan, also called China’s national AI strategy. It lays out a series of milestones: first bring China to the world’s state of the art, then make China the leader, and eventually dominate the global industry (as it aims to do in other sectors such as batteries, solar cells, and rare earths). Five years into the plan, China has reached its goal of matching the state of the art. By 2030, it wants to dominate.

Greg points to highly cited AI/ML research papers as one imperfect metric. Over the past few years, China has grown its share of such papers, and if trends continue it will become the number one publisher in the world. One of the more worrying aspects is the increased share of these papers that Chinese researchers co-author with leading global researchers, indicating some degree of technology transfer.

Does China Have a Data Advantage?

Here’s a good segment from Greg:

Folks have pointed out that data is the new oil and that China seems to have most of the largest datasets. Parts of that story are true, but with oil, you can turn it into gasoline and you can turn it into a lot of other things. If you have a massive amount of Chinese facial recognition data, well, that doesn’t necessarily help you build a missile guidance system, right?


… [However,] while the data sets might not be fungible, the size of those data sets also sort of translates into the size of an overall ecosystem that the Chinese government has worked very hard to take advantage of.

Perhaps the way I think about it is like manufacturing. Traditional assembly lines have unique tooling and can mass produce one specification, but changing the spec disrupts the line. Today’s advanced manufacturing is making production lines much more flexible to continuous changes of specification.

Similarly, in AI/ML, perhaps these large pre-trained neural networks provide something like advanced manufacturing. While any particular dataset and fine-tuned model are still “narrow” in application and brittle to changing situations, the underlying infrastructure and processes make adaptation much more flexible. So you can have scale with flexibility, rather than one or the other.
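As a loose illustration of that flexibility, here is a minimal transfer-learning sketch in PyTorch: a backbone pre-trained at scale is kept frozen while a small task-specific head is retrained for a new, narrow problem. The five-class vehicle task, layer choices, and hyperparameters are hypothetical; the point is only that the expensive, general-purpose “tooling” is reused while the application changes.

```python
# Sketch of adapting a large pre-trained model to a new narrow task.
# Assumes torch and torchvision are installed; the 5-class task is hypothetical.
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pre-trained at scale (the reusable "production line").
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the shared feature extractor; only the task-specific head will train.
for p in backbone.parameters():
    p.requires_grad = False

# Swap in a new head for a hypothetical narrow task (e.g., 5 vehicle classes).
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def finetune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One training step on a small task-specific dataset."""
    logits = backbone(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```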

Moving Faster

Greg said that in China, the Foreign Ministry says it would support a ban on the use of lethal autonomous weapons, meaning systems that can select and engage targets without human involvement, but the People’s Liberation Army does not support a ban on the development of such systems. The former is part of the state government; the latter is an organ of the Communist Party.

In the United States, prioritization of these systems is quite low despite the volume of rhetoric. I pushed Greg on whether the US could field these types of systems at scale in the 2020s, the period of greatest danger with respect to China. While Greg’s former job at the JAIC was to advocate for moving faster, he offered a note of caution:

When you’re using machine learning to recommend what movie to watch next on Netflix, the stakes are pretty low. When you’re using machine learning to, you know, this is not something we do, but hypothetically, if you’re using machine learning to operate the reactor on a nuclear sub, the stakes are considerably higher.

I pushed back:

In the 1940s and fifties, you had all these test pilots, Chuck Yeager, taking incredible risks. We were willing to lose pilots in that respect. And you were talking about calibrating risk. The other side of the risk is obviously opportunity and payoff, right? Like when I get recommended a good movie, there’s no real payoff there. But if you get into a war and autonomous technologies are decisive, then the other side of that risk is really enormous. So how do you think about that: are you willing to lose people?

Be sure to listen to the whole thing!

Thanks Greg Allen!

I’d like to thank Greg Allen for joining Jordan and me on the China-Acquisition Talk podcast. Follow him on Twitter @Gregory_C_Allen. Here he is on the Change Log and in a CSIS discussion. Check out his government exec bio here. If you want a good introduction to AI/ML concepts that circulated the Pentagon, check out his white paper Understanding AI Technology.
