
China, the U.S., AI and Autonomy in Nuclear Command and Control

Too Much Too Soon: China, the U.S., and Autonomy in Nuclear Command and Control

Lawfare, Ashley Deeks, Monday, December 4, 2023

China won’t yet commit to keep autonomy out of its nuclear command and control. It will take a lot more talking to get there.

News reports leading up to the meeting between President Joe Biden and Chinese President Xi Jinping in November indicated that the United States sought to reach an agreement to keep autonomy out of nuclear command-and-control systems (C2)—or at least set up a formal dialogue on the issue. A South China Morning Post headline, citing unidentified sources, proclaimed: “Biden, Xi set to pledge ban on AI in autonomous weapons like drones, nuclear warhead control: sources.” Before the meeting, an Indo-Pacific expert at the German Marshall Fund told the press that China had signaled interest in discussing norms and rules for artificial intelligence (AI), something Biden’s team surely knew as well. The administration seemingly sought to capitalize on that interest by seeking a meeting of the minds on the narrow but important topic of nuclear C2. 

But the U.S. aspirations, while laudable, proved to be too ambitious. As the New York Times reported, “On one of the critical issues, barring the use of artificial intelligence in the command and control systems of their nuclear arsenals, no formal set of discussions was established. Instead, Mr. Biden’s aides said that Jake Sullivan, the national security adviser, would keep talking with Wang Yi, China’s chief foreign affairs official.” 

It is not surprising that the Biden administration was not able to make more progress with China. First, the United States and China do not seem to have had many direct bilateral conversations about military AI to date. Reaching agreement about nuclear AI—even in a statement that would be nonbinding—was an ambitious goal.

Second, the United States already committed unilaterally to maintaining human control over nuclear decisions, so the U.S. government would not have had to “give up” anything to reach a bilateral commitment about nuclear C2. From Beijing’s perspective, such a deal would therefore have asked China to accept a new constraint while the United States conceded nothing it had not already conceded on its own.

……………………………

The U.S. government is to be commended for fleshing out in greater detail some norms that states should pursue for their military AI systems, including through its political declaration and its efforts to open a military AI dialogue with China. But these recent developments further illustrate how difficult it will be to obtain legally binding international agreements—even very narrow ones—among states that are actively pursuing military AI.

As I wrote in an earlier Lawfare paper, “[R]egulation of national security AI is more likely to follow the path of hostile cyber operations, which have seen limited large-scale international agreement about new rules. Absent a precipitating crisis, small-group cooperation and unilateral efforts to develop settled expectations around the use of national security AI are far more likely.” It will be a good sign if future U.S.-China dialogues about AI—even informal and low-profile ones—proceed, as these meetings will give the United States more chances to explain to China how the Defense Department is trying to establish strong, high-level oversight over uses of military AI. But bilateral trust between the United States and China is so low, and the verification problems are so hard, that it may take a while before the two states reach a shared view about keeping autonomy out of nuclear C2.

https://www.lawfaremedia.org/article/too-much-too-soon-china-the-u.s.-and-autonomy-in-nuclear-command-and-control

December 5, 2023 | weapons and war
