
AI and the Bomb: Nuclear Strategy and Risk in the Digital Age

Nuclear Deterrence: Unsafe at Machine Speed

AI and the Bomb: Nuclear Strategy and Risk in the Digital Age, by James Johnson

Reviewed by Douglas B. Shaw, 3 December 2023, Arms Control Association

James Johnson’s book is the most important work on preventing nuclear war published in recent years. The author confronts head-on the complexity of the dangers that artificial intelligence (AI) and other emerging technologies pose for nuclear deterrence. He combines a commanding view of deterrence theory with the imagination to point toward where technology is already obscuring deterrence practice and concludes darkly that, “[i]n the context of AI and autonomy, particularly information complexity, misinformation, and manipulation, rationality-based deterrence logic appears an increasingly untenable proposition.”

AI and the Bomb opens with a gripping account of a “flash war” between China and the United States, taking place over less than two hours in June 2025, in which nuclear weapons are used, millions of people die, and afterward, no one on either side can explain exactly what happened. This story underscores that even if AI is not given direct control of nuclear weapons, AI and emerging technologies embedded in adjacent or seemingly unrelated systems may combine in unforeseen ways to render nuclear escalation incomprehensible to the humans in (or on) the loop.

Johnson’s book is the first comprehensive effort to understand the implications of the AI revolution for the Cold War notion of “strategic stability” at the core of nuclear deterrence. He finds new challenges for deterrence theory and practice in emerging technologies, centering inadvertent escalation as a “new model for nuclear risk.” He formulates a novel “AI-security dilemma” that is more volatile and unpredictable than its traditional counterpart. He also adds a new dimension of “catalytic nuclear war,” by which states without nuclear weapons or nonstate actors might use AI to cause nuclear war among nuclear-armed states.

Artificial Intelligence, Emerging Technology, and Deterrence Theory

The author embraces and extends the emerging conventional wisdom that AI should not be plugged into nuclear command-and-control systems, observing that “the delegation of the decision-making process (to inform and make decisions) to machines is not a binary choice but rather a continuum between the two extremes—human decision-making and judgment and machine autonomy at each stage of the kill chain.” Beyond using AI to facilitate nuclear launch decisions, Johnson shows how AI could affect the nuclear balance by changing nuclear weapons system accuracy, resilience and survivability, and intelligence, surveillance, and reconnaissance for targeting. AI also may give conventional weapons systems dramatic new capabilities to attack nuclear weapons systems, through increased ability to penetrate air defenses; increased ability to “detect, track, target, and intercept” nuclear missiles; and advanced cyber capabilities, potentially including manipulation of “the information ecosystem in which strategic decisions involving nuclear weapons take place.”

Importantly, Johnson uses AI as a shorthand for referring to AI and a suite of other emerging technologies that enable AI, including “cyberspace, space technology, nuclear technologies, GPS technology, and 3D printing.” This choice mirrors the practice of other thought leaders, including Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher in The Age of AI and Mustafa Suleyman in The Coming Wave.

The book is a grim journey for scholars of nuclear deterrence theory, forcing them to confront concepts such as “machine-speed AI warfare,” “non-human agents,” nuclear arsenals with a “larger attack surface” in a world in which ubiquitous sensors feed data oceans, and “disinformation cascades” that could lead to an “unravelling of deterrence in practice.” These ominous signs begin to flesh out the broad concerns about nuclear strategy that Kissinger, Schmidt, and Huttenlocher raise, including that “[t]he management of nuclear weapons, the endeavor of half a century, remains incomplete and fragmentary” and that the “unsolved riddles of nuclear strategy must be given new attention.” …

First, the AI security dilemma features the possibility of extraordinarily fast technological breakthroughs, incentivizing states competing with peers in AI technology to move first rather than risk moving second. For example, the U.S. National Security Commission on AI found that “defending against AI-capable adversaries [notably China] operating at machine speeds without employing AI is an invitation to disaster.”

Second, the AI security dilemma risks placing latent offensive capabilities in civilian hands, such as the massive data flows generated by communication and navigation satellites. Whereas the traditional security dilemma is driven primarily by the misinterpretation of defensive military capabilities, the AI security dilemma also can be driven by the misinterpretation of ostensibly peaceful commercial capabilities.

Third, the AI security dilemma is driven by commercial and market forces not under the positive control of states. Whereas the traditional security dilemma causes states to fear each other’s actions, the AI security dilemma drives states increasingly to fear the actions of private firms. Taken together, these three novel characteristics of potentially explosive technological breakthroughs, ambiguous commercial capabilities, and the absence of positive control over commercial capabilities lead Johnson to conclude that AI is “a dilemma aggravator primus inter pares.” …

Beyond Rational Nuclear Deterrence?

The book demonstrates repeatedly how revolutionary change in the technological terrain in which nuclear deterrence takes place demands urgent theoretical and practical adaptation. Old assumptions, and human rationality itself, may become sharply less effective as tools for preventing nuclear war.

Johnson offers some initial ideas of how to manage the stark challenges that AI poses for nuclear deterrence. Arms control will remain important, even if challenging in new ways; he suggests that banning AI enhancements to nuclear deterrence capabilities might be an important first step. …

Ultimately, Johnson expects that “AI technology in the nuclear domain will likely be a double-edged sword: strengthening the [nuclear command-and-control] systems while expanding the pathways and tools available to adversaries to conduct cyberattacks and electronic warfare operations against these systems.” He encourages policymakers to act “before the pace of technological change outpaces (or surpasses) strategic affairs.”

Johnson concludes his book with a quote from machine learning pioneer Alan Turing: “We can only see a short distance ahead, but we can see plenty there that needs to be done.” AI and the Bomb is a must-read for those seeking to understand the first signals of revolutionary change that AI is bringing to the challenge of preventing nuclear war. It sends a clear warning that the world does not yet know how to manage the effects of AI on nuclear deterrence and that, without significant and urgent effort, it is likely to fall short.
https://www.armscontrol.org/act/2023-12/book-reviews/ai-bomb-nuclear-strategy-risk-digital-age
