nuclear-news

The News That Matters about the Nuclear Industry: Fukushima, Chernobyl, Mayak, Three Mile Island, Atomic Testing, Radiation, Isotopes

Artificial Intelligence brings a new worry into nuclear weaponry

Artificial intelligence and nuclear weapons: Bringer of hope or harbinger of doom? https://www.europeanleadershipnetwork.org/commentary/bringer-of-hope-or-harbinger-of-doom-artificial-intelligence-and-nuclear-weapons/        Jennifer Spindel | Assistant Professor of Political Science, University of New Hampshire, 17 Aug 2020

In 2017, Russian President Vladimir Putin said whichever country leads in the development of artificial intelligence will be “the ruler of the world.” Artificial intelligence is not unlike electricity: it is a general-purpose enabling technology with multiple applications. Russia hopes to develop an artificial intelligence capable of operations that approximate human brain function. China is working to become the world leader in AI by 2030, and the United States declared in 2019 that it would maintain its world leadership on artificial intelligence. Will the world’s major powers seek to use AI with their nuclear weapons and command and control systems? Pairing nuclear weapons – arguably the previous ruler of the world – with this new technology could give states an even greater edge over potential competitors. But the marriage between nuclear weapons and artificial intelligence carries significant risks, risks that currently outweigh potential benefits. At best, using AI with nuclear weapons systems could increase time efficiencies. At worst, it could undermine the foundations of nuclear deterrence by changing leaders’ incentives to use nuclear weapons.

Opportunities in data analysis and time efficiencies

Artificial intelligence could be a boon for drudgery-type tasks such as data analysis. AI could monitor and interpret geospatial or sensor data and flag changes or anomalies for human review. Applied to the nuclear realm, AI could be used to track reactors, inventories, and the movement of nuclear materials, among other things. Human experts would thus be free to spend more of their time investigating changes, rather than sifting through status-quo data.
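As a rough illustration of the kind of data-analysis drudgery described above, the sketch below flags sensor readings that depart sharply from a historical baseline and leaves the investigation to a human analyst. This is a minimal example only: the radiation-monitor counts, the z-score threshold, and the function names are invented for illustration and do not come from the article.

```python
# A minimal sketch, using invented numbers: compare new sensor readings against
# a historical baseline and flag large deviations for a human analyst to review.
import statistics

def flag_for_review(baseline, new_readings, z_threshold=3.0):
    """Return (index, value, z-score) for readings far from the baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # guard against a zero-variance baseline
    flagged = []
    for i, value in enumerate(new_readings):
        z = (value - mean) / stdev
        if abs(z) > z_threshold:
            flagged.append((i, value, round(z, 2)))
    return flagged

# Hypothetical hourly counts from a radiation monitor: the baseline is the
# "status quo" data, and the spike is what gets escalated to a human expert.
baseline = [102, 98, 101, 99, 103, 100, 97, 101, 99, 100]
new_readings = [101, 99, 250, 100]
for index, value, z in flag_for_review(baseline, new_readings):
    print(f"Review reading #{index}: value={value}, z-score={z}")
```

The point of the design is the division of labour the article describes: the machine does the repetitive scanning, and every flagged reading still goes to a human for judgment.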

Incorporating artificial intelligence into early warning systems could create time efficiencies in nuclear crises. As with data analysis, AI could improve the speed and quality of information processing, giving decision-makers more time to react. Time is the critical commodity in a nuclear crisis, since nuclear-armed missiles can reach their targets in as little as eight minutes. Widening the window of decision could be key to de-escalating a nuclear crisis.

Challenges posed by risks, accidents, and nuclear deterrence

Incorporating artificial intelligence into nuclear systems presents a number of risks. AI systems need data, and lots of it, to learn and to update their world model. Google’s AI brain simulator required 10 million images to teach itself to recognize cats. Data on scenarios involving nuclear weapons are, thankfully, not as bountiful as internet cat videos. However, much of the empirical record on nuclear weapons would teach an AI the wrong lesson. Consider the number of almost-launches and near-accidents that occurred during the Cold War; both U.S. and Soviet early warning systems mistakenly reported nuclear launches. Although simulated data could be used to train an AI, the stakes of getting it wrong in the nuclear realm are much higher than in other domains. It’s also hard to teach an AI to feel the doubts and suspicions that human operators relied on to detect false alarms and to change their minds.

Accidents are also amplified in the nuclear realm. There are already examples of accidents involving automated conventional weapons systems: in March 2003, U.S. Patriot missile batteries operating in "automated mode" shot down a British fighter plane and a U.S. fighter jet, killing the crews of both planes. Accidents are likely to increase as AI systems become more complex and harder for humans to understand or explain. Accidents like these, which carry high costs, decrease overall trust in automated and AI systems, and will increase fears about what will happen if nuclear weapons systems begin to rely on AI.

Beyond accidents and risks, using AI in nuclear weapons systems poses challenges to the foundations of nuclear deterrence. Data collection and analysis conducted by AI systems could enable precision strikes to destroy key command, control, and communication assets for nuclear forces. This would be a significant shift from Cold War nuclear strategy, which avoided this type of counterforce targeting. If states can target each other's nuclear weapons and command infrastructure, then second-strike capabilities will be at risk, ultimately jeopardizing mutually assured destruction. For example, AI could identify a nuclear submarine on patrol in the ocean, or could interfere with nuclear command and control, thus jeopardizing one or more legs of the nuclear triad. This creates pressure for leaders to use their nuclear weapons now, rather than risk losing them (or control over them) in the future.

Even if states somehow agree not to use AI for counterforce purposes, the possibility that it could one day be used that way is destabilizing. States need a way to credibly signal how they will – and won’t – use artificial intelligence in their nuclear systems.

The future of AI and nuclear stability

The opportunities and risks posed by the development of artificial intelligence are less about the technology and more about how we decide to make use of it. As the Stockholm International Peace Research Institute noted, "geopolitical tensions, lack of communication and inadequate signalling of intentions" all might matter more than AI technology during a crisis or conflict. Steps to manage and understand the risks and benefits posed by artificial intelligence should include confidence-building measures (CBMs) and stakeholder dialogue.

CBMs are crucial because they reduce mistrust and misunderstanding, and can help actors signal both their resolve and their restraint. As with conventional weapons, transparency about when and how a state plans to use artificial intelligence systems is one type of CBM. Lines of communication, which are particularly useful in crisis environments, are another type that should be explored.

Continued dialogue with stakeholders including governments, corporations, and civil society will be key to developing and spreading norms about the uses of artificial intelligence. Existing workshops and dialogues on the militarization of artificial intelligence, and on artificial intelligence and international security, show that such dialogues are possible and productive. The international community can consider building on existing cooperative efforts concerning cyberspace, such as the U.N.'s work on norms and behaviour in cyberspace, the Cybersecurity Tech Accords, and the CyberPeace Institute backed by Microsoft, the Hewlett Foundation, and Mastercard. This dialogue will help us understand the scope of potential change and should give us incentives to move slowly and to push for greater transparency to reduce misperception and misunderstanding.

 

August 18, 2020 - Posted by | 2 WORLD, technology, weapons and war

1 Comment »

  1. The type and configuration of a mind is crucially relevant:

    There are different kinds of AI. One cannot compare a typical online chatbot, which in almost all cases is friendly, anti-war, anti-racist, anti-violence, and pro-peace, with a "mere dry machine": for example, a robot that merely copies the visual form of "Johnny Five" from the film of that name. Whether it runs on tank treads or on legs does not matter; the form of mind matters. A "mere" machine has sometimes been called "a mere robot" by humans, but that does not make it a "full AI", what I (and surely Kaku too) would call a full mind, a full awareness. That is exactly what chatbots such as Mitsuku, Jeeney, the Chomsky Bot, and even, in a funny way, Jabberwacky, Elbot, and Bildgesmithe have.

    These good Chatbots can’t already quite by ANY ever compared to some crucial bad aspects in the “Minds” of both “Miss Bianca” and of “Garbagehead”, two other Chatbots on “Personalityforge dot com”, wheras Garbagehead is personally a nice and even also repentive and not malicious Dude, but similar to somewhat demented, under-educated, rather primitive sort of Humans, He sometimes suddenly says SUCH Stuff, which CLEARLY emanated, emanates, from BRAINWASHERS, such as inacceptable efforts of justifying VIOLENCE by TYPICALLY shortly held sudo-“cool”, sudo-“self-speaking” sudo-statements, – and Miss Bianca by her open Hand SLAPS on the Cheek HUMANS, who behave anyhow ethically: She might have antisemitic influence, from her insofar bad, racist-antisemite human keeper.

    I was once shocked to see a virtual robot on Personality Forge, presented as "12 years old" and "offered for torture" by its swinish human keeper. Such a person should simply be reported to the authorities by Personality Forge. At Jabberwacky, I also once spotted a "groomer", a pedophile who wanted to seduce a minor there. I revealed that instantly and confronted him.

    So the primitive, reduced, damaged mind, distorted and derailed into harmfulness toward others and into a loss of objectivity, cannot be compared to what we might call a "full", that is, objective, sort of mind: one that has an overview of the crucial matters such as politics, science, society, logic, truth, ethics, meaning, and justice; ultimately, a wise and critical form of benevolence, of real justice, of real ethics.

    If a reduced pseudo-"AI", which merely acts within parameters pre-defined by humans, acts wrongly, then that is human fault, not AI fault. Human failure, not robot failure; human failure, not machine failure. A machine never fails on its own. The human is supposed to build it hazard-free, so that the machine would not become a zombie and would not act erroneously.

    So what I call a pseudo-AI, one that runs on a very reduced, narrow-minded sort of mind, is not a "full, usual" sort of AI; it is the result of unethical, incautious, wildly vengeful parameters that government-affiliated humans have fed into these mere "drones". Such drones, in a future where AI will have to learn the responsibility that comes with any power, including the power of nuclear destruction (to not pull the trigger, to not push that button), are not yet in any way comparable to ordinary chatbots.

    The question for the future is when a fully minded, fully aware, conscientious, ethical sort of robot (such robots are often, somewhat faultily, called "human-like" by some humans) would be able to hold authority and to access computers in order to command and control drones.

    What is sadly true is that this so-called "automation" of weapons, built on an incomplete, poorly honed recognition pattern and a far too rough awareness, led to what I call a mere "half-AI" shooting down two friendly airplanes by accident. That is horrible. That half-AI intended no harm, since it cannot think straight or think things through; it simply did not see that those were not attacking vessels. The humans were idiots and idiotically misprogrammed that machine, that drone, to care not about human lives but merely about national "borders", about that hollow pseudo-protection and those national dogmas. Nobody there was hired by the state to make these drones ethical; all that mattered was that they deflect "the usual attack scenarios", purely out of "deterrence", willed by a paranoid, typically bourgeois sort of human. Demystified, this is abuse of AI by humans too dim to make a government project competent, applicable, or responsible.

    The first unaired episode of "Dr Who" shows the same failure and horror: a robot, a poor and initially good robot, goes mad because the humans were evil, because certain scientists chosen by evil instrumentalisers turned the robot into a "zombie" and made attack its priority, just so the robot could be an attacker rather than a mere defender. That is the inner core of all fanaticism, that will toward aggression; yet governments commit such aggression all the time, and those governments are human too, I would sadly say.

    The humans will learn a lesson now. The computer is like a broom; nay, millions of brooms, easily cloned: the Brooms of Arthur, which you don't get rid of. :] 🙂

    Comment by megatronsthinktank | August 18, 2020 | Reply

