
The problem of autonomous weapons systems

Why “stupid” machines matter: Autonomous weapons and shifting norms, Bulletin of the Atomic Scientists, Ingvild Bode and Hendrik Huelss, 13 Oct 17

In August, a group of experts on robotics and artificial intelligence released an open letter to the UN Convention on Certain Conventional Weapons. The well-publicized letter called on the convention “to find a way to protect us all from” the dangers of autonomous weapons systems—and drew attention to a lack of international regulation on autonomous weapons (often understood as weapons that “once activated, can select and engage targets without further human intervention”).

In 2013 the convention added autonomous weapons to the list of weapons it might consider restricting or outlawing. But parties to the convention remain far from agreement on how to define “lethal autonomous weapons systems” or “appropriate human control of autonomous weapons”—a necessary precursor to further discussions on the topic or to a pre-emptive ban of the sort advocated by the Campaign to Stop Killer Robots.

In December of last year, the convention established a Group of Governmental Experts, with a mandate to discuss lethal autonomous weapons systems—but the group’s first meeting has been postponed twice for budgetary reasons. It is now scheduled for next month.

Deliberative processes that might examine autonomous weapons from the perspective of the laws of war—processes, that is, that could result in new regulations—are notoriously sluggish. Meanwhile, autonomous weapons technology is developing apace. Nations such as the United States, China, Russia, South Korea, and the United Kingdom continue to develop autonomous weapons and related dual-use technologies, meaning that deployment of these weapons could become a fait accompli before any pre-emptive ban can be negotiated.

The current debate over autonomous weapons exhibits two important shortcomings. First, though it is important to examine autonomous weapons from the legal and regulatory perspective, doing so can fail to capture the reality that autonomous weapons, and the practices associated with their development and deployment, can alter norms themselves. For example, practices surrounding autonomous weapons can produce new understandings, outside and beyond international law, of when and how using force is appropriate. As Herbert Lin has written in the Bulletin, the unrestricted submarine warfare of World War II undermined agreed-upon norms about the conduct of war; other such examples are not hard to find.

Second, when observers discuss autonomous weapons’ game-changing potential in international relations and security policy, they often overemphasize the technologically sophisticated autonomous weapons of the future. (This tendency is shaped by popular culture’s “Terminator” vision of humanoid monsters and is affected by the lack of a consensus definition of “autonomous weapons” or “autonomy.”) Overemphasizing technologically sophisticated weapons seems to result in a belief that the international community should just wait to see whether “killer robots” indeed become reality. However, no matter how important advanced artificial intelligence will be for future weapons systems, it is “stupid” autonomous weapons that require attention now. 

 (This issue has been discussed, for example, by Noel Sharkey, an emeritus professor of artificial intelligence and robotics at the University of Sheffield—and, in a broader context, by Toby Walsh, a professor of artificial intelligence at the University of New South Wales.)

To sort out these problems, it is helpful to contrast autonomy with mere automation. Drawing on definitions from basic robotics, automated machines can be said to run according to fixed and preprogrammed sequences of action. Autonomous systems, meanwhile, are defined by their ability to adapt: An autonomous device’s “actions are determined by its sensory inputs, rather than where it is in a preprogrammed sequence.” This level of autonomy is easy to achieve—one need only think of robotic vacuum cleaners. But where weapons are concerned, even this level of autonomy contests the idea of appropriate human control. And importantly, unlike the humanoid killer robots of possible future scenarios, this level of autonomy already exists.
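The automation/autonomy distinction can be sketched in a few lines of illustrative Python. This is a minimal toy model, not drawn from any real weapons or robotics system; the sensor and action names are invented for the example:

```python
# An automated machine: a fixed, preprogrammed sequence of actions.
# Every run produces exactly the same behavior.
def automated_run():
    return ["forward", "forward", "turn_left", "forward", "stop"]

# An autonomous machine: each action is chosen from sensory input,
# not from a position in a fixed script (think of a robotic vacuum).
def autonomous_step(obstacle_ahead: bool, dirt_detected: bool) -> str:
    if obstacle_ahead:
        return "turn_left"
    if dirt_detected:
        return "clean_spot"
    return "forward"
```

The automated sequence is identical on every run, so its behavior can be fully reviewed in advance; the autonomous step can behave differently in the same program depending on what the sensors report—which is precisely why even this simple level of autonomy complicates “appropriate human control.”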

It is sometimes presumed that autonomous weapons will demonstrate ethical superiority over humans. Any such superiority is still hypothetical, but autonomous weapons might lack potentially problematic emotions such as fear, anger, or vengefulness. Presumed ethical superiority leads to further procedural arguments for constructing autonomous weapons as “better soldiers” that will outperform humans morally and in terms of compliance with international humanitarian law. If this argument becomes more dominant, the widespread development and deployment of autonomous weapons will become more likely—further escalating the possibility that procedural norms will affect the public and legal norms that underlie international law and notions of legitimacy.

The US military’s pervasive and accelerating deployment of drones, and drones’ centrality in US security policy, show that practices indeed shape norms. Drones have become “preferred” security instruments due to specific rationales based on procedural norms. Autonomous weapons’ versatility, the dual-use character of their main features, and the technological rivalry among major powers qualify them as very important instruments—and this makes their regulation more difficult. Whenever procedural norms prevail over legal and ethical norms, the latter category, unfortunately, is likely to yield or adapt.

To be sure, some types of autonomous weapons might be banned in the future. But practices now being established regarding autonomous weapons are already setting standards about the future use of force. This trend should be monitored much more closely—regardless of whether the Convention on Certain Conventional Weapons, governments, and nongovernmental organizations find common ground in their struggle to define what autonomous weapons are in the first place.


October 14, 2017 - Posted by | 2 WORLD, technology
