Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS)

Agenda item 5(b)
Characterization of the systems under consideration in order to promote a common understanding on concepts and characteristics relevant to the objectives and purposes of the Convention

Statement
September 22, 2020

The United States continues to support identifying general characteristics of the systems under the GGE’s consideration in order to facilitate the GGE’s understanding of the relevant concepts and issues. The flexibility inherent in this approach of identifying characteristics is important given that scientists and engineers continue to make new technological advances and that our own understanding continues to improve. This discussion could also help delegations better understand what we mean by the terms we are using: some delegations may be using the same term to mean different things, while others may be using different terms to mean the same thing.

In identifying characteristics of LAWS, we must not prejudice future decisions regarding potential outcomes. For example, characteristics should be identified in order to promote common understandings, not with a view towards advancing a particular policy objective, like a ban. Similarly, we must be cautious not to make hasty judgments about the value or likely effects of emerging or future technologies. Frequently, we may change our views of technologies over time as we gain more experience with them.

In discussing the general characteristics of such systems, we must not lose sight of the fact that no matter their level of sophistication or how many autonomous features or functions they have, these weapons systems are tools for human use. Guiding Principle (i) reminds us of this, stating: “In crafting potential policy measures, emerging technologies in the area of lethal autonomous weapons systems should not be anthropomorphized.”

In particular, anthropomorphizing emerging technologies in the area of LAWS can lead to legal and technical misunderstandings that could be detrimental to the efficacy of potential policy measures. From a technical perspective, anthropomorphizing emerging technologies in the area of LAWS can lead to mis-estimating machine capabilities. From a legal perspective, anthropomorphizing emerging technologies in the area of LAWS can obscure the important point that IHL imposes obligations on States, parties to a conflict, and individuals, rather than machines. “Smart” weapons cannot violate IHL any more than “dumb” weapons can. Similarly, machines are not intervening moral agents, and human beings do not escape responsibility for their decisions by using a weapon with autonomous functions. Anthropomorphizing emerging technologies in the area of LAWS could incorrectly suggest a diminished responsibility of human beings simply by the use of emerging technologies in the area of LAWS.

The U.S. Department of Defense policy directive on the use of autonomy in weapon systems establishes definitions of an “autonomous weapon system” and a “semi-autonomous weapon system” for the purposes of that directive. These definitions focus on what we believe to be the most important issue posed by the use of autonomy in weapon systems: namely, that people who employ these weapons can rely on the weapon systems to select and engage targets. We will not repeat the specific definitions today, but, for reference, they are reproduced in the U.S. working paper from November 2017.

In discussing concerns about autonomous weapons, it may be important to consider whether these concerns are fundamentally about the type of weapon system or about how weapon systems are used. For example, consider a missile with automated target recognition capabilities that can select and engage enemy tanks. In one scenario, an operator identifies a specific target and fires the missile at that target. Under the definitions applied by the U.S. military, this is a semi-autonomous weapon system. The same weapon system and capability could, however, be classified as an autonomous weapon system if it is used in a different way. If the operator does not identify a specific tank, but instead fires the weapon to loiter in an area and autonomously select and engage tanks, the weapon is classified as an autonomous weapon in U.S. military practice. The weapon system’s technical characteristics are the same, but how it is to be used changes whether it is classified as autonomous or semi-autonomous.

Some delegations this morning and yesterday have raised concerns about LAWS being inherently unpredictable. In the spirit of the interactive discussion that our Chair has invited, I would ask them to consider whether this concern is based on a characteristic of the weapon system itself or whether it is actually based on assumptions about how those weapon systems would be used. We believe that making progress in our discussions involves developing our common understanding of how emerging technologies in the area of LAWS can be used consistently with IHL, and the conclusions we have proposed in our national commentary on Guiding Principle (a) seek to do that.