The Ethics of Autonomous Weapons Systems 

Autonomous Weapons Systems (AWS) are defined by the U.S. Department of Defense as “a weapon system(s) that, once activated, can select and engage targets without further intervention by a human operator.” Since the crucial distinguishing mark of human reasoning is the capacity to set ends and goals, AWS suggest for the first time the possibility of eliminating the human operator from the battlefield. The development of AWS technology on a broad scale therefore represents the potential for a transformation in the structure of war that is qualitatively different from previous military technological innovations.

In May 2014, the first Meeting of Experts on Lethal Autonomous Weapons Systems was held at the United Nations in Geneva. The participants recognized the potential of AWS to alter radically the nature of war, as well as the variety of ethical dilemmas such weapons systems raise. Worldwide concern has been growing about the idea of developing weapons systems that take human beings “out of the loop,” though the precise nature of the ethical challenges to developing such systems, and even their possible ethical benefits, have not yet been clearly identified.

The idea of fully autonomous weapons systems raises a host of intersecting philosophical, psychological, and legal issues. For example, it sharply raises the question of whether moral decision-making by human beings involves an intuitive, non-algorithmic capacity that is unlikely to be captured by even the most sophisticated of computers. Is this intuitive moral perceptiveness on the part of human beings ethically desirable? Does the automaticity of a series of actions make individual actions in the series easier to justify, as arguably is the case with the execution of threats in a mutually assured destruction scenario? Or should the legitimate exercise of deadly force always require “meaningful human control”? If the latter is correct, what should be the nature and extent of human oversight over an AWS?

Additional questions arise with regard to the very definition of an AWS. Should the definition focus on the system’s capabilities for autonomous target selection and engagement, or on the human operator’s use of those capabilities? Should the human operator’s pre-engagement intention have a decisive bearing on whether the system counts as an AWS? Furthermore, AWS present a unique challenge to the way legal responsibility in combat should be assessed. If a given AWS is merely applying a set of preprogrammed instructions, then presumably its designers and operators are the ones morally responsible for its behavior. But if the AWS in question is a genuine moral discerner in its own right, that appears to shift the locus of responsibility to the automated system itself. And if this is the case, what are the implications for legal liability? Who, if anyone, should bear legal liability for the decisions the AWS makes?

The purpose of this conference is to address such questions by bringing together distinguished scholars and practitioners from the academy, civil society, government service, and the military to engage in two days of constructive discussion and exploration of the moral and legal challenges posed by Autonomous Weapons Systems.