Saturday, April 28, 2012

Anderson & Waxman: Law and Ethics for Robot Soldiers

Kenneth Anderson (American Univ. - Law) & Matthew C. Waxman (Columbia Univ. - Law) have posted Law and Ethics for Robot Soldiers (Policy Review, forthcoming). Here's the abstract:

Lethal autonomous machines will inevitably enter the future battlefield – but they will do so incrementally, one small step at a time. The combination of inevitable and incremental development raises not only complex strategic and operational questions but also profound legal and ethical ones. The inevitability comes from both supply-side and demand-side factors. Advances in sensor and computational technologies will supply “smarter” machines that can be programmed to kill or destroy, while the increasing tempo of military operations and political pressures to protect one’s own personnel and civilian persons and property will demand continuing research, development, and deployment. The process will be incremental because non-lethal robotic systems (already proliferating on the battlefield) can be fitted in their successive generations with both self-defensive and offensive technologies. As lethal systems are initially deployed, they may include humans in the decision-making loop, at least as a fail-safe – but as both the decision-making power of machines and the tempo of operations potentially increase, that human role will likely but slowly diminish.
Recognizing the inevitable but incremental evolution of these technologies is key to addressing the legal and ethical dilemmas associated with them; U.S. policy toward resolving those dilemmas should be built upon these assumptions. The certain yet gradual development and deployment of these systems, as well as the humanitarian advantages created by the precision of some systems, make some proposed responses — such as prohibitory treaties — unworkable as well as ethically questionable. Those features also make it imperative, though, that the United States resist its own impulses toward secrecy and reticence with respect to military technologies, recognizing that the interests those tendencies serve are counterbalanced here by interests in shaping the normative terrain — the contours of international law as well as international expectations about appropriate conduct — on which it and others will operate militarily as technology evolves. Just as development of autonomous weapon systems will be incremental, so too will development of norms about acceptable systems and uses be incremental. The United States must act, however, before international expectations about these technologies harden around the views of those who would impose unrealistic, ineffective or dangerous prohibitions or those who would prefer few or no constraints at all.