Thursday, February 2, 2023

Heller: The Concept of 'The Human' in the Critique of Autonomous Weapons

Kevin Jon Heller (Univ. of Copenhagen - Centre for Military Studies) has posted The Concept of 'The Human' in the Critique of Autonomous Weapons. Here's the abstract:

The idea that using “killer robots” in armed conflict is unacceptable because they are not human is at the heart of nearly every critique of autonomous weapons (AWS). Some of those critiques are deontological, such as the claim that the decision to use lethal force requires a combatant to suffer psychologically and risk sacrifice, which is impossible for machines. Other critiques are consequentialist, such as the claim that autonomous weapons will never be able to comply with international humanitarian law (IHL) because machines lack human understanding and the ability to feel compassion.
This article challenges anthropocentric critiques of AWS. Such critiques, whether deontological or consequentialist, are uniformly based on a very specific concept of “the human” who goes to war: namely, the Enlightenment subject who perceives the world accurately, understands rationally, is impervious to negative emotions, and reliably translates thought into action. Decades of research in cognitive psychology indicate, however, that the Enlightenment subject does not exist. On the contrary, human decision-making is profoundly distorted by cognitive and social biases, negative emotions, and physiological limitations — particularly when humans find themselves in dangerous and uncertain situations like combat. Given those flaws, and in light of rapid improvement in sensor and AI technology, it is only a matter of time until autonomous weapons are able to comply with IHL better than human soldiers ever have or ever will.
The article itself is divided into five sections. Section I critiques deontological objections to autonomous weapons. It shows that those objections either wrongly anthropomorphize AWS by assuming they “decide” on targets in a manner similar to humans or are predicated on a romanticized and anachronistic view of war in which most killing takes place face-to-face between combatants of equal status.
Section II addresses the common argument that IHL compliance requires human understanding — particularly the ability to discern the intentions of potential targets. The section demonstrates that such understanding is far less necessary to IHL than AWS critics assume and explains why, in those situations in which judgment is necessary, limits on human decision-making undermine the idea that human soldiers are more likely to comply with IHL than autonomous weapons.
Section III responds to the claim that autonomous weapons will not be able to comply with IHL as well as human soldiers because machines cannot feel compassion. It shows that compassion is irrelevant to IHL compliance, that compassion can lead to negative outcomes in combat as well as positive ones, and that any potential benefits of compassion are far outweighed by the costs of negative emotions such as stress and anger.
Section IV addresses the argument that the non-human nature of autonomous weapons makes it difficult, if not impossible, to hold humans responsible for war crimes that AWS commit. The section demonstrates not only that the problem of “accountability gaps” is significantly overstated, but also that there is no significant difference between human soldiers and autonomous weapons in terms of criminal responsibility.
Finally, Section V explores the implications of the idea that it is highly likely autonomous weapons will eventually be able to comply with IHL as well as — if not better than — human soldiers. It argues that consequentialist critics are not primarily concerned that AWS will be worse soldiers than humans. Instead, their real worry is that AWS will be better ones, because the more humane war becomes, the more difficult it will be to eliminate war itself. This, the section argues, is actually the most powerful argument against autonomous weapons — but one that applies to most of the weapons developed over the past century.