Tuesday, September 6, 2016

Lewis, Blum, & Modirzadeh: War-Algorithm Accountability

Dustin A. Lewis (Harvard Univ. - Law), Gabriella Blum (Harvard Univ. - Law), & Naz K. Modirzadeh (Harvard Univ. - Law) have posted War-Algorithm Accountability. Here's the abstract:

In this briefing report, we introduce a new concept — war algorithms — that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems.”

We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict. In introducing this concept, our foundational technological concern is the capability of a constructed system, without further human intervention, to help make and effectuate a “decision” or “choice” of a war algorithm. Distilled, the two core ingredients are an algorithm expressed in computer code and a suitably capable constructed system.

Through that lens, we link international law and related accountability architectures to relevant technologies. We sketch a three-part (non-exhaustive) approach that highlights traditional and unconventional accountability avenues. We focus largely on international law because it is the only normative regime that purports — in key respects but with important caveats — to be both universal and uniform. By not limiting our inquiry only to weapon systems, we take an expansive view, showing how the broad concept of war algorithms might be susceptible to regulation — and how those algorithms might already fit within the existing regulatory system established by international law.