Wednesday, June 24, 2020

Deeks: Coding the Law of Armed Conflict: First Steps

Ashley Deeks (Univ. of Virginia - Law) has posted Coding the Law of Armed Conflict: First Steps (in The Law of Armed Conflict in 2040, Matthew C. Waxman ed., forthcoming). Here's the abstract:

Machine learning algorithms hold out the promise of making sense of vast quantities of information, detecting patterns, and identifying anomalies better than humans. It seems safe to predict that in the coming decades militaries will rely heavily on predictive algorithms, machine learning, and artificial intelligence in many aspects of warfighting. Military operators, programmers, and lawyers will confront difficult challenges as they try to create decision-support algorithms that are sensitive to the law of armed conflict (LOAC). Lawyers will need to understand the capabilities, requirements, and limits of algorithms, while programmers will need to learn the basics of LOAC and how militaries make LOAC-infused decisions under pressure.

This chapter argues that these actors should pursue a three-step process: (1) identifying the applicable law; (2) crafting and training the algorithm around factors that will produce a recommendation relevant to that legal framework; and (3) interpreting the algorithmic predictions through the lens of that law. The goal should be to produce law-sensitive, data-driven algorithmic recommendations that lawyers and operators can act on. Further, the efforts to create legally sensitive predictive algorithms may alter the kinds of inter-agency processes that states undertake to interpret LOAC rules and stimulate militaries to re-evaluate how they currently undertake their human-only analyses.