Expert views on the frontiers of artificial intelligence and conflict
Recent advances in artificial intelligence have the potential to affect many aspects of our lives in significant and widespread ways. Certain types of machine learning systems—the major focus of recent AI developments—are already pervasive, for example in weather prediction, social media services, search engine results and online recommendation systems. Machine learning is also being applied to complex applications, including predictive policing in law enforcement and ‘advice’ for judges when sentencing in criminal justice. Meanwhile, growing resources are being allocated to developing other AI applications. At issue here—in the views of experts expressed below and the blog series that will ensue—are data-driven machine learning algorithms that are emerging, or could emerge, as tools to ‘advise’ or even replace humans in certain tasks and decisions during armed conflict.

It is not only private companies and academia, but States too, that are part of the drive to develop and adopt these technologies to advance their goals. Several experts predict that AI techniques might have diverse, far-reaching impacts on the conduct of hostilities and the protection of civilians, as well as other dimensions of armed conflict. As governments, and especially militaries, seek to incorporate AI to enhance, speed up and transform their decision-making processes and operations, what are the potential implications of its use in conflict settings, and specifically for international humanitarian law (IHL)?

It is this question that led the Harvard Law School Program on International Law and Armed Conflict, the International Committee of the Red Cross Regional Delegation for the United States and Canada, and the Stockton Center for International Law, U.S. Naval War College, to organize and cosponsor a workshop on Artificial Intelligence at the Frontiers of International Law concerning Armed Conflict, held at Harvard Law School in December 2018.

From the premise that more cross-over between IHL and AI experts from different sectors is needed in order to better understand the range of potential risks and benefits of using AI in conflict settings, the organizers convened a group of AI and IHL experts and practitioners from academia, government agencies, international organizations and NGOs to discuss and analyse these issues under the Chatham House Rule.

We asked some of the experts to distill—in under 300 words—some of the key issues and concerns that they believe we aren’t thinking enough about now when it comes to the future of AI and armed conflict.

Expert views

Read below for short excerpts on views from: Brig Gen Pat Huston, Commanding General of the United States Army’s Legal Center and School in Charlottesville, Virginia; Tess Bridgeman, Stanford University; Yuval Shany, Hersch Lauterpacht Chair in International Law of the Hebrew University of Jerusalem; Suresh Venkatasubramanian, University of Utah; Naz K. Modirzadeh & Dustin A. Lewis, Harvard Law School Program on International Law and Armed Conflict; Neil Davison & Netta Goussac, International Committee of the Red Cross Legal Division Arms Unit; and James Kraska, Michael N. Schmitt & Lt. Col. Jeffrey Biller, Stockton Center for International Law, U.S. Naval War College.

Upcoming blog series

In the coming weeks, we will also publish longer blog posts by some of these participants and other specialists stemming from the workshop.

Prof Naz K. Modirzadeh & Dustin A. Lewis

Naz K. Modirzadeh, Founding Director & Dustin A. Lewis, Senior Researcher, Harvard Law School Program on International Law and Armed Conflict

Looking to the future of artificial intelligence and armed conflict, those of us concerned about international law should prioritize (among other things) deeply cultivating our own knowledge of the rapidly changing technologies. And we should make that an ongoing commitment.

There is a perennial question about subject-matter expertise and the law of armed conflict; consider cyber operations, weaponeering and nuclear technology. When it comes to the increasingly impactful and diverse suite of techniques and technologies labeled ‘AI’, the concern takes on a different magnitude and urgency. That’s in no small part because commentators have assessed that AI has the potential to transform armed conflict—and not just the conduct of hostilities.

Yet, it seems that the vast array of IHL scholars and practitioners currently lacks a sufficient understanding of AI. Moreover, many don’t know what they don’t know. That is a dangerous prospect. To better grasp the purported promise and perils of war algorithms, much less seek to meaningfully regulate them through international law, we must be candid about our own technical blind spots.

We came away from the December 2018 workshop—which we co-organized with the ICRC and the Stockton Center—more informed, more curious and more humbled. From our perspective, merely having a technical expert in attendance at future IHL events will not suffice (if it ever did). Indeed, acting as students for half a day forcefully reminded us that IHL scholars and practitioners must be willing to learn as much as they are eager to prescribe.

Prof Suresh Venkatasubramanian

Suresh Venkatasubramanian, Professor at the University of Utah, who studies the use of machine learning in decision-making

At the meeting, we talked about applying methods from criminal justice (targeting and risk assessment tools) in a military context. Given the myriad problems we’ve seen with the deployment of these methods in the civilian setting, I was more than a little perturbed by the idea of applying them in a war zone, where we lack even the deep understanding of society and culture that has so far failed to yield fair risk assessment tools.

At the root of this, I sense a deep-seated resistance to seeing AI as anything other than ‘magic pixie dust that we sprinkle on problems to solve them’. Most of the AI methods discussed are really models trained from data. But there wasn’t a real discussion about where that data might come from, what biases that data might have and how those biases might affect the efficacy of the model.

And this is a real problem—one that can’t be waved away under the heading of ‘the geeks will figure it out’. If there’s anything we’ve learned about the deployment of machine learning in social settings, it’s that seemingly innocuous choices about data representation and model training bake in strong normative statements about the world being modeled, in ways that are non-transparent and unexamined. The only way to understand the implicit norms and policies being encoded is to open the black box and have that policy discussion. And, we must be prepared to realize that sometimes, maybe, AI isn’t the right answer.
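To make this concrete, here is a minimal, purely illustrative sketch (not drawn from any system discussed at the workshop) of how one seemingly innocuous representation choice, treating recorded incidents as a proxy for actual incidents, can encode a normative assumption. The district names, rates and patrol intensities below are invented for illustration:

    # Toy illustration only: two districts with the SAME underlying offence rate,
    # one patrolled twice as heavily. All names and numbers are invented.
    import random

    random.seed(0)

    TRUE_OFFENCE_RATE = 0.05                                   # identical in both districts
    PATROL_INTENSITY = {"district_a": 1.0, "district_b": 2.0}  # hypothetical patrol levels
    POPULATION = 10_000

    def recorded_incidents(district: str) -> int:
        """Incidents that both occur and happen to be observed by a patrol."""
        observed = 0
        for _ in range(POPULATION):
            offence = random.random() < TRUE_OFFENCE_RATE
            seen = random.random() < 0.3 * PATROL_INTENSITY[district]  # heavier patrols see more
            if offence and seen:
                observed += 1
        return observed

    # 'Risk score' defined as recorded incidents per capita: a representation choice
    # that silently equates 'recorded' with 'actual'.
    scores = {d: recorded_incidents(d) / POPULATION for d in PATROL_INTENSITY}
    print(scores)  # district_b looks roughly twice as 'risky' despite identical offending

A model trained on such scores would largely learn the patrol pattern rather than the underlying behaviour, which is exactly the kind of hidden policy choice that only becomes visible once the data pipeline is opened up and examined.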

Brigadier General Pat Huston

Brigadier General Pat Huston, Commanding General of the Army’s Legal Center and School in Charlottesville, Virginia

Artificial Intelligence is all around us. Google searches, Amazon and Netflix recommendations, and Siri and Alexa responses all leverage AI. AI is also common in military applications, ranging from benign ‘smart maintenance’ for trucks to the use of autonomous weapons.

Leveraging AI for offensive autonomous weapons warrants careful analysis. I offer three suggestions—and a few questions—to ensure that this inevitable development is done legally and ethically:

(1) Human Judgment: We must ensure that autonomous weapons allow commanders to exercise appropriate levels of human judgment. In other words, humans should make key decisions and must not unleash autonomous weapons over which they would lose effective control.

(2) Accountability: The use of weapons must always comply with international humanitarian law and commanders must remain responsible for all weapons they employ. These are fundamental principles.

(3) Government cooperation with Industry: The best and brightest AI researchers should insist on legal and ethical conduct for military AI uses and should then work with compliant governments to this end. If they boycott military projects, the void will be filled by researchers who are less capable, less ethical or both, and that would be a recipe for disaster.

I’ll end with some questions, particularly for those who are concerned about autonomous weapons.  What if AI-enhancements make autonomous weapons better than traditional weapons? What if autonomous weapons are more precise and cause less collateral damage? Would we be legally obligated to use them, if available? Would we have an ethical obligation to pursue them?

The views in this blog are his own, and do not necessarily express the view of the U.S. Army, the Department of Defense, or the U.S. Government.

Tess Bridgeman

Tess Bridgeman, Lecturer at Stanford University, affiliate at Stanford’s Center for International Security and Cooperation, Senior Fellow at NYU Law School’s Center on Law and Security, and former Special Assistant to President Obama and Deputy Legal Adviser to the National Security Council

As machine learning and other forms of AI develop, humans building and employing these tools in the armed conflict context should be mindful of the distinction between using these technologies in situations where their capabilities exceed human capacity—such as quickly making basic perception judgments about large amounts of visual content—and situations where making complex, nuanced judgments will continue to require deep human expertise.

Given the dangers of algorithmically-encoded bias and the tremendous human, financial and data resources necessary to build complex AI systems, we should ask two related questions.

First, how can we use AI-driven tools as one component of human decision-making where the obstacles we currently face stem from limits in human capacity that AI tools can measurably and reliably overcome?

Second, bearing in mind the purposes of IHL, how can we use AI to gain a better factual understanding of complex conflict environments in order to facilitate ultimately human-centered decisions? For example, when can AI help provide a more complete picture of the expected military advantage of an attack or expected damage to civilians or civilian objects?

These kinds of inputs could provide useful information to the ultimate human decision-maker, but don’t displace the roles or responsibilities of the lawyer, the attack planner or the combatant in applying IHL.

Prof Yuval Shany

Professor Yuval Shany, Hersch Lauterpacht Chair in International Law at the Law Faculty of the Hebrew University of Jerusalem, Vice President for Research at the Israel Democracy Institute, Chair of the UN Human Rights Committee, and Academic Director of the CyberLaw program at the Cyber Security Research Center of the Hebrew University

The duty to use AI

Much of the discourse surrounding the use of AI on the battlefield revolves around the lawfulness of introducing systems capable of engaging in a sophisticated decision-making process without meaningful human control. A lack of human judgment and compassion might render AI-based means and methods of warfare under-protective of humanitarian interests and less humane. As a result, at the current stage of development of AI technology, such systems cannot serve as fully autonomous decision-makers on the battlefield (see, e.g., Human Rights Committee, General Comment 36 (2018), para. 65). Still, the more advanced AI systems become in terms of their capacity to process large quantities of data in short time spans, the more difficult the question of what constitutes meaningful human control becomes, and there is a risk that such control will gradually become control only pro forma.

Furthermore, any normative position on the use of AI for on-battlefield (and off-battlefield) decisions should ultimately be informed by empirical scientific data on the comparative advantages and disadvantages of human and machine decision-makers. If it turns out at some point in the future that the latter is systematically less prone to mistakes (e.g., fewer false positives) and less likely to be biased (e.g., influenced by fear or hatred) than humans, then—as a minimum—militaries would be required to use AI in decision-making in order to reduce harm to civilians or civilian objects (whether due to erroneous identification of combatants or possible collateral damage). Failure to use such technology may be regarded as a failure to apply a reasonable precaution.

This does not mean that the human decision-maker is redundant. As in other ‘double lock’ systems, even the less informed (but perhaps more compassionate and context-sensitive) human decision-maker should remain in the loop, able to intervene and issue a ‘mission abort’ order in order to correct what he or she perceives as AI mistakes. This approach does not cut in the other direction, though: the less informed human decision-maker should not overrule an AI decision not to attack if the more informed AI identifies the target as a civilian or points to excessive civilian harm.
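As a purely schematic sketch (not a model of any real system, with all names, types and messages invented for illustration), the asymmetric ‘double lock’ described above can be expressed as simple decision logic: the human retains an abort power over an AI recommendation to attack, but cannot overrule an AI assessment that an attack should not proceed.

    # Schematic, hypothetical sketch of the asymmetric 'double lock' described above.
    from dataclasses import dataclass

    @dataclass
    class AIAssessment:
        recommends_attack: bool  # AI concludes the target is lawful and expected harm not excessive
        # In reality this would carry confidence levels, collateral damage estimates, etc.

    def final_decision(ai: AIAssessment, human_approves: bool) -> str:
        # The better-informed AI's 'do not attack' assessment cannot be overruled by the human.
        if not ai.recommends_attack:
            return "no attack (AI assessment stands)"
        # If the AI recommends attack, the human in the loop must actively approve,
        # and retains a 'mission abort' power.
        return "attack authorised by human" if human_approves else "aborted by human"

    print(final_decision(AIAssessment(recommends_attack=True), human_approves=False))   # aborted by human
    print(final_decision(AIAssessment(recommends_attack=False), human_approves=True))   # no attack (AI assessment stands)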

In sum, human control is inevitably less meaningful in situations where there exists a large gap between the data-gathering and analysis capacity of machines and that of humans. Such gaps are likely, eventually, to push humans away from being the principal decision-maker in certain battlefield contexts towards becoming a check and/or balance on AI decision-making power.

Neil Davison & Netta Goussac

Neil Davison, Scientific and Policy Adviser, & Netta Goussac, Legal Adviser, in the Arms Unit of the International Committee of the Red Cross Legal Division

Decisions, decisions

The most significant impact of AI and machine learning in armed conflict will be on decision-making. Be it software that controls a robot; a digital system crunching data and serving up an analysis, prediction, or recommendation for humans to act upon; a cyber-attack initiated by AI; or a machine learning system creating ‘fake’ information, the overriding concern is the reliance by humans on machines when taking decisions.

In armed conflict, many of these decisions will be ‘safety critical’, meaning that the decision may result in death or serious injury, damage to or destruction of property, or may curtail individual freedoms. Preserving the fundamental human role in—and control over—such decisions will be essential to avoid unpredictable consequences for civilians and combatants. Indeed, human involvement in decision-making is necessary to ensure compliance with international humanitarian rules governing human behaviour in warfare and to preserve a measure of humanity in conflict.

This means adapting technology, and the rules that govern it, to fit humans—not the other way around. Safeguards will be needed to allow the humans ‘in-the-loop’ to fulfil their decision-making responsibilities—both legal and ethical. If this means slowing things down so that humans can meaningfully play their role, so be it. The alternative approach—where the use of AI prevents humans from fulfilling their responsibilities—will not end well.

Prof James Kraska, Prof Michael N. Schmitt & Lt Col Jeffrey Biller

James Kraska, Charles H. Stockton Professor of International Maritime Law; Michael N. Schmitt, Howard S. Levie Professor of Law and Armed Conflict; and Lt. Col. Jeffrey Biller, Military Professor, Stockton Center for International Law, U.S. Naval War College

Legal research on military uses of artificial intelligence (AI) tends to focus on the difficult questions related to autonomous targeting.  While the legal and ethical concerns of separating humans from the decision-making process are justified, they do not reflect the more immediate realities of technologies utilizing AI. These technologies use AI to aid the human decision-making process, rather than replace it.  While these uses of AI do not receive the same level of scrutiny as their targeting counterparts, significant legal and policy questions remain.

At the recent Artificial Intelligence workshop at Harvard Law School, the use of AI in detention decisions was discussed. Such technologies currently exist and are used for purposes such as aiding judges in making sentencing decisions, and they could be useful in military detention operations. Had AI technologies been available to assist, for example, coalition forces in Iraq in 2007—where they were processing over 26,000 detainees—those detainees might have been processed more efficiently, with fewer human resources and fewer errors. If AI could be used to immediately dismiss the large number of individuals with no valid reason for detention, available manpower could be focused on the smaller number of more difficult cases where continued detention may be warranted.

There are possible issues, however. As AI experts relayed during the workshop, the use of algorithms in decision-making is only as good as the data on which the algorithm bases its recommendations. If biased data is initially entered into the system, not only does it get imprinted into future recommendations, but it can also expand in scope as those biased recommendations are fed back into the system.
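The following is a toy sketch of that feedback dynamic. The group names, rates and allocation rule are invented for illustration and are not drawn from the workshop or from any real system; the point is only that an initial recording bias can compound when a model's own outputs shape the data it is retrained on:

    # Toy simulation of the feedback loop described above. Both groups have the
    # SAME true incident rate; group_b starts with a slightly inflated count
    # because of biased historical records. All values are invented.
    TRUE_RATE = 0.10
    counts = {"group_a": 100, "group_b": 110}   # biased starting data
    BASE_CHECKS = 50                            # checks every group receives each step
    EXTRA_CHECKS = 100                          # extra scrutiny steered by the model

    for step in range(5):
        # The model recommends extra scrutiny for whichever group looks 'riskier'
        # on the recorded counts...
        flagged = max(counts, key=counts.get)
        for g in counts:
            checks = BASE_CHECKS + (EXTRA_CHECKS if g == flagged else 0)
            # ...and more checks produce more recorded incidents, which are fed
            # straight back into the data the model learns from.
            counts[g] += round(checks * TRUE_RATE)
        print(step, dict(counts))
    # group_b's recorded count pulls further ahead each step, even though the
    # underlying rates are identical: the initial bias compounds.

Even in this deliberately simple setting, the group that starts with slightly inflated records attracts ever more scrutiny and falls ever further behind in the data, despite identical underlying behaviour.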

Additional legal research and policy discussion in this area will help ensure that initial uses of AI in military operations are conducted in a manner that fulfills the overall aims of international humanitarian law.

The views in this blog are their own, and do not necessarily express a view of the U.S. Naval War College, the Department of Defense or the U.S. Government.

DISCLAIMER: Posts and discussion on the Humanitarian Law & Policy blog may not be interpreted as positioning the ICRC in any way, nor does the blog’s content amount to formal policy or doctrine, unless specifically indicated.


 
