Ethical Risk Management for AI-Enabled Weapons: A Systems Approach (ERM)

Jan 2025 – Jun 2028

Photo: US Department of Defense Sgt Cory D. Payne
ERM is led by Jovana Davidovic and Greg Reichberg

This interdisciplinary project engages with defense practitioners and policymakers to develop theory-grounded and actionable risk assessment and mitigation strategies for AI-enabled weapons.

There has been a steep increase in reliance on AI tools for warfighting. This is unsurprising, as AI allows for increased speed on the battlefield, better surveillance, decision support, and improved target tracking, identification, and validation. This growing reliance on AI has fueled a push for greater weapon autonomy on the battlefield. These changes to the way wars are fought have raised a range of ethical worries about the use of AI for life-or-death decisions on the battlefield, including concerns about safety, accuracy, transparency, explainability, robustness, brittleness, and the attribution of responsibility. ERM aims to develop actionable recommendations for mitigating the ethical risks that emerge from increased reliance on AI for warfighting.

ERM proceeds from the assumption that the right approach to identifying and mitigating ethical risks is to consider the entire targeting process, as well as the complete lifecycle of the AI tools that play a role in that process. ERM engages with stakeholders across the defense industry, military, government, and international organizations to better understand the tasks in each part of the targeting process, as well as each step in the lifecycle of the algorithms used in targeting. The project focuses on human-machine interactions and considers when and how such interactions can be leveraged to mitigate ethical risks. Mapping the development and use of AI-enabled weapons in this way will sharpen dialogue on a range of key questions about the governance of AI weapons. It likewise provides a framework for devising ethical risk assessment tools and ethical risk management strategies for both industry and regulators.

Outcomes

Mapping the lifecycle of AI-enabled weapons: Through mapping the lifecycle of AI-enabled weapons and the targeting process, ERM delineates a landscape for policy and academic conversations.

Clarity regarding key terms: Academic papers, policy papers, and memos provide clarity around key terms, aiding deliberations around AI weapons governance.

Ethical risk assessment tools: Risk assessment tools for industry, defense contractors, defense department procurement teams, investors, and users are grounded in real-world understanding of how warfighting AI tools and weapons are developed, tested, evaluated, procured, fielded, and used.

Current state of AI governance: ERM maps the current state of AI governance in the defense industry and military.

Guidance for policymakers: ERM provides tools enabling auditability, thereby aiding compliance with ethical and legal standards.
