Warring with Machines: Military Applications of Artificial Intelligence and the Relevance of Virtue Ethics

Led by Gregory M. Reichberg

Jan 2020 – Dec 2023

Warring with Machines aims to provide an ethical framework for designing and implementing Artificial Intelligence (AI) in military technologies.

The project's primary research focus is on the people – military personnel throughout the command structure – who serve in combat settings with AI-enabled machines. In a battlespace where machine autonomy is increasingly assuming functions once restricted to human beings, maintaining clear lines of human responsibility is of paramount importance. Clarifying this issue should improve ethical instruction within military training and educational institutions, as well as change how AI developers design their technologies. In turn, this will render ethical guidelines better tailored to the battlefield scenarios military personnel will confront in the future.

The project aims to yield moral guidelines for AI technology use in three settings: kinetic (physical) combat operations, cyber operations, and strategic planning. These guidelines will serve as conceptual pillars for forming policies that guide the design and use of AI-related weapon systems.

Our theoretical framework broadly aligns with virtue ethics, which focuses on inward capabilities – virtues – that empower us to act responsibly amid the challenges of personal and professional life. Warring with Machines will probe how the moral agency of combatants can be enhanced as algorithms become more prevalent in warfare.

The project is funded by a four-year grant from the Research Council of Norway (SAMKUL programme) and involves collaboration between leading national and international research institutions, inter alia the Center for Philosophy and the Sciences at the University of Oslo, the Center for Artificial Intelligence Research at the University of Agder, the Stockdale Center for Ethical Leadership at the US Naval Academy, the Technology Ethics Center at the University of Notre Dame, and the Munich Center for Neurosciences – Brain & Mind at Ludwig Maximilian University.

Project leader

  • Greg Reichberg

Project members at PRIO

  • Henrik Syse
  • Mareile Kaufmann
  • Sigurd Hovd – PhD Researcher
  • Kelly Fisher – Research Assistant

External project members

  • David M. Barnes, US Military Academy West Point
  • Edward Barrett, US Naval Academy
  • Einar Bøhn, Department of Religion, Philosophy, and History (University of Agder)
  • August Cole, Atlantic Council and Marine Corps University
  • James L. Cook, US Air Force Academy
  • Robert H. Latiff, Technology Ethics Center, University of Notre Dame
  • Martin Cook, US Naval War College
  • Ophelia Deroy, Ludwig Maximilian University, Munich
  • Shannon French, Case Western Reserve University
  • Kirsi Helkala, Norwegian Defense University College
  • Don Howard, Technology Ethics Center, University of Notre Dame
  • George Lucas, US Naval Academy
  • Kaushik Roy, Jadavpur University (Kolkata, India)
  • Bruce Swett, Northrop Grumman
  • Frank Pasquale, Brooklyn Law School
  • Zoe Stanley-Lockman, Nanyang Technological University (Singapore)
  • Shannon Vallor, University of Edinburgh
  • Sebastian Watzl, Center for Philosophy and the Sciences (University of Oslo)