Military robot. Photo: US Department of Defense Sgt Cory D. Payne

What kinds of human involvement are necessary in the use of AI-enabled weapons systems – and why?

In a new article in Nature Machine Intelligence, PRIO Senior Researcher Jovana Davidovic argues that most policy proposals for managing the risks of AI in warfare rely on vague calls for 'meaningful human control' or 'appropriate human judgment'. The article forms part of the ongoing PRIO project 'Ethical Risk Management for AI-Enabled Weapons: A Systems Approach (ERM)', which examines how emerging technologies can be governed responsibly in military contexts.

Davidovic warns that without a clear taxonomy of the purposes, types and targets of human engagement, such policy proposals risk remaining aspirational rather than actionable.

‘We cannot build effective governance for warfighting AI unless we are explicit about why we want humans involved, what kind of involvement we seek, and what exactly our policies are meant to govern,’ Davidovic writes.

By distinguishing between purposes, types and targets of human engagement, Davidovic’s framework clarifies how institutional and technical design decisions – such as where to place humans in the operational loop, what information they need, and what kind of oversight structures are appropriate – can be matched to specific ethical and operational goals.

Davidovic's contribution emphasizes the need for shared conceptual tools to navigate the complex ethical questions of autonomy, accountability and human oversight in warfighting contexts.

The article builds directly on conversations from the Artificial Intelligence in Security and Ethics (AISE) 2025 Conference, hosted by the United Nations Institute for Disarmament Research (UNIDIR) in Geneva earlier this year. The conference brought together policymakers, researchers and military experts to explore the ethical and legal implications of integrating AI into national and international security frameworks.

The Nature Machine Intelligence piece underscores PRIO’s growing role in global debates on responsible AI governance, connecting philosophical analysis to concrete policy frameworks.

‘This work contributes to the larger effort of ensuring that as AI transforms the character of warfare, human responsibility, judgment and dignity remain central,’ Davidovic notes.

Read the full Comment in Nature Machine Intelligence (2025).