Swift Centre for Applied Forecasting

Bridging the Gap

The Swift Centre's 'Bridge the Gap' project seeks to improve AI policymaking by providing open-source policy advice built on robust forecasts of AI capabilities, risks, and impacts produced by the Swift Centre's world-leading forecasting team.

Review forecasts and policy advice
5 forecasts • 1 policy advice submission

Key Info

Questions Forecasted

5

Categories Covered

5

Policy Advice Submissions

1

How it Works

1

Forecast

The Swift Centre team provides forecasts on AI capabilities, impacts, and risks.

2

Policy

Anyone can submit policy advice using the forecasts and have it published on the dashboard.

3

Review

Policymakers, advisors, researchers, and funders can review the policy advice submitted.

Submissions

By December 31, 2027, will a G20 member state officially integrate 'Human-out-of-the-loop' Lethal Autonomous Weapon Systems (LAWS) into their active military doctrine or operational manuals?

Forecast: 03/03/2026 • Resolution: 31/12/2027 • 0 advice submissions
Review resolution criteria

Human-out-of-the-loop: A system that selects and engages targets independently based on sensors and algorithms.

Included Systems: Systems that possess "Target Recognition" AI, where the machine can identify targets and make the kill decision based on a human-specified objective (e.g. help us weaken 'x' faction in this area).

Excluded Systems: Defensive systems (e.g., Aegis, C-RAM) that are essentially automated shields, or drones that use AI only for flight navigation and terminal guidance toward a human-selected target.

Source: Official publications from a National Defence Ministry (e.g., UK MoD, US Department of War/DoD, Russia's MoD) or any branch of a G20 nation's armed forces (Army, Navy, Air Force).

Document Type: Strategic reviews, field manuals, or official procurement announcements stating that fully autonomous targeting is an authorised operational procedure.

Background

The same few years that saw the boom in generative AI also saw drone warfare evolve into a key dimension of a full-scale conflict between Russia and Ukraine. The prospect of fully autonomous weapons systems, and decisions about the extent to which they should be developed, deployed, and acknowledged, now confront uniformed and civilian military planners across the developed world, and such systems may soon become available to less-developed nations as well.

This question has already broken into the public domain with the recent public standoff between Anthropic and the US Department of War. Secretary Pete Hegseth issued an explicit request for Anthropic to remove safety guardrails that prevent Claude from being used in fully autonomous lethal systems. Anthropic CEO Dario Amodei refused, stating that today's AI is "not reliable enough to power fully autonomous weapons" and that doing so would put American warfighters and civilians at risk. Hours later, the Pentagon signed a substitute agreement with OpenAI that seemingly retained the guardrails that had been at issue in the agreement with Anthropic.

This dispute highlights the “Rubicon” of military doctrine. While many G20 states, including the UK and US, have historically maintained a meaningful “human control” requirement, the pressure to drop such restrictions is mounting. The Pentagon’s move to designate Anthropic a “supply-chain risk” in retaliation for its refusal – an aggressive step that could have a destructive impact on the company’s business – suggests that the “human-in-the-loop” policy is increasingly viewed as an operational bottleneck.

The Swift Centre forecasts summarized here predated these dramatic events by a few days, but we believe that the fundamentals of our forecasts hold.

The forecasting question folds any debate over the technical viability of LAWS into the broader issue of their official adoption by militaries among the G20 countries. An important stipulation is not merely that the technology exist, but that it become an explicit element of a G20 country's military power. The resolution criteria therefore require that autonomous systems be not only developed, or perhaps covertly deployed, but openly integrated into a country's military doctrine.

A country’s willingness to take this last step will obviously be influenced by its specific capabilities, its geostrategic situation, and its domestic politics and public opinion.

The Swift Centre forecasters pegged the likelihood of a positive resolution of this question at 24%. Individual forecasts were fairly tightly clustered around the median, ranging from 15% to 45%, with no conspicuous outliers.

The forecasters focused their analysis on the three main states they believed capable of deploying LAWS at scale within the stated timeframe: the US, China, and Russia. On technical viability the team was bullish, estimating that some forms of LAWS would likely be viable for these three countries in the near future – if they are not already. In addition, smaller G20 militaries could have their own strong strategic incentives to adopt LAWS soon, but they may lack the capacity to do so within the question’s timeframe.

But the forecasters also saw three significant obstacles to a positive resolution:

the typically slow pace of testing and adoption of new technologies by modern militaries, in the absence of exigent circumstances such as those created by an actual conflict;

the likely absence of an actual conflict involving these three nations within the next two years that could trigger a more rapid adoption of LAWS (the Russia-Ukraine conflict notwithstanding); and

the various political, strategic and legal disincentives for nations to explicitly acknowledge their adoption of LAWS in published military doctrine.

Among these obstacles, official acknowledgement was clearly the key sticking point for the forecasters. They saw very few incentives for any nation to explicitly endorse its adoption of autonomous weapons within the next two years, but they saw numerous disincentives, such as inevitable public blowback, geopolitical retaliation, and potential legal liability. They also noted that maintaining “strategic ambiguity” about military doctrine while LAWS are being developed covertly could be perceived by leaders as very advantageous in the current geostrategic situation – particularly for China, which might be seeking to hedge against US development of these capabilities.

Even without any historical base rate to start from, the forecasters converged around a likelihood of slightly over one-in-five. The odds were not even lower because, on the one hand, the weapons technologies themselves are likely to be imminently available, and on the other, there is a non-trivial chance of a serious military conflict erupting that could spur the combatants to use LAWS and change the incentives around their explicit adoption. Of the three key countries, Russia was viewed as the most likely to be the first to officially incorporate LAWS into its doctrine, the US (at least under the current administration) as the second most likely, and China (which seems to benefit the most from "strategic ambiguity") as the least likely.

Swift Centre Forecast Visual

Policy advice