Bridging the Gap
The Swift Centre's 'Bridge the Gap' project seeks to improve AI policymaking by providing open-source policy advice built on robust forecasts of AI capabilities, risks, and impacts from the world-leading team at the Swift Centre for Applied Forecasting.
Review forecasts and policy advice
Key Info
Categories Covered: 5
Policy Advice Submissions: 29
How it Works
Forecast
The Swift Centre team provides forecasts on AI capabilities, impacts, and risks.
Policy
Anyone can submit policy advice using the forecasts and have it published on the dashboard.
Review
Policymakers, advisors, researchers, and funders can review the policy advice submitted.
Submissions
By: Submitted Anonymously
For forecast question: By December 31, 2027, will a G20 member state officially integrate 'Human-out-of-the-loop' Lethal Autonomous Weapon Systems (LAWS) into their active military doctrine or operational manuals?
Advice
To: The Rt Hon John Healey MP, Secretary of State for Defence
Date: 2026-04-06
Summary
The Swift Centre forecasts a 24% likelihood that a G20 state will officially integrate lethal autonomous weapon systems (LAWS) into military doctrine by December 2027. I assess this at 35–40%. Integration is already occurring through procurement and operational practice. The Pentagon's recent designation of Anthropic as a supply-chain risk, after the company refused to remove its AI safety guardrails, confirms that this policy space is already being shaped through procurement. This advice presents options for establishing a UK accountability framework before allied doctrinal decisions narrow the UK's choices. A decision is sought.
Options Overview
Option 1: Do nothing. Maintain current policy and review in 12 months.
Option 2: Graduated Autonomy Framework. Tiered accountability standards for autonomous targeting in UK defence procurement.
Option 3: NATO norm-setting. Propose allied autonomous weapons doctrine through existing security structures.
Recommendation
Option 2, with Option 3 as the implementation pathway. Developing the framework domestically first gives the UK a working standard regardless of multilateral progress, and provides a credible basis for leading NATO negotiations.
Background
The Swift Centre’s professional forecasting team identified three obstacles to official LAWS integration within the question’s timeframe: slow military adoption, the absence of a catalysing conflict among major powers, and strong political disincentives for public acknowledgment. They rated Russia as the most likely first mover, followed by the United States.
I assess the probability at 35–40% for three reasons. First, the resolution criteria include “strategic reviews” and “procurement announcements”: Russia’s 2024 CCW Group of Governmental Experts statement, explicitly rejecting “meaningful human control” as an appropriate framework, already functions as a doctrinal position. Second, both sides in the Ukraine conflict are deploying increasingly autonomous drone systems with diminishing human involvement in targeting. Doctrine typically codifies operational reality after the fact. Third, the Anthropic–Hegseth confrontation in February 2026 confirms that the US government treats autonomous targeting as a current operational priority. A sitting Secretary of War demanded the removal of AI safety guardrails; when the company refused, the Pentagon designated it a supply-chain risk.
The UK cannot insulate itself from these developments. As a NATO member closely integrated with US defence systems, the UK will be affected by allied doctrinal decisions regardless of its own position. The current MoD policy requires “appropriate levels of human judgement” in weapons deployment, language ambiguous enough to permit almost any interpretation. Without a specific framework, the UK will absorb autonomous weapons standards set elsewhere.
Options
Option 1: Do Nothing
Maintain the current policy position and review in 12 months or upon a significant change in allied doctrine.
Considerations
No monetary cost. No legislative change required. Consistent with the MoD’s current public posture and preserves flexibility in a rapidly evolving field. Public attention on autonomous weapons remains low in the UK, reducing immediate political pressure to act.
Risks
The accountability gap between operational capability and doctrinal clarity widens. If an autonomous targeting incident involves UK forces or UK-allied systems, no framework exists to determine legal or command responsibility. The UK cedes influence over emerging allied standards to states with less interest in accountability.
Option 2: Graduated Autonomy Framework
Commission Dstl, in coordination with the AI Safety Institute, to develop a Graduated Autonomy Framework (GAF): defined tiers of autonomous decision-making authority, each with corresponding accountability requirements. Target delivery of a scoping paper within 90 days; review framework effectiveness at 12 months. The framework would establish four tiers:
1. Tier 1 (Human-directed): AI recommends targets; a human selects and authorises each engagement.
2. Tier 2 (Human-supervised): AI engages within human-defined parameters; a human monitors with override authority. Requires a real-time audit trail.
3. Tier 3 (Human-on-the-loop): AI operates autonomously within a defined mission envelope; a human can intervene. Requires a refusal capability: the system must decline engagements that fall outside rules of engagement.
4. Tier 4 (Fully autonomous): Reserved for defensive systems (e.g., missile defence). Prohibited for offensive targeting.
This extends existing graduated authority principles for weapons release to autonomous systems. A named human remains legally responsible at every tier. Procurement contracts for autonomous-capable systems would require compliance with the relevant tier specification.
Considerations
Estimated development cost: £2–5M over 12 months (Dstl and AISI staff time, external consultation). Implementable through Defence Policy updates and procurement standards; no primary legislation required. Consistent with the government’s manifesto commitment to responsible AI and the UK’s established leadership following the 2023 AI Safety Summit. The refusal capability requirement encodes a clear principle: a system capable of lethal force must also be capable of declining to use it. A system that can decline unlawful engagements reduces the risk of war crimes liability for commanders and operators. Public opinion broadly favours accountability in military AI.
Risks
The US may view restrictions on autonomous offensive targeting as an interoperability obstacle. This is manageable: Tiers 1–3 are compatible with allied systems, and the Tier 4 restriction applies to offensive use only. Publication of tier specifications may reveal capability assumptions to adversaries; technical annexes can be classified while keeping the policy framework public.
Option 3: International Norm-Setting via NATO and AUKUS
Propose a Graduated Autonomy Protocol through NATO and AUKUS, establishing shared allied standards. The UK would table a proposal at the June 2026 NATO Defence Ministers’ meeting. The UN CCW process has been gridlocked since 2014; allied agreement through existing structures offers a faster path. Review progress at 12 months.
Considerations
Low direct cost (diplomatic effort and staff time). Strong alignment with UK strategic goals within NATO. A NATO-agreed framework would set the global standard.
Risks
NATO operates by consensus, and several member states (notably the US under the current administration, and Turkey) may resist constraints. Pursuing international agreement first risks leaving the UK without a domestic framework while negotiations proceed. If those negotiations stall, the UK has neither national nor international standards.
Recommendation
Option 2, with Option 3 as the implementation pathway. The UK should develop the Graduated Autonomy Framework domestically as an immediate priority, then table it as the substantive basis for a NATO proposal. Arriving at NATO with a developed framework is diplomatically stronger than arriving with a request for joint development, and ensures the UK has operational standards regardless of multilateral pace.
The framework accepts that autonomous targeting capabilities will be militarily useful. It establishes that autonomy requires accountability: audit trails, override mechanisms, refusal capability, and a named human in the chain of responsibility at every tier.
This framework cannot prevent the forecasted event: the forecast resolves if any single G20 state acts, and the UK alone cannot constrain Russia’s or others’ decisions. Its value lies in ensuring the UK is prepared, accountable, and positioned to shape allied norms, regardless of whether another state takes this step within the forecast’s timeframe.
Next Steps
If the Secretary of State agrees:
- Commission Dstl, in coordination with AISI, to produce a GAF scoping paper within 90 days, consulting Service Chiefs and Defence Legal Services.
- Brief the National Security Adviser on implications for Five Eyes and NATO interoperability.
- Instruct the UK Permanent Representative to NATO to signal British interest in tabling autonomous weapons governance at the next Defence Ministers' meeting.
- Engage the Defence Select Committee to build cross-party support before publication.
