Bridging the Gap
The Swift Centre's 'Bridge the Gap' project seeks to improve AI policymaking by providing open-source policy advice built on robust forecasts of AI capabilities, risks, and impacts from the world-leading team at the Swift Centre for Applied Forecasting.
Key Info
Categories Covered: 5
Policy Advice Submissions: 29
How it Works
Forecast
The Swift Centre team provides forecasts on AI capabilities, impacts, and risks.
Policy
Anyone can submit policy advice using the forecasts and have it published on the dashboard.
Review
Policymakers, advisors, researchers, and funders can review the policy advice submitted.
Submissions
Submitted anonymously
For forecast question: By December 31, 2027, will a frontier AI agent autonomously discover and exploit a previously unknown (“zero-day”) vulnerability in a Tier 1 operating system on a real-world device without human intervention?
Advice
To: The Rt Hon Shabana Mahmood MP, Secretary of State for the Home Department
Date: 2026-04-06
Summary
To advise on the policy response to the growing probability that an AI agent will autonomously discover and exploit a previously unknown zero-day vulnerability in a major operating system before end-2027. The Swift Centre’s forecasters assessed this at 44% in February 2026. Evidence since then, including the first documented autonomous AI zero-day discovery and the NCSC’s finding that frontier AI offensive capability has improved sixfold in eighteen months, justifies an upward revision to 55–60%. This is now a more-likely-than-not near-term event.
Options Overview
Option 1: Defensive Acceleration
Option 2: Sovereign Offensive Capability Pipeline (RECOMMENDED)
Option 3: International Norms
Recommendation
Approve Option 2: establish a sovereign offensive cyber capability pipeline that recruits and legally protects independent security researchers to discover and disclose AI-enabled zero-days to the UK Government. This requires: (a) reform of the Computer Misuse Act 1990 to create a statutory defence for good-faith AI security research; (b) a standing NCSC bug bounty paying above black-market rates; and (c) a dedicated cyber talent visa route. Option 1 (defensive acceleration) should proceed concurrently. Option 3 (international norms) should be pursued as a parallel diplomatic track. Timing: immediate. The capability trajectory is measured in months, not years.
Background
The question asks whether an AI agent—acting without any human guidance—will discover and exploit a zero-day in a Tier 1 OS (Windows, macOS, Linux, Android, iOS) on real hardware by 31 December 2027. Resolution requires a fully autonomous “kill chain”: reconnaissance, discovery, exploit generation, and privilege escalation, with no human intervention after the initial prompt. The Swift Centre’s forecasters saw rapid AI capability gains as the primary upward driver, and the no-human-involvement requirement as the key constraint.
Three developments since February 2026 shift the assessment upward. First, in December 2025, the platform pwn.ai disclosed CVE-2025-54322—a maximum-severity zero-day in XSpeeder SD-WAN firmware discovered entirely by autonomous AI agents. pwn.ai described it as the first agent-found, remotely exploitable zero-day published. The target was firmware, not a Tier 1 OS, but the gap is one of target hardness, not fundamental capability. Second, the NCSC’s March 2026 joint assessment with AISI found that Anthropic’s Claude Opus 4.6 completed roughly half of a 32-step enterprise attack simulation, with the cost of a full attempt collapsing to approximately £65. Offensive AI capability improved sixfold in eighteen months. Third, Trend Micro’s ÆSIR platform is autonomously discovering vulnerabilities in AI infrastructure at production scale, though patch-bypass discovery still requires human direction—a residual gap that is narrowing.
The probability that this capability exists by end-2027 is meaningfully higher than the probability it will be confirmed. State actors have every incentive to conceal successful autonomous exploitation. Policy must be calibrated to the capability, not the confirmation. The NCSC warns of a “digital divide” between organisations keeping pace with AI threats and those falling behind. AI will almost certainly compress the window between vulnerability disclosure and exploitation from days to hours.
Meanwhile, the US posture has shifted aggressively: the Department of War’s confrontation with Anthropic over autonomous weapons guardrails, and its designation of the company as a “supply-chain risk” for refusing to remove safety restrictions, signal that Washington will integrate AI offensive capability regardless of safety concerns. The UK cannot be a bystander in this race. The question is not whether to develop sovereign AI offensive cyber capability, but whether to do so deliberately and on terms that serve British interests—or to allow it to develop in an uncontrolled fashion, with the UK’s most capable researchers working for the highest bidder.
The UK’s current legal framework actively undermines its cybersecurity posture. The Computer Misuse Act 1990 criminalises the same good-faith security research that the NCSC depends upon for threat intelligence. Researchers who discover vulnerabilities in UK systems face prosecution under the same statute as the criminals who exploit them. This creates a perverse incentive structure: the most capable independent researchers either avoid UK-relevant work entirely, operate covertly without disclosure, or sell their findings to foreign governments and commercial exploit brokers. The zero-day market is global, liquid, and indifferent to national interest. A British researcher who discovers an autonomous AI exploit today has stronger financial incentives to sell it to Zerodium or a Gulf state intelligence service than to report it to NCSC. This is a policy failure, and it is one the Home Secretary has the power to correct.
Options
Option 1: Defensive Acceleration
Mandate NCSC to establish a national AI-powered vulnerability discovery programme targeting Tier 1 OS deployments across critical national infrastructure. Cost: £50–75m over two years. Implementation: six to nine months, building on existing NCSC infrastructure. Assessment: necessary but insufficient. Strengthens UK defences but does nothing to ensure the most capable offensive researchers work with the UK rather than against it. Does not address the legal framework that currently criminalises the very research the UK needs.
Option 2: Sovereign Offensive Capability Pipeline (RECOMMENDED)
Create a pipeline to recruit, incentivise, and legally protect the world’s best offensive security researchers, including those operating in legal grey zones, to discover and disclose AI-enabled zero-days to the UK Government. Three components:
(a) Reform the Computer Misuse Act 1990. Introduce a statutory defence for good-faith autonomous AI security research, requiring: registration with NCSC; disclosure of findings within 90 days; no exfiltration of personal data; no disruption of live services. The CMA, enacted thirty-six years before the AI era, makes no distinction between a criminal and a security researcher. The UK cybersecurity community has called for reform for years. The current framework drives the most capable researchers toward adversarial states or black markets. This must end.
(b) Sovereign bug bounty at above-market rates. Establish a standing NCSC-administered fund paying $2–3m per qualifying zero-day, above the $500k–$2m black-market range. This is the cheapest form of national defence: a single acquisition costs a fraction of remediating a successful state-sponsored attack on CNI. Fund at £100–150m over three years.
(c) Cyber talent visa route. Create a dedicated immigration pathway for offensive security researchers: 28-day processing, five-year leave to remain, pathway to settlement. The US (DARPA, NSA) and Israel (Unit 8200 alumni network) actively recruit global talent. The UK has no equivalent. Every world-class researcher who chooses London over Fort Meade or Shanghai is a direct strategic gain.
Considerations
Cost: £150–225m over three years.
Risks
Political risk: moderate. The “paying hackers” framing must be pre-empted with a national security narrative.
Risk of inaction: the most capable researchers sell to whoever pays most, and that buyer will not be the United Kingdom.
Option 3: International Norms
Pursue a Five Eyes or G7 agreement on multilateral norms for autonomous AI vulnerability research, modelled on the Wassenaar Arrangement. Cost: low. Timeline: twelve to twenty-four months for a non-binding framework. Assessment: useful for long-term norm-setting but fundamentally mismatched with the threat timeline. Binding power over adversarial states is negligible. Must not delay Options 1 and 2.
Recommendation
The case for Option 2 rests on a single proposition: the capability described in this forecast is coming regardless of what the United Kingdom does. Autonomous AI agents have already discovered critical zero-day vulnerabilities in production systems. Frontier models already complete half of a professional attack chain for £65. The trajectory from “half an attack chain” to “a complete one” is not a question of decades. It is a question of model generations—and those generations are now measured in months.
The forecasters themselves identified the key asymmetry: the no-human-involvement requirement is a constraint for now, but human participation is diminishing steadily, not disappearing overnight. The transition from semi-autonomous to fully autonomous exploitation is a gradient, not a cliff edge. The UK must be positioned on the right side of that gradient before it flattens entirely. Defensive measures alone will not achieve this. The UK must also ensure that the people who build these tools, and the people who discover what these tools can do, have every reason to bring their findings to London rather than to Moscow, Beijing, or the open market.
This is not without precedent. The United States’ Vulnerabilities Equities Process, established under the Obama administration and formalised in 2017, created a structured framework for deciding whether to disclose or retain discovered vulnerabilities. Israel’s Unit 8200 has long operated as a de facto pipeline between military cyber capability and the commercial security sector, producing alumni who have founded over a thousand cybersecurity companies. Both models demonstrate that a deliberate, state-orchestrated relationship with the offensive security community yields compounding strategic returns. The UK currently has no equivalent mechanism. The proposed pipeline fills that gap. The CMA reform provides the legal foundation. The sovereign bug bounty provides the economic incentive. The visa route provides the talent supply. Together, they constitute a coherent system, not three isolated measures.
There are two categories of error available to the Home Secretary. The first is to act decisively now and discover, in retrospect, that the autonomous zero-day capability took slightly longer to materialise than expected—in which case the UK will have built a sovereign offensive capability pipeline that strengthens its cybersecurity posture regardless. The second is to wait for certainty, and discover that the capability materialised while the UK was still deliberating—in which case adversaries will hold an advantage that is difficult, expensive, and potentially impossible to reverse.
The only variable the Home Secretary controls is whether, when this capability materialises, it is wielded by those who defend this country’s critical systems—or by those who would attack them. Every quarter of inaction tilts the balance. The cost of acting too early is modest and recoverable. The cost of acting too late is neither.
Next Steps
Within 30 days: Commission Home Office legal team to draft the CMA statutory defence amendment, in consultation with NCSC, GCHQ, and the CyberUp Campaign.
Within 60 days: Direct NCSC to produce a costed design for the sovereign bug bounty, including market rate analysis and operational security protocols.
Within 90 days: Write jointly with the Secretary of State for DSIT to the Immigration Minister requesting design of the cyber talent visa route, targeting launch no later than Q1 2027.
Concurrent: Commence NCSC procurement for the defensive AI vulnerability discovery programme (Option 1), targeting initial capability by Q4 2026.
