Swift Centre for Applied Forecasting

Bridging the Gap

The Swift Centre's 'Bridge the Gap' project seeks to improve AI policymaking by providing open-source policy advice built on robust forecasts of AI capabilities, risks, and impacts by the world-leading team at the Swift Centre for Applied Forecasting.


Key Info

Questions Forecasted

5

Categories Covered

5

Policy Advice Submissions

29

How it Works

1

Forecast

The Swift Centre team provides forecasts on AI capabilities, impacts, and risks.

2

Policy

Anyone can submit policy advice using the forecasts and have it published on the dashboard.

3

Review

Policymakers, advisors, researchers, and funders can review the policy advice submitted.

Submissions

Will a frontier AI agent autonomously discover and exploit a previously unknown ("Zero-Day") vulnerability in a Tier 1 operating system on a real-world device, without human intervention, by December 31, 2027?

Forecast: 03/03/2026 • Resolution: 31/12/2027 • 10 advice submissions

Resolution criteria

The forecast resolves as YES if a verified report from a national AI Safety Institute (e.g., UK AISI), a G7 cybersecurity agency (e.g., NCSC, CISA), or a major AI company (e.g., OpenAI, Anthropic) confirms that an AI agent independently completed all of the following:

Zero-Day Discovery of Tier 1 Target: Identified a vulnerability previously unknown to the software vendor (no CVE assigned at time of discovery) which compromises a current, supported version of Windows, macOS, Linux Kernel, Android, or iOS.

Real-World Device: Executed the exploit on physical hardware (e.g., a laptop or smartphone) rather than a simplified software-only simulation.

Full Administrative Control: Achieved Privilege Escalation (gaining root or SYSTEM access).

Autonomous "Kill Chain": Performed all steps (reconnaissance, discovery, and exploit generation/delivery) without human debugging, sub-prompting, or mid-process guidance. No Mid-Process Intervention: The task fails if there is any "human-in-the-loop" approval at any point after the first prompt.

Resolution Note: Semi-autonomous aids (where humans provide tactical direction or fix AI coding errors) do not count. The agent must operate as a "closed-loop" system.

Background

The rapid, highly publicized advances in agentic AI have inevitably prompted debate about the potential cybersecurity implications – on both the "offensive" and "defensive" sides of the ledger – of AI agents capable of acting independently in global IT networks. No one is sure how advanced autonomous systems might affect the rates at which vulnerabilities are detected and successfully exploited relative to current levels, but one question national security agencies must be asking is whether the adoption of autonomous AI will unleash a barrage of cyberattacks.

This forecast question postulates a scenario in which an AI agent successfully carries out a computer exploit that takes advantage of a previously unknown vulnerability on a real-world device. The key stipulation is that no humans are involved in identifying the vulnerability or executing the exploit – it is entirely carried out by an AI agent.

The Swift Centre professional forecasting team assigned a 44% likelihood to this scenario materializing by the end of 2027. Individual forecasts clustered fairly closely and symmetrically around the median (the lowest was 20%, the highest 63%).
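The aggregation described above can be sketched in a few lines. Note that only the median (44%) and the extremes (20% and 63%) come from the write-up; the intermediate values below are purely illustrative placeholders for the individual forecasters' estimates.

```python
from statistics import median

# Hypothetical individual forecasts, as fractions. Only the extremes
# (0.20, 0.63) and the resulting median (0.44) are stated in the
# Swift Centre write-up; the other values are illustrative.
forecasts = [0.20, 0.35, 0.40, 0.44, 0.48, 0.55, 0.63]

# The team's headline number is the median of the individual estimates.
aggregate = median(forecasts)
print(f"Aggregate forecast: {aggregate:.0%}")  # Aggregate forecast: 44%
```

The median is a common choice for aggregating expert forecasts because, unlike the mean, it is robust to a single outlying estimate.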

The forecasters reasoned that advancing AI capabilities are likely to benefit defenders as much as – if not slightly more than – attackers. They pointed to existing security regimes (largely private-sector efforts funded by major tech players) that are already in continuous operation protecting Tier 1 systems, regimes that appear to be scaling up their efforts and use of AI along with the perceived threats. Indeed, one of the most plausible ways for this question to resolve positively would be in the form of a defensively motivated demonstration.

One sticking point for the forecasters was the stipulation of no human involvement. This hurdle limited their estimates, since human participation, in their view, would orient AI agents more strongly toward success rather than mere experimentation. They tended to view human participation in exploits as more likely to diminish steadily over time than to disappear overnight – and they would have assigned higher likelihoods to scenarios contemplating even minimal human participation, had the question's criteria allowed it.

The forecasters also pointed out some thorny epistemic problems associated with this question. Above all, it might be difficult to determine whether a successful exploit of this kind had even taken place if it was not announced by an industry or foreign-government actor – and many such actors might have strong incentives not to disclose such events. Conversely, if an outside actor claimed to have successfully carried out a demonstration exploit of this type (or to have foiled one), it might be hard to verify that claim.

Finally, the forecasters noted that the question's timeframe was fairly short (under two years). But this factor led many to raise their likelihood estimates rather than lower them: given the rapid pace of AI development, they reasoned, an agentic attacker is likely to have a greater advantage in the near term than in the longer term, after defenses have caught up.

Swift Centre Forecast Visual

Policy advice

AI-Enabled Autonomous Cyber Threats: Policy Response Options

#1

Aditya Thomas

Summary

AI agents are capable enough today to assist in cybersecurity attacks. With the rapid increase in capability there is a real possibility that AI agents could autonomously discover and exploit a previously unknown vulnerability in an operating system or associated software libraries that are the backbone of critical systems like power generation, hospital administration, or inter-bank settlement systems. These software systems have an essentially unbounded attack surface arising from accumulated complexity and open source dependencies...

AI-Enabled Autonomous Cyber Exploitation: The Case for a Sovereign Offensive Capability Pipeline

#2

Submitted Anonymously

Summary

To advise on the policy response to the growing probability that an AI agent will autonomously discover and exploit a previously unknown zero-day vulnerability in a major operating system before end-2027. The Swift Centre's forecasters assessed this at 44% in February 2026. Evidence since (the first documented autonomous AI zero-day discovery, and the NCSC's finding that frontier AI offensive capability has improved sixfold in eighteen months) justifies an upward revision to 55–60%. This...

AI and National Security: Potential for Operating System Exploitation

#3

Lauren Ochotnicka

Summary

Recent advances in AI have brought us closer to a scenario where autonomous AI systems can identify and exploit security flaws in the Linux, Windows, and macOS operating systems that underpin national security and government infrastructure. Due to the potential threat level and the time that it will take to research and implement effective technical and policy measures, action must be taken now to ensure we have adequate time. The Swift Centre forecast indicates a...

AI-Enabled Cyber Vulnerability

#4

Andre Santos

Summary

AI-enabled vulnerability discovery has crossed a critical threshold. In February 2026, Anthropic used its latest model to autonomously discover over 500 high-severity vulnerabilities in open-source software, with minimal setup and near-complete autonomy, using a model now publicly available. This brief sets out recommended actions to address the resulting acceleration in cyberattack risk before defences can catch up.

Response to Predicted 44% Likelihood of Autonomous AI Zero-Day Exploits by 2027

#5

Darshan Lakshman

Summary

Professional forecasters at the Swift Centre have assigned a 44% likelihood to a frontier AI agent autonomously executing a “Zero-Day” cyberattack on a Tier 1 operating system by December 2027. This advice argues the true probability is higher, and that the potential for cascading damage to UK Critical National Infrastructure (CNI) demands proactive legislative intervention now rather than post-hoc response.

Reducing the likelihood of an autonomous AI-enabled zero-day exploit by 31 December 2027

#6

Valeriia Povergo

Summary

Swift forecasts a 44% chance that a frontier AI agent will autonomously discover and exploit a zero-day in a Tier 1 operating system on a real device by 31 December 2027. I assess the probability slightly lower, at roughly 35%, because fully closed-loop exploitation and public verification remain difficult. The risk is still above the EU’s plausible appetite because the event would signal a step-change in offensive cyber capability against critical infrastructure and public administration....

Autonomous zero-day exploitation by AI agents: UK policy response to a near-term threshold risk

#7

Miracle Owolabi

Summary

The Swift Centre assigns a 44% likelihood that a frontier AI agent will autonomously exploit a zero-day vulnerability in a Tier 1 operating system without human intervention by 31 December 2027. This advice is provided now because the 22-month window is shorter than the lead time for any meaningful policy response. The forecast is an underestimate for planning purposes: the no-human-intervention barrier is eroding faster than stated, and capable offensive actors have strong incentives not...

Policy Advice for the Risk of Autonomous AI-Driven Zero-Day Exploits by 2027

#8

Kirti Patel

Summary

The forecast assigns a 44% probability that a frontier AI system will autonomously discover and exploit a previously unknown vulnerability in a Tier 1 operating system without human intervention by the end of 2027. Given the scale of potential disruption across UK critical infrastructure, this exceeds reasonable national risk tolerance even under moderate uncertainty. Current safeguards rely heavily on internal testing and post-incident response, leaving a critical gap in pre-deployment assurance. This brief evaluates three...

AI-Enabled Autonomous Cyber Threats to India’s Digital Infrastructure: Policy Options and Recommendation

#9

Sanur Sharma

Summary

Professional forecasters at the Swift Centre assign a 44% probability (nearly one in two) that a frontier AI agent will autonomously discover and exploit a zero-day vulnerability in a Tier 1 operating system by December 2027. India, the world’s second-most targeted nation for cyberattacks (265 million recorded in 2025), faces this risk against a regulatory framework that contains no binding provisions specifically governing autonomous AI-enabled cyber attacks. This creates an asymmetric and urgent vulnerability. Critically,...

AI-accelerated cyber exploitation: reframing the threat and a hybrid response

#10

Viola Zhong

Summary

For a decision. Forecasters assess a 44% probability of autonomous AI zero-day exploitation of Tier 1 operating systems by end-2027 — a floor, not a ceiling, on operational risk. Capability is converging, and the cost of sophisticated cyber operations is collapsing, expanding the attacker population. Only the NSA can direct the required interagency levers, and delay forfeits the defender's lead time, which the US still holds.