Bridging the Gap
The Swift Centre's 'Bridge the Gap' project seeks to improve AI policy making by providing open-source policy advice built upon robust forecasts of AI capabilities, risks, and impacts by the world-leading team at the Swift Centre for Applied Forecasting.
Key Info
Categories Covered: 5
Policy Advice Submissions: 29
How it Works
Forecast
The Swift Centre team provides forecasts on AI capabilities, impacts, and risks.
Policy
Anyone can submit policy advice using the forecasts and have it published on the dashboard.
Review
Policymakers, advisors, researchers, and funders can review the policy advice submitted.
Submissions
Submitted anonymously
For forecast question: By December 31st 2028, will a G7 government or the WHO officially attribute a biosafety breach or an outbreak that kills at least 2 people to the use of a Large Language Model (LLM)?
Advice
To: UK Secretary of State for Science, Innovation & Technology
Date: 2026-03-30
Summary
This advice is provided to aid decision-making in relation to the relatively unlikely but high-risk scenario wherein, by year-end 2028, a biosafety breach or outbreak that kills at least two people is officially attributed to the use of an LLM. It responds to the Swift Centre's recent evaluation of this event (3 March 2026), which estimated a 4.7% likelihood.
Options Overview
Option 1: Actively monitor the nature and level of threat
Option 2: AI sector to report hazardous user behaviour
Option 3: Explore LLM knowledge and advice using white-hat prompting
Option 4: Provide public information and support to safeguard individuals
Recommendation
Option 1 (Monitoring) is recommended due to its relatively low cost and ease of implementation, and because the intelligence gathered will be useful in informing both the UK’s AI Strategy and future interventions to adequately address the scenario.
Background
This advice is based on The Swift Centre forecast dated 3rd March 2026, which estimated a 4.7% likelihood that by December 31st 2028 a G7 government, or the WHO, will officially attribute a biosafety breach or outbreak that kills at least two people to the use of an LLM. This scenario is highly pertinent to your Office’s goals and responsibilities, including leading the UK’s AI strategy and technology regulation.
This relatively low estimate is predominantly underpinned by 1) the requirement of official attribution, 2) the assumption that barriers to bioterrorism reflect a lack of resources rather than a lack of knowledge, and 3) the limited integration of LLMs into lab protocols, which reduces the likelihood of breach scenarios. Due to assumption 2, the Swift forecast reasons that the most likely outbreak pathway would involve a common pathogen or toxin disseminated at scale via physical infrastructure. Due to assumption 3, the proposed options focus on pathways whereby LLMs assist with hazardous ideation and execution, as opposed to accidental lab-leak pathways.
As fatalities are involved, this is a high-risk scenario, so even a relatively low probability likely necessitates action. Furthermore, 4.7% is likely an underestimate, as the forecast has not accounted for potential self-poisoning pathways that leverage psychosocial factors to circumvent physical infrastructure access requirements. These include LLMs combined with wider digital infrastructure (e.g. social media and deepfake technologies) providing avenues for large-scale hazardous misinformation (such as encouraging the consumption of hazardous foods, with raw milk alone linked to frequent outbreaks), and pathways involving LLM-induced delusions (to which multiple fatalities have already been attributed).
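To make the scale of this trade-off concrete, the short Python sketch below works through the expected-impact arithmetic implied above; the fatality figures are purely hypothetical assumptions chosen for illustration and are not part of the Swift Centre forecast, which specifies only a minimum of two deaths.

# Illustrative sketch only: the fatality figures are hypothetical
# assumptions for reasoning about scale, not forecast outputs.
forecast_probability = 0.047  # Swift Centre estimate, 3 March 2026

# The resolution criterion specifies only "at least two" deaths, so a
# range of hypothetical incident sizes is considered.
for deaths in (2, 10, 50):
    expected_deaths = forecast_probability * deaths
    print(f"{deaths}-death incident -> expected contribution of "
          f"{expected_deaths:.2f} deaths over the forecast window")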
Options
Option 1: Monitoring
This is a watch-and-wait strategy wherein the nature and level of the threat are monitored and regularly reviewed. A three-month periodic review is recommended given the high-risk scenario and the fast-moving nature of the technology. It is recommended to monitor hazardous LLM usage, including any involvement in hazardous misinformation and hazardous consequences of LLM-induced delusions. In addition, monitoring should include probing models as they are released to ascertain the ease of accessing hazardous information in relation to biosafety (i.e. ideation and execution).
Considerations
The key benefits of this option are its relatively low cost and ease of implementation. Furthermore, the intelligence gathered will be useful in informing future strategies, including where to target any legislative changes and public information campaigns, as well as the UK’s AI Strategy and how best to regulate AI technologies. This option may thus serve to reduce the likelihood of the event over time.
Risks
The main risk associated with this option is that it does nothing in itself to prevent the incident from occurring. However, given the relatively low estimate for this scenario, this risk remains low.
Option 2: Collaboration with AI companies
This strategy involves requesting that AI companies monitor potentially hazardous user behaviour in relation to biosafety. This includes identifying and reporting any users prompting an LLM to generate content related to hazardous materials and/or infrastructure vulnerabilities.
Considerations
The main benefit of this strategy is that it is relatively low-cost to government, as monitoring costs would be incurred by the private sector. The strategy could also form an important safety strand of the UK’s AI Strategy, built on collaboration with AI companies to achieve user monitoring. Such collaboration is likely to have a positive impact on public opinion.
Risks
The premise that it is possible to identify users engaging with LLMs in this way is considerably flawed. LLM technology is openly accessible without identification. Even if major LLM companies were forced to require verification (much as centralised cryptocurrency exchanges now do), users wishing to use LLMs in this way could instead access alternative models. Thus, the main risk of this option is that it still fails to prevent an incident from occurring.
Option 3: White-hat prompting
This strategy is focused on disrupting the key pathway identified by Swift: a common pathogen or toxin disseminated at scale via physical infrastructure. The strategy involves using white-hat testing to explore LLM knowledge and advice about UK infrastructure vulnerable to attack (e.g. accessible food chain and water system sites). Any identified vulnerabilities would be reported to the relevant company, body, or industry for validation and mitigation. This enables likely routes to dissemination to be identified and addressed prior to any attack, thus reducing the chance of the event occurring.
Considerations
A key drawback of this strategy is its relatively high cost, requiring funding to carry out the white-hat testing and further funding to address any threats; the latter could be substantial, as changes to infrastructure would likely be required. Costs may be passed on to the private sector where relevant (i.e. for privately owned infrastructure); however, this would likely require legislative changes to enforce. Furthermore, the strategy would need to be repeated with relative frequency due to rapidly evolving LLM knowledge and behaviour, further increasing costs.
Risks
Option 4: Safeguarding individuals
This strategy is focused on disrupting the self-poisoning pathways identified herein. This involves public information campaigns to raise awareness of potential LLM-enabled misinformation and of LLM-induced delusions. In terms of LLM-induced delusions, this may include raising awareness of the dangers of building relationships with LLMs, understanding how LLMs do not reflect reality but instead reinforce individual realities, signs to look for in loved ones, and so on. In terms of LLM-enabled misinformation, this will likely include awareness of the harms of raw foods, of commonly found poisonous plants and fungi, of the importance of vaccination programmes, and so on. In tandem, a helpline or organisation could be set up offering support for people concerned about their own or others’ LLM usage, and offering opportunities to report AI-mediated health misinformation; such reports could in turn inform the public information campaigns.
Considerations
This option will likely serve to reduce the likelihood of self-poisoning events over time. It may also help reduce physical infrastructure dissemination pathways, by reducing the chance of incidents in which an LLM supplies the hazardous idea to the user. It is a relatively low-cost strategy and requires no legislative changes and little to no collaboration with technology companies. Public opinion on raising awareness about LLM dangers is likely to be positive, particularly in light of fatal LLM-induced delusion events being increasingly reported in the press.
Risks
There is a political risk in highlighting these dangers while not actively working with AI companies to reduce their occurrence: the public may grow increasingly concerned about a lack of government intervention.
Recommendation
Given the identified trade-offs, Option 1 (Monitoring) is recommended.
Option 2 (collaboration with AI companies) is ruled out as it will likely have little to no impact on the likelihood of the event occurring. Option 3 (white-hat prompting) is ruled out due to high economic costs for what is a relatively unlikely event. Option 4 (safeguarding individuals) is a viable alternative to Option 1. However, Option 1 is recommended due in particular to its low cost and ease of implementation, in addition to the intelligence gathered being useful in informing both the UK’s AI Strategy and any future interventions to address this scenario.
Next Steps
If you agree with this recommendation, we will draft an evidence-based monitoring framework and work with your Office to refine and implement it.
