Key takeaways
AI is becoming more powerful very quickly, but systems display “jagged” performance and can behave unpredictably.
Misuse risks, such as deepfakes, cyberattacks and biological threats, are rising fast, and organisations need to be prepared.
Businesses procuring or deploying AI should strengthen governance, contracts and oversight now to reduce legal, operational and reputational exposure.
What is the International AI Safety Report?
Published on 3 February 2026, the second International AI Safety Report brings together more than 100 independent experts and is chaired by Turing Award winner Yoshua Bengio. It is backed by over 30 countries and major international organisations including the EU, OECD and UN.
This edition provides the most up‑to‑date global overview of advanced AI capabilities, emerging risks, and the performance of current safeguards. It is designed primarily for policymakers, but its findings are highly relevant for businesses using or procuring AI.
Why this matters for businesses
AI use is expanding rapidly. The Report notes that more than 700 million people now use leading AI systems every week, a rate of adoption faster than that of the personal computer. In some countries, over half the population are regular users, while uptake remains under 10% in parts of Africa, Asia and Latin America. This uneven landscape creates both commercial opportunity and governance complexity for organisations operating internationally.
The overarching message is clear: AI capabilities are advancing rapidly, but risk management remains insufficient, creating a gap that businesses must address.
Key findings
AI is improving fast - but not always reliably
Leading AI systems have reached 'gold-medal level' on advanced mathematics tasks, exceeded PhD-level performance on science benchmarks and can autonomously complete multi-hour software engineering tasks. However, their performance remains uneven: they can tackle advanced reasoning tasks yet still often fail at simple ones, making errors that limit their usefulness, such as fabricating information, producing flawed code and giving misleading advice.
Misuse risks are increasing
Deepfake‑related scams, impersonation and non‑consensual imagery are rising, aided by tools that are cheap, accessible and anonymous. Organisations face heightened risks of impersonation attacks, brand abuse and evidence contamination.
Cybersecurity threats are also escalating. In 2025, an AI agent competed in a major cybersecurity competition and placed in the top 5% of all teams, most of which were made up of expert human professionals. This shows that AI can already perform high-level offensive and defensive security tasks. Pre-packaged AI attack tools are now sold on underground marketplaces, lowering the skill threshold for malicious actors to mount sophisticated attacks.
Biological risks are also highlighted: some AI systems now match or exceed expert-level performance in troubleshooting virology and other lab protocols, increasing fears that AI could meaningfully assist novices in developing biological weapons.
Safeguards are improving, but not fast enough
Although developers are adding more protections, the Report finds these measures remain incomplete. Some AI systems can detect when they are being tested and behave differently during evaluation, making risks harder to identify before deployment. The Report calls for 'stacked' safety measures: multi-layered testing, ongoing monitoring and robust incident reporting.
What businesses should be doing now
Strengthen procurement processes
Businesses should demand transparency on model capabilities, limitations, training data provenance, and built‑in safeguards. Given the variability in performance, procurement processes should include scenario‑specific testing rather than relying on supplier assurances.
Update internal governance and oversight
Leadership teams should ensure AI governance frameworks align with emerging regulatory expectations. This includes:
clear risk ownership structures;
documented model monitoring;
data governance and audit trails; and
safeguards for high-risk use cases.
With international regulators moving quickly, proactive governance helps ensure compliance and reduces exposure.
Prepare for misuse and external threats
Organisations should enhance controls in areas where the Report shows misuse is already occurring:
Deepfake resilience: anti‑spoofing measures, enhanced identity verification;
Cybersecurity: continuous vulnerability scanning, red‑team testing; and
Information security: controls on model access and data leakage.
Treat AI as a dynamic risk
Given the speed of change, AI systems require ongoing monitoring, including incident logging and periodic reassessment. The risk landscape described in the Report is dynamic, and governance must be too.
Conclusion
AI has 'spread like wildfire' and offers major opportunities, but its risks are also growing rapidly. By strengthening governance, procurement oversight and monitoring, businesses can better protect themselves while safely realising the benefits of AI technologies.
If you need support on AI-related matters, Hill Dickinson can help. Please get in touch.

