Risk Assessment 2025: MI5 Targets Potential Harm from AI Evasion and Lack of Human Control
Description
British spies have officially begun work on tackling the potential risk posed by rogue artificial intelligence (AI) systems. Security Service director general Sir Ken McCallum announced the initiative during his annual speech at the service's Thames House headquarters.
Sir Ken insisted that while he is not forecasting "Hollywood movie scenarios", intelligence agencies must actively consider these risks. He described himself as "on the whole, a tech optimist who sees AI bringing real benefits", but stressed that it would be "reckless" to ignore AI's potential to cause harm.
The focus is on the next frontier: potential future risks arising from non-human, autonomous AI systems that may successfully evade both human oversight and control. The Director General noted that MI5 has spent over a century doing ingenious things to out-innovate human adversaries, and must now scope out what defending the realm will need to look like in the years ahead.
This serious consideration of future risk is being undertaken by MI5, GCHQ, and the UK’s ground-breaking AI Security Institute.
More immediate examples of AI misuse have already surfaced: a judge recently ruled that an immigration barrister had used AI tools, such as ChatGPT, to prepare legal research. This resulted in the barrister citing cases that were "entirely fictitious" in an asylum appeal, wasting court time with "wholly irrelevant" submissions.