Writing in the Financial Times, John Thornhill made an interesting point about intelligence, in the sense of competition between nations and their national interests: a key reason the West won the Cold War was that democracies are better at processing information. In short, lackeys tell their autocrat masters what they want to hear, not what they actually know. So how can democracies retain that edge over autocracies? In a previous age we needed spies, but now we need SpyGPT.
Back in 2023, for reasons that are too complicated to go into, but related to one of my academic positions, I was tasked with preparing a briefing note on the impact of recent developments in AI on open-source intelligence (OSINT) for the Defence Data Research Centre (DDRC). The DDRC is a UK research consortium that focuses on improving how defence organisations use data, especially for artificial intelligence (AI) and data science applications. As a centre of excellence for defence data research, it aims to tackle both the technical and cultural barriers that stop defence data being exploited effectively. Its output is intended to benefit not just defence but also the wider UK economy, by improving data-driven innovation practices that can transfer to other sectors.
OSINT, which is what John Thornhill wrote about, is the gathering and analysis of open data sources to support decision making. While you can treat it as a category in its own right, it is useful to think of it as a component of the intelligence gathered in other areas, as shown in the picture below. This perspective is based on the RAND work on Second Generation OSINT, but I amended the RAND model to pull out cyberintelligence as a separate category, covering intelligence activities focused on the collection, validation, exploitation and dissemination of information concerning the threat posed by an adversary in the virtual world.
*Picture*
(There’s nothing secret about this; you won’t be shot for copying and pasting it. I spent some of the early years of my career working for the government, armed forces and NATO, so I am perfectly well-acquainted with the rules.)
Developing the briefing note was both interesting and challenging. Interesting, because there is already a great body of work on AI in intelligence and I had to get familiar with it quickly in order to structure the briefing note; challenging, because the rules around the briefing note were very tight. Remember the old adage attributed to Pascal: “I’m sorry I wrote you such a long letter, I didn’t have time to write you a short one.”
(As it happens, I ended up presenting the briefing in person to the CTO of the CIA! Life really does take some strange turns sometimes.)
In cyberwar, as in business in general, the march of AI is changing both the weapons and the targets. As Bruce Schneier puts it:
> The fundamental problem is that AI must compress reality into model-legible forms. In this setting, adversaries can exploit the compression. They don’t have to attack the territory; they can attack the map.

From: Agentic AI’s OODA Loop Problem – Schneier on Security.
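To make the map/territory point concrete, here is a toy sketch of my own (not from Schneier's post): a naive OSINT keyword filter compresses raw text into lowercase tokens, and an adversary who understands that compression step can slip past it without changing the underlying message at all. The watchlist and report texts are invented for illustration.

```python
# The "map": compress free text into lowercase ASCII-ish tokens,
# then match those tokens against a watchlist.
WATCHLIST = {"missile"}

def flag_report(text: str) -> bool:
    """Flag a report if any normalised token appears on the watchlist."""
    tokens = (token.strip(".,").lower() for token in text.split())
    return any(token in WATCHLIST for token in tokens)

honest = "Satellite imagery shows a missile convoy near the border."
# '\u0455' is Cyrillic 's', visually near-identical to Latin 's' for a
# human analyst -- the territory is unchanged, only the map is attacked.
evasive = "Satellite imagery shows a mi\u0455\u0455ile convoy near the border."

print(flag_report(honest))   # True: map and territory agree
print(flag_report(evasive))  # False: same territory, different map
```

The adversary never touches reality (the convoy is still there); they only exploit the lossy compression the filter relies on.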
> AI systems themselves are becoming targets. As organizations embed AI across products, operations, and workflows, their AI systems have emerged as a new class of assets requiring protection. Organizations need to protect the integrity of their AI models; training data, interaction, and prompting interfaces; and agentic tools.

From: AI Is Raising the Stakes in Cybersecurity | BCG.
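One of those new asset classes, agentic tools, lends itself to a minimal defensive sketch. The allowlist, tool names and argument limits below are all my own invented illustration (not BCG's recommendation): before executing any model-proposed action, validate it against an explicit allowlist rather than trusting the model's output.

```python
# Hypothetical allowlist of tools an agent may invoke, with a crude
# per-tool constraint on how many arguments a call may carry.
ALLOWED_TOOLS = {
    "search_news": {"max_args": 1},
    "summarise": {"max_args": 1},
}

def guard_tool_call(tool: str, args: list[str]) -> bool:
    """Permit a model-proposed tool call only if the tool is allowlisted
    and the call respects that tool's argument limit."""
    spec = ALLOWED_TOOLS.get(tool)
    if spec is None:
        return False  # unknown tool: reject, never execute
    return len(args) <= spec["max_args"]

print(guard_tool_call("search_news", ["NATO exercises"]))  # True
print(guard_tool_call("delete_files", ["/"]))              # False: not allowlisted
```

The design choice is deny-by-default: the model's suggestion is treated as untrusted input, and anything outside the declared surface is refused.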
In short: adversaries will switch from attacking the territory to attacking the map.
Broadly speaking, my conclusions at the time were that