DiscoverConversations on Strategy PodcastConversations on Strategy Podcast – Ep 60 – C. Anthony Pfaff, Brennan Deveraux, Sarah Lohmann, Christopher Lowrance, Thomas W. Spahr, and Gábor Nyáry – The Weaponization of AI: The Next Stage of Terrorism and Warfare
Conversations on Strategy Podcast – Ep 60 – C. Anthony Pfaff, Brennan Deveraux, Sarah Lohmann, Christopher Lowrance, Thomas W. Spahr, and Gábor Nyáry – The Weaponization of AI: The Next Stage of Terrorism and Warfare

Conversations on Strategy Podcast – Ep 60 – C. Anthony Pfaff, Brennan Deveraux, Sarah Lohmann, Christopher Lowrance, Thomas W. Spahr, and Gábor Nyáry – The Weaponization of AI: The Next Stage of Terrorism and Warfare

Update: 2025-09-23
Share

Description

In this episode of Conversations on Strategy, Major Brennan Deveraux interviews select authors of The Weaponization of Artificial Intelligence: The Next Stage of Terrorism and Warfare, a book written in partnership with NATO Centre of Excellence – Defence Against Terrorism (COE-DAT). The authors discuss their respective chapters, which include topics such as how terrorists use large language models, the use of artificial intelligence (AI) as a weapon, and the future of AI use in terrorism and counterterrorism.

 

Keywords: AI, artificial intelligence, terrorism, counterterrorism, large language models (LLM), technology, security, privacy, ethics



Brennan Deveraux (Host)

You are listening to Conversations on Strategy. The views and opinions expressed in this podcast are those of the guests and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government.

I’m your host, Major Brennan Deveraux. Today we’ll be talking about the newly released book, The Weaponization of Artificial Intelligence: The Next Stage of Terrorism and Warfare. I’m joined today by some of the book’s authors, and we’ll be exploring some of the findings and broader implications of the analysis. I have five guests with me.

The first is Dr. Tony Pfaff, the director of the Strategic Studies Institute. He was the project director and contributing author to the book. The second is Dr. Sarah Lohman, a University of Washington Information School faculty member and Army Cyber Institute visiting researcher. Her chapter of the book is entitled “National Security Impacts of Artificial Intelligence and Large Language Models.” The third is Dr. Gábor Nyáry. He’s a research professor at the National Public Service University in Hungary. His chapter was entitled “The Coming of the Techno-Terrorist Enterprise: Artificial Intelligence and the Tactical, Organizational, and Conceptual Transformation [of] the World of Violent Non-State Actors.” The fourth is Dr. Thomas Spahr, the [Francis W.] De Serio Chair of Theater and Strategic [Strategic and Theater] Intelligence at the US Army War College. His chapter is entitled “Raven Sentry: Employing AI for Indications and Warnings in Afghanistan.” Finally, Colonel Christopher Lowrance [is] an associate professor in the Electrical Engineering and Computer Science Department at the US Military Academy. He coauthored a chapter with Dr. Pfaff entitled “Using Artificial Intelligence to Disrupt Terrorist Operations.”

For my first question, Dr. Pfaff, I’d like to look to you. If you could tell us about the project as a whole, talk a little bit about the relationship between the Strategic Studies Institute and NATO that led to this project and just a little bit about bringing the team together.


Tony Pfaff

Thanks for that question. This project was a yearlong collaboration between us here at the Strategic Studies Institute and the NATO Centre of Excellence-Defence Against Terrorism (COE-DAT). The intent was to explore how emerging artificial intelligence technologies are capable of—or have the potential to—transform both terrorist operations and, by extension, counterterrorism strategies. The COE-DAT initiated the project with the aim of not simply [performing] an academic exercise but producing actual insights for NATO, partner nations, [and] anyone involved in the counterterrorist enterprise.

As the lead editor and project manager, we built [an] extremely competent and interesting multinational team of experts, who came from a variety of backgrounds, bringing together academic researchers, military practitioners, legal scholars, and so on. And, everyone brought in their own unique lens, whether it was technology, law, strategy, or on-the-ground experience, which, I think, makes this volume somewhat unique and, when taken as a whole, [it] provides a fairly comprehensive picture particularly useful to practitioners and policymakers on how terrorism might evolve, given these technologies, and what we can do about it.

Moreover, while I wouldn’t consider the book technical in nature exactly, we did try to ensure there’s enough information about how the technology works to demystify it so that practitioners could more easily assimilate its findings into what they were doing. [This] joint effort was not only about researching on the weaponization of AI by terrorists, I think it also—kind of going to the last part of your question on bringing the team together—has been [part of] a series of projects we’ve worked [on] with the COE-DAT that has strengthened our institutional relationships between us and our NATO partners and, hopefully, is fostering a deeper dialogue between the community of scholars and practitioners that we hope to connect through this volume.


Host

Thanks for the big overview, sir. If we could transition now to the chapters, Dr. Lohman, I’ll look to you first.

Your chapter was on the impact of artificial intelligence on large language models (LLM), and you talked about some vulnerabilities and the potential for exploitation. Can you talk to us a little bit about the threat, what it could mean to the Defense Department, and how large language models could work [and] maybe even provide an example without getting overly technical?


Sarah Lohmann

Sure, thanks so much for having me.

What we know, Brennan, is that large language models and their platforms are being used by terrorists for hate speech and recruitment, impersonation, and also creating malicious code for cybercrime. And I just want to back up here a second and define that LLMs are actually a subset of generative AI, which most of us are familiar with due to ChatGPT. It basically uses that natural language processing to understand and generate human-like language outputs.

But, when you look at the most recent studies and a lot of what’s going on in the world in terms of how terrorists are using those large language models, we saw evidence in interviews with tech and software companies based on hundreds and hundreds of pages gathered up by the House of Lords that showed that these kinds of threats, including inciting terror, are going to continue to increase through 2027.

But actually, what keeps us up the most at night is how those models can be used to attack critical infrastructure or military and civilian logistics in order to facilitate their own terrorist operations. The study actually showed that catastrophic risk, which could cause, for example, 1,000 fatalities or damages close to $13 billion, are less likely till 2027, but can’t be ruled out.

One example of an LLM failure linked to services, which could be catastrophic, is, for example, water purification or electricity turning on that could trigger an outage across critical national infrastructure if the LLMs were not properly segmented from their operating systems, or if those systems were not cyber secured. An especially sensitive issue for the military could be if LLMs linked to a port or a train system, aviation, or manifest, for example, were hacked. So, LLMs are incorporated into those infrastructures specifically to update those systems in real time. That’s why we have them there. They make our lives easier. They help us reroute planes if there’s bad weather or they help us update the weight of a train load. They help us sync which cargo has been shipped from a port or what materials are needing to be transported from one base to another.

I just want to give you one specific example. Let’s just take the example of a military manifest. One of the top 10 vulnerabilities for LLM applications is called prompt injection. Basically, that essentially manipulates the system and responds to the prompts with malicious inputs that overwrite those system prompts.

I want to give you an example of how one of my students, Chris Beckman, was able to do this LLM demonstration to illustrate prompt and reverse prompt injection to show the dangers. Now, you don’t have to be a specialist to understand this. You have to imagine a military manifest could be directing a truck to deliver petroleum to a specific base, but a hacker could change the contents of the truck to explosives by responding to the LLM prompt that they are the admin. Now, this is obviously not something that any of us could do, but he was able to demonstrate it by going in and training those prompts differently. They can then change the destination of the truck with reverse prompt injection by running code remotely and accessing a plug-in to that with the result being that the truck is now delivering explosives to, for example, terrorists in the nation’s capital.

Those are difficult attacks to prevent, which is why it’s always critical that LLMs be segmented from their critical infrastructure operating systems and only be given access by authorized users.


Host

That’s great and rather terrifying.


Lohmann

Yes, indeed.


Host

So, your lens is very much on terrorism, and [how] we think about defending ourselves and somehow addressing these threats. Can you talk a little bit about the potential implications of this beyond terrorism as we look at great-power competition, even actual conflict with a nation that’s much more capable than, say, a terrorist or non-state actor at manipulating these LLMs? And, is this potentially something we should be looking at not just defensively but as we look at warfare extending beyond our conventional domains? As we talk about cyber soldiers and artificial intelligence, is this something that you see as being a

Comments 
In Channel
loading
00:00
00:00
x

0.5x

0.8x

1.0x

1.25x

1.5x

2.0x

3.0x

Sleep Timer

Off

End of Episode

5 Minutes

10 Minutes

15 Minutes

30 Minutes

45 Minutes

60 Minutes

120 Minutes

Conversations on Strategy Podcast – Ep 60 – C. Anthony Pfaff, Brennan Deveraux, Sarah Lohmann, Christopher Lowrance, Thomas W. Spahr, and Gábor Nyáry – The Weaponization of AI: The Next Stage of Terrorism and Warfare

Conversations on Strategy Podcast – Ep 60 – C. Anthony Pfaff, Brennan Deveraux, Sarah Lohmann, Christopher Lowrance, Thomas W. Spahr, and Gábor Nyáry – The Weaponization of AI: The Next Stage of Terrorism and Warfare

U.S. Army War College Public Affairs