Software Engineering Institute (SEI) Podcast Series
What Could Possibly Go Wrong? Safety Analysis for AI Systems

Update: 2025-10-31

Description

How can you ever know whether an LLM is safe to use? Even self-hosted LLM systems are vulnerable to adversarial prompts left on the internet, waiting to be found by the system's search engines. These attacks and others exploit the complexity of even seemingly secure AI systems.

In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), David Schulker and Matthew Walsh, both senior data scientists in the SEI's CERT Division, sit down with Thomas Scanlon, lead of the CERT Data Science Technical Program, to discuss their work on System Theoretic Process Analysis (STPA), a hazard-analysis technique uniquely suited to dealing with the complexity of AI systems when assuring them.

David Schulker, Matthew Walsh