DiscoverDetection Engineering DispatchPrompted to Fail: When LLMs Go Rogue
Prompted to Fail: When LLMs Go Rogue

Prompted to Fail: When LLMs Go Rogue

Update: 2025-06-18
Share

Description

LLMs are rewriting the rules of app security—and not always in a good way.

In this episode Alex sits down with Scott Rogers, a seasoned data scientist at ANvilogic to unpack why LLMs are the new wild west of application risk—and how old-school OWASP principles are making a serious comeback.

We cover:

  • Real-world prompt injection failures (yes, including Air Canada’s rogue chatbot)
  • How RAG systems can accidentally leak sensitive data
  • Why GenAI risk ≠ traditional appsec—but it rhymes
  • How classic tools like SAST, DAST, and logs can still save your bacon

Whether you're threat modeling your first LLM system or already knee-deep in GenAI, this episode is full of spicy detection ideas, war stories, and practical advice you won’t want to miss.

Stay in the loop! Connect with us:

Detection Engineering Dispatch features candid conversations with security teams at top companies on how they build, measure, and scale world-class detection programs.

Comments 
loading
00:00
00:00
1.0x

0.5x

0.8x

1.0x

1.25x

1.5x

2.0x

3.0x

Sleep Timer

Off

End of Episode

5 Minutes

10 Minutes

15 Minutes

30 Minutes

45 Minutes

60 Minutes

120 Minutes

Prompted to Fail: When LLMs Go Rogue

Prompted to Fail: When LLMs Go Rogue

Anvilogic