AI in practice: Guardrails and security for LLMs

Update: 2025-09-30

Description

In this episode, we talk about practical guardrails for LLMs with data scientist Nicholas Brathwaite. We focus on how to stop PII leaks, retrieve data securely, and evaluate safety under real-world constraints. We weigh managed solutions like AWS Bedrock against open-source approaches and discuss when to skip LLMs altogether.

• Why guardrails matter for PII, secrets, and access control (a minimal sketch follows this list)
• Where to place controls across prompt, training, and output
• Prompt injection, jailbreaks, and adversarial handling
• RAG design with vector DB separation and permissions
• Evaluation methods, risk scoring, and cost trade-offs
• AWS Bedrock guardrails vs. open-source customization (see the managed-API sketch below)
• Domain-adapted safety models and policy matching
• When deterministic systems beat LLM complexity
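
For listeners who want a concrete starting point, here is a minimal sketch of the kind of deterministic output guardrail discussed in the episode: a regex pass that redacts common PII patterns from a model response before it reaches the user. The patterns and the redact() helper are illustrative assumptions, not code from the episode or from any specific library.

```python
import re

# Illustrative PII patterns -- an assumption for this sketch, not an
# exhaustive or production-grade set.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matched PII with placeholders and report which types were found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

# Run the guardrail over a model response before displaying it.
safe_text, findings = redact("Reach me at jane.doe@example.com or 555-123-4567.")
print(safe_text)   # Reach me at [REDACTED EMAIL] or [REDACTED PHONE].
print(findings)    # ['email', 'phone']
```

Checks like this are cheap, fast, and auditable, which is part of why the episode asks when deterministic systems beat LLM complexity. For the managed route, the same output check can be delegated to a service. Below is a hedged sketch using boto3's ApplyGuardrail API for AWS Bedrock; the guardrail identifier, version, and region are placeholders you would supply from your own configuration.

```python
import boto3

# Assumes a guardrail was already created in the Bedrock console or via the
# control-plane API; the identifier and version here are placeholders.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.apply_guardrail(
    guardrailIdentifier="your-guardrail-id",  # placeholder
    guardrailVersion="1",                     # placeholder
    source="OUTPUT",                          # screen model output, not user input
    content=[{"text": {"text": "Reach me at jane.doe@example.com."}}],
)

# "GUARDRAIL_INTERVENED" means content was masked or blocked; the transformed
# text is returned in response["outputs"].
print(response["action"])
```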

This episode is part of our "AI in Practice" series, where we invite guests to talk about the reality of their work in AI, from hands-on development to scientific research. Be sure to check out other episodes under this heading in our listings.

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - See past episodes and submit your feedback; your input continues to inspire future episodes.

Hosts: Dr. Andrew Clark & Sid Mangalik