The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Ensuring LLM Safety for Production Applications with Shreya Rajpal - #647
Update: 2023-09-18

Description

Today we’re joined by Shreya Rajpal, founder and CEO of Guardrails AI. In our conversation with Shreya, we discuss ensuring the safety and reliability of language models for production applications. We explore the risks and challenges associated with these models, including different types of hallucinations and other LLM failure modes, as well as the susceptibility of the popular retrieval-augmented generation (RAG) technique to closed-domain hallucination and how that challenge can be addressed. We also cover the need for robust evaluation metrics and tooling when building with large language models. Lastly, we explore Guardrails, an open-source project that provides a catalog of validators that run on top of language models to efficiently enforce correctness and reliability.


The complete show notes for this episode can be found at twimlai.com/go/647.
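The core pattern discussed in the episode is validators that run on top of a language model's output, checking it for correctness and re-asking the model when a check fails. As a rough, generic illustration of that pattern (this is not the actual Guardrails API; the `call_llm` stand-in, the validator signature, and the re-ask prompt below are all hypothetical), a minimal sketch might look like:

```python
import json
from typing import Callable

# Hypothetical stand-in for whatever model client you use (OpenAI, etc.).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

# A validator takes the raw model output and returns (passed, error_message).
Validator = Callable[[str], tuple[bool, str]]

def is_valid_json(output: str) -> tuple[bool, str]:
    """Example validator: require the output to parse as JSON."""
    try:
        json.loads(output)
        return True, ""
    except json.JSONDecodeError as e:
        return False, f"Output is not valid JSON: {e}"

def run_with_validators(prompt: str, validators: list[Validator], max_retries: int = 2) -> str:
    """Call the model, run each validator on its output, and re-ask with the
    failure messages appended if any validator rejects the output."""
    current_prompt = prompt
    for _ in range(max_retries + 1):
        output = call_llm(current_prompt)
        failures = [msg for ok, msg in (v(output) for v in validators) if not ok]
        if not failures:
            return output
        # Re-ask: feed the validation errors back so the model can correct itself.
        current_prompt = (
            f"{prompt}\n\nYour previous answer failed validation:\n"
            + "\n".join(failures)
            + "\nPlease try again and satisfy all constraints."
        )
    raise ValueError("Model output failed validation after all retries")
```

In the library described in the episode, validators come from a catalog rather than being hand-written, but the control flow sketched here, validate and then re-ask or fail, is the same idea.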


Sam Charrington
