ODSC's Ai X Podcast

Mitigating AI Risk in Practice with Guardrails AI with Shreya Rajpal

Update: 2025-01-29

Description

In this episode of ODSC’s Ai X Podcast, we’re joined by Shreya Rajpal, co-founder and CEO of Guardrails AI. Before that, she was the Founding Engineer at Predibase and a Senior Machine Learning Engineer in the Special Projects Group at Apple. Guardrails AI is an open-source platform built to improve the safety, reliability, and robustness of large language models in real-world applications.

In the conversation, Shreya reflects on her journey from AI research to self-driving cars and startups, emphasizing the growing importance of AI safety. She shares insights on real-world AI failures, such as Air Canada’s chatbot misinforming customers, and the broader implications of deploying unreliable AI.


Key Takeaways:

- AI Reliability is Crucial: AI adoption is often limited by risks like hallucinations, misinformation, and compliance issues. Guardrails AI helps mitigate these risks to make AI safer and more reliable.

- Guardrails AI in Action: The open-source framework provides over 65 AI guardrails, allowing businesses to set validation checks on AI inputs and outputs, ensuring accuracy and safety.

- Real-World AI Failures Highlight the Need for Guardrails: Examples like Air Canada’s chatbot misinforming users show how unreliable AI can lead to legal and reputational consequences.

- Low-Latency & Scalable AI Risk Management: Guardrails AI operates with minimal overhead, allowing companies to deploy safe AI without sacrificing performance.

- Upcoming Guardrails Index: A new benchmarking study will compare AI risk mitigation models, providing insights into best practices for AI safety.
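The validation pattern described above, checks applied to an LLM's inputs and outputs before anything reaches the user, can be sketched in plain Python. This is an illustrative toy, not the actual Guardrails AI API; the validator names and the `guarded_call` wrapper are hypothetical stand-ins for the framework's hub validators.

```python
# Illustrative sketch only -- NOT the real Guardrails AI API.
# It mimics the pattern described above: wrap an LLM call with
# output validators that either pass or reject the generated text.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ValidationResult:
    passed: bool
    message: str = ""


def no_competitor_mentions(text: str) -> ValidationResult:
    # Hypothetical guardrail: block outputs naming a competitor airline.
    banned = {"united", "delta"}
    hits = [w for w in banned if w in text.lower()]
    if hits:
        return ValidationResult(False, f"mentions competitor(s): {hits}")
    return ValidationResult(True)


def max_length(limit: int) -> Callable[[str], ValidationResult]:
    # Hypothetical guardrail: cap response length.
    def check(text: str) -> ValidationResult:
        if len(text) > limit:
            return ValidationResult(False, f"output exceeds {limit} chars")
        return ValidationResult(True)
    return check


def guarded_call(llm: Callable[[str], str],
                 prompt: str,
                 output_validators: List[Callable[[str], ValidationResult]]) -> str:
    """Run the model, then apply each output validator in turn.

    Raises ValueError on the first failed check instead of
    returning unvalidated text to the user.
    """
    output = llm(prompt)
    for validate in output_validators:
        result = validate(output)
        if not result.passed:
            raise ValueError(f"Guardrail failed: {result.message}")
    return output


# Stand-in for a real LLM call.
fake_llm = lambda prompt: "Refunds are available within 24 hours of booking."

safe = guarded_call(fake_llm, "What is the refund policy?",
                    [no_competitor_mentions, max_length(200)])
print(safe)  # passes both checks, so the text is returned
```

In the real framework, validators are pulled from the Guardrails Hub and composed onto a guard object rather than passed as a list, but the control flow, validate before returning and fail loudly otherwise, is the same idea the takeaways describe.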


Topics Discussed:

- Shreya’s background in machine learning, self-driving cars, and how she ended up co-founding Guardrails AI

- A deep, technical overview of the open-source side of Guardrails AI

- Common mistakes companies make when building AI without safeguards

- Detecting bugs and failures in AI models, specifically generative AI

- Case studies of companies that haven’t set up guardrails properly


References and Resources Mentioned:

1. Connect with Shreya!

- LinkedIn: https://www.linkedin.com/in/shreya-rajpal/

- Twitter/X: https://x.com/shreyar


2. Learn more about Guardrails AI:

- GitHub: https://github.com/guardrails-ai/guardrails

- Website: https://www.guardrailsai.com/

- Guardrails Hub: https://hub.guardrailsai.com/

- Guardrails Blog: https://www.guardrailsai.com/blog

- Discord: https://discord.com/invite/kVZEnR4WQK


This episode was sponsored by:  

Ai+ Training https://aiplus.training/ 

Home to hundreds of hours of on-demand, self-paced AI training, ODSC interviews, free webinars, and certifications in in-demand skills like LLMs and Agentic AI.

And created in partnership with ODSC https://odsc.com/ 

The leading AI builders' conference, featuring expert-led, hands-on workshops, training sessions, and talks on cutting-edge AI topics and tools, from data science and machine learning to generative AI and LLMOps.

Never miss an episode, subscribe now!
