AI Security Deep Dive: Safeguarding LLMs in the Cloud

Agents of Intelligence

Updated: 2025-03-12

Description

In this episode, we explore the hidden risks of deploying large language models (LLMs) such as DeepSeek in enterprise cloud environments and the security practices that mitigate them. Hosted by AI security experts and cloud engineers, the show breaks down critical topics such as preventing sensitive-data exposure, securing API endpoints, enforcing role-based access control (RBAC) with Azure AD and AWS IAM, and meeting compliance standards like China’s MLPS 2.0 and PIPL. We also tackle real-world AI threats like prompt injection, model evasion, and API abuse, with actionable guidance for technical teams working with Azure, AWS, and hybrid infrastructures. Whether you're an AI/ML engineer, platform architect, or security leader, this podcast will equip you with the strategies and technical insights needed to deploy generative AI models securely in the cloud.
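
The episode itself is audio, but to make the RBAC discussion concrete, here is a minimal Python sketch of the least-privilege pattern it describes: an AWS IAM policy that allows a role to invoke one specific SageMaker-hosted model endpoint and nothing else. The endpoint ARN and policy name are hypothetical placeholders, not details taken from the episode.

    import json

    import boto3

    # Hypothetical SageMaker endpoint hosting an LLM such as DeepSeek.
    ENDPOINT_ARN = "arn:aws:sagemaker:us-east-1:123456789012:endpoint/deepseek-demo"

    # Least-privilege policy: the holder may call this one endpoint, but cannot
    # create, update, or delete endpoints, nor touch any other model resource.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "sagemaker:InvokeEndpoint",
                "Resource": ENDPOINT_ARN,
            }
        ],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="llm-invoke-only",  # hypothetical policy name
        PolicyDocument=json.dumps(policy_document),
    )

The same invoke-only idea carries over to the Azure AD side of the discussion, where a custom role scoped to a single Azure OpenAI or Azure ML endpoint plays the equivalent part.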

Sam Zamany