Cloud Security Podcast
Author: TechRiot.io
Subscribed: 399 | Played: 13,187
© TechRiot.io
Description
Learn cloud security for public cloud and AI systems, the unbiased way, from cybersecurity experts solving challenges at cloud scale. We can be candid because we are not owned by a cloud service provider like AWS, Azure, or Google Cloud.
We aim to help the community learn cloud security through community stories, from small to large organisations solving multi-cloud challenges, to deep dives into specific cloud security topics.
We STREAM interviews on cloud security topics every week on LinkedIn, YouTube, and Twitter, with over 150K people tuning in.
341 Episodes
Is AI security just "Cloud Security 2.0"? Toni De La Fuente, creator of the open-source tool Prowler, joins Ashish to explain why securing AI workloads requires a fundamentally different approach than traditional cloud infrastructure.

We dive deep into the "Shared Responsibility Gap" emerging with managed AI services like AWS Bedrock and OpenAI. Toni speaks about the hidden dangers of default AI architectures and why you should never connect an MCP (Model Context Protocol) server directly to a database (a sketch of a safer pattern follows these notes). We also discuss the new AI-driven SDLC, where tools like Claude Code can generate infrastructure but also create massive security blind spots if left unmonitored.

Guest Socials - Toni's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:50) Who is Toni De La Fuente? (Creator of Prowler)
(03:50) AI Security vs. Cloud Security: What's the Difference?
(07:20) The Shared Responsibility Gap in AI Services (Bedrock, OpenAI)
(11:30) The "Fifth Party" Risk: Managed AI Access
(13:40) AI Architecture Best Practices: Never Connect MCP to DB Directly
(16:40) Prowler's AI Pillars: Generating Dashboards & Detections
(22:30) The New SDLC: Securing Code from Claude Code & Lovable
(25:30) The "Magic" Trap: Why AI Doesn't Know Your Security Context
(28:30) Top 3 Priorities for Security Leaders (Infra, LLM, Shadow AI)
(30:40) Future Predictions: Why Predicting 12 Months Out is Impossible
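To make the "never connect MCP directly to a database" advice concrete, here is a minimal sketch of a mediation layer an MCP tool handler could call instead of holding a raw SQL connection: the agent only gets named, parameterized, read-only queries, so injected instructions have no path to arbitrary SQL. The table, database file, and run_report helper are invented for illustration; this is not Prowler's or Toni's implementation.

import sqlite3

# Hypothetical allow-list: the agent may only run these named, parameterized
# queries -- it never sees raw SQL, so a prompt-injected "DROP TABLE" has
# nothing to latch onto.
ALLOWED_QUERIES = {
    "findings_by_severity": "SELECT id, title FROM findings WHERE severity = ?",
    "findings_for_account": "SELECT id, title FROM findings WHERE account_id = ?",
}

def run_report(query_name: str, arg: str) -> list:
    """Mediation layer an MCP tool handler could call instead of raw SQL."""
    sql = ALLOWED_QUERIES.get(query_name)
    if sql is None:
        raise ValueError(f"query {query_name!r} is not allow-listed")
    with sqlite3.connect("findings.db") as conn:
        conn.execute("PRAGMA query_only = ON")  # read-only, defense in depth
        return conn.execute(sql, (arg,)).fetchall()

print(run_report("findings_by_severity", "critical"))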
In the world of generative AI, natural language has become the new executable. Attackers no longer need complex code to breach your systems; sometimes, asking for a "poem" is enough to steal your passwords.

In this episode, Eduardo Garcia (Global Head of Cloud Security Architecture at Check Point) joins Ashish to explain the paradigm shift in AI security. He shares his experience building AI-powered fraud detection systems and why traditional security controls fail against intent-based attacks like prompt injection and data poisoning.

We dive deep into the reality of Shadow AI, where employees unknowingly train public models with sensitive corporate data, and the sophisticated world of deepfakes, where attackers can bypass biometric security with AI-generated images unless you're tracking micro-movements of the eye.

Guest Socials - Eduardo's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(01:55) Who is Eduardo Garcia? (Check Point)
(03:00) Defining Security for GenAI: The Focus on Prompts
(05:20) Why Natural Language is the New Executable
(08:50) Multilingual Attacks: Bypassing Filters with Mandarin
(12:00) Shift Left vs. Shift Right: The 70/30 Rule for AI Security
(15:30) The "Poem Hack": Stealing Passwords with Creative Prompts
(21:00) Shadow AI: The "HR Spreadsheet" Leak Scenario
(25:40) Security vs. Compliance in a Blurring World
(28:00) The Conflict: "My Budget Doesn't Include Security"
(34:00) The 5 V's of AI Data: Volume, Veracity, Velocity
(40:00) Deepfakes & Biometrics: Detecting Micro-Movements
(43:40) Fun Questions: Soccer, Family, and Honduran Tacos
In this episode, Brad Hibbert (COO & Chief Strategy Officer at Brinqa) joins Ashish to explain why traditional risk-based vulnerability management (RBVM) is no longer enough in a cloud-first world.

We explore the evolution from simple patch management to Exposure Management, a holistic approach that sits above your security tools to connect infrastructure, code, and cloud risks to actual business impact. Brad breaks down the critical difference between a "Risk Owner" (the service owner) and a "Remediation Owner" (the team fixing the bug), and why this distinction solves the "who fixes this?" problem.

This conversation covers practical steps to uplift your VM program, how AI is helping prioritize the noise, and why compliance often just "proves activity" rather than reducing real risk. Whether you're drowning in Jira tickets or trying to automate remediation, this episode provides a roadmap for modernizing your security posture.

Guest Socials - Brad's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:50) Who is Brad Hibbert? (Brinqa)
(04:55) The Evolution: From Scanning Servers to Cloud Complexity
(06:50) What is Risk-Based Vulnerability Management?
(08:50) Risk Owners vs. Remediation Owners: Who Fixes What?
(12:00) How AI is Changing Vulnerability Management
(15:20) Defining Exposure Management: Moving Beyond the Tools
(18:30) The Challenge of "Data Inconsistency" Between Tools
(22:30) Readiness Check: Are You Ready for Exposure Management?
(25:10) Automated Remediation: Is "Zero Tickets" Possible?
(28:40) Compliance vs. Risk: Why "Activity" isn't "Impact"
(31:30) Maturity Milestones for Exposure Management
(36:50) Fun Questions: Golf, Turkish Kebabs & Friendships
Is "developer-friendly" AI security actually possible? In this episode, Bryan Woolgar-O'Neil (CTO & Co-founder of Harmonic Security) joins Ashish to dismantle the traditional "block everything" approach to security.Bryan explains why 70% of Model Context Protocol (MCP) servers are running locally on developer laptops and why trying to block them is a losing battle . Instead, he advocates for a "coaching" approach, intervening in real-time to guide engineers rather than stopping their flow .We dive deep into the technical realities of MCP (Model Context Protocol), why it's becoming the standard for connecting AI to data, and the security risks of connecting it to production environments . Bryan also shares his prediction that Small Language Models (SLMs) will eventually outperform general giants like ChatGPT for specific business tasks .Guest Socials - Bryan's Linkedin Podcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter If you are interested in AI Security, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(01:55) Who is Bryan Woolgar-O'Neil?(03:00) Why AI Adoption Stops at Experimentation(05:15) The "Shadow AI" Blind Spot: Firewall Stats vs. Reality (08:00) Is AI Security Fundamentally Different? (Speed & Scale) (10:45) Can Security Ever Be "Developer Friendly"? (14:30) What is MCP (Model Context Protocol)? (17:20) Why 70% of MCP Usage is Local (and the Risks) (21:30) The "Coaching" Approach: Don't Just Block, Educate (25:40) Developer First: Permissive vs. Blocking Cultures (30:20) The Rise of the "Head of AI" Role (34:30) Use Cases: Workforce Productivity vs. Product Integration (41:00) An AI Security Maturity Model (Visibility -> Access -> Coaching) (46:00) Future Prediction: Agentic Flows & Urgent Tasks (49:30) Why Small Language Models (SLMs) Will Win (53:30) Fun Questions: Feature Films & Pork Dumplings
Is the AI SOC a reality, or just vendor hype? In this episode, Antoinette Stevens (Principal Security Engineer at Ramp) joins Ashish to dissect the true state of AI in detection engineering.

Antoinette shares her experience building a detection program from scratch, explaining why she doesn't trust AI to close alerts due to hallucinations and faulty logic. We explore the "engineering-led" approach to detection, moving beyond simple hunting to building rigorous testing suites for detection-as-code (a toy example follows these notes).

We discuss the shrinking entry-level job market for security roles, why software engineering skills are becoming non-negotiable, and the critical importance of treating AI as a "force multiplier, not your brain".

Guest Socials - Antoinette's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:25) Who is Antoinette Stevens?
(04:10) What is an "Engineering-Led" Approach to Detection?
(06:00) Moving from Hunting to Automated Testing Suites
(09:30) Build vs. Buy: Is AI Making it Easier to Build Your Own Tools?
(11:30) Using AI for Documentation & Playbook Updates
(14:30) Why Software Engineers Still Need to Learn Detection Domain Knowledge
(17:50) The Problem with AI SOC: Why ChatGPT Lies During Triage
(23:30) Defining AI Concepts: Memory, Evals, and Inference
(26:30) Multi-Agent Architectures: Using Specialized "Persona" Agents
(28:40) Advice for Building a Detection Program in 2025 (Back to Basics)
(33:00) Measuring Success: Noise Reduction vs. False Positive Rates
(36:30) Building an Alerting Data Lake for Metrics
(40:00) The Disappearing Entry-Level Security Job & Career Advice
(44:20) Why Junior Roles are Becoming "Personality Hires"
(48:20) Fun Questions: Wine Certification, Side Quests, and Georgian Food
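The "rigorous testing suites for detection-as-code" idea can be as small as unit tests that replay known-good and known-bad events through a rule before it ships. A hedged sketch, runnable under pytest, with an invented rule shape and rule_matches helper (this is not Ramp's pipeline):

# Detection-as-code: the rule is data, and CI replays canned events
# through it before it ever reaches production alerting.
RULE = {
    "name": "console_login_without_mfa",
    "field_equals": {"eventName": "ConsoleLogin", "mfaUsed": "No"},
}

def rule_matches(rule: dict, event: dict) -> bool:
    return all(event.get(k) == v for k, v in rule["field_equals"].items())

def test_fires_on_no_mfa_login():
    assert rule_matches(RULE, {"eventName": "ConsoleLogin", "mfaUsed": "No"})

def test_quiet_on_mfa_login():
    assert not rule_matches(RULE, {"eventName": "ConsoleLogin", "mfaUsed": "Yes"})

def test_quiet_on_unrelated_event():
    assert not rule_matches(RULE, {"eventName": "PutObject"})

A failing test blocks the deploy, which is the point: the detection gets the same regression safety net as any other code.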
Traditional vulnerability management is simple: find the flaw, patch it, and verify the fix. But what happens when the "asset" is a neural network that has learned something ethically wrong? In this episode, Sapna Paul (Senior Manager at Dayforce) explains why there are no "Patch Tuesdays" for AI models.

Sapna breaks down the three critical layers of AI vulnerability management: protecting production models, securing the data layer against poisoning, and monitoring model behavior for technically correct but ethically flawed outcomes. We discuss how to update your risk register to speak the language of business, and the essential skills security professionals need to survive in an AI-first world.

The conversation also covers practical ways to use AI within your security team to combat alert fatigue, the importance of explainability tools like SHAP and LIME, and how to align with frameworks like the NIST AI RMF and the EU AI Act.

Guest Socials - Sapna's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Sapna Paul?
(02:40) What is Vulnerability Management in the Age of AI?
(05:00) Defining the New Asset: Neural Networks & Models
(07:00) The 3 Layers of AI Vulnerability (Production, Data, Behavior)
(10:20) Updating the Risk Register for AI Business Risks
(13:30) Compliance vs. Innovation: Preventing AI from Going Rogue
(18:20) Using AI to Solve Vulnerability Alert Fatigue
(23:00) Skills Required for Future VM Professionals
(25:40) Measuring AI Adoption in Security Teams
(29:20) Key Frameworks: NIST AI RMF & EU AI Act
(31:30) Tools for AI Security: Counterfit, SHAP, and LIME
(33:30) Where to Start: Learning & Persona-Based Prompts
(38:30) Fun Questions: Painting, Mentoring, and Vegan Ramen
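Since the episode name-checks SHAP for explainability, here is a minimal sketch of what that looks like in practice: attribute each prediction to input features so you can interrogate why a model decided what it did. The dataset and model are placeholders standing in for "the AI asset under review", not anything from the episode.

# pip install shap scikit-learn
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Toy model standing in for the production model under review.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Explain the positive-class probability: SHAP assigns each feature a
# contribution per prediction, which is how you probe "technically
# correct but ethically flawed" behaviour.
f = lambda rows: model.predict_proba(rows)[:, 1]
explainer = shap.Explainer(f, X.iloc[:100])   # background sample as masker
shap_values = explainer(X.iloc[:5])
print(shap_values.values.shape)  # (5 samples, 30 per-feature attributions)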
Think your cloud backups will save you from a ransomware attack? Think again. In this episode, Matt Castriotta (Field CTO at Rubrik) explains why the traditional "I have backups" mindset is dangerous. He distinguishes between Disaster Recovery (business continuity for operational errors) and Cyber Resilience (recovering from a malicious attack where data and identity are untrusted).

Matt speaks about the "dirty secrets" of cloud-native recovery, explaining why S3 versioning and replication are not valid cyber recovery strategies (a sketch of the distinction follows these notes). The conversation then shifts to the critical, often overlooked aspect of Identity Recovery: if your Active Directory or Entra ID is compromised, it's "ground zero" and you can't access anything. Matt argues that identity must be treated as the new perimeter and backed up just like any other critical data source.

We also explore the impact of AI agents on data integrity: how do you "rewind" an AI agent that hallucinated and corrupted your data? Plus, practical advice on DORA compliance, multi-cloud resiliency, and the "people and process" side of surviving a breach.

Guest Socials - Matt's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:20) Who is Matt Castriotta?
(03:20) Defining Cyber Resilience: The Ability to Say "No" to Ransomware
(05:00) Why "I Have Backups" is Not Enough
(06:45) The Difference Between Disaster Recovery and Cyber Recovery
(10:20) Cloud Native Risks: Versioning and Replication Are Not Backups
(12:50) DORA Compliance: Multi-Cloud Resiliency & Egress Costs
(15:10) The "Shared Responsibility Model" Trap in Cloud
(17:45) Identity is the New Perimeter: Why You Must Back It Up
(22:30) Identity Recovery: Can You Restore Your Active Directory in Minutes?
(25:40) AI and Data: The New "Oil" and "Crown Jewels"
(27:20) Rubrik Agent Cloud: Rewinding AI Agent Actions
(29:40) Top 3 Priorities for a 2026 Resiliency Program
(33:10) Fun Questions: Guitar, Family, and Italian Food
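One way to make the "versioning is not a backup" point concrete: an attacker with the right IAM permissions can suspend versioning and delete old versions, whereas S3 Object Lock makes versions immutable for the retention window. A hedged boto3 sketch that audits a bucket for both (the bucket name is a placeholder; this is illustrative, not Rubrik's methodology):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-backup-bucket"  # placeholder

# Versioning protects against accidents, but a compromised identity can
# suspend it and purge versions -- it is not a cyber recovery strategy.
versioning = s3.get_bucket_versioning(Bucket=bucket)
print("versioning:", versioning.get("Status", "Disabled"))

# Object Lock (especially compliance mode) enforces immutability for the
# retention window, which is closer to an actual recovery guarantee.
try:
    lock = s3.get_object_lock_configuration(Bucket=bucket)
    rule = lock["ObjectLockConfiguration"].get("Rule", {})
    print("object lock:", rule.get("DefaultRetention", "enabled, no default retention"))
except ClientError as e:
    if e.response["Error"]["Code"] == "ObjectLockConfigurationNotFoundError":
        print("object lock: NOT configured -- versioning alone is not a backup")
    else:
        raise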
Transitioning a mature organization from an API-first model to an AI-first model is no small feat. In this episode, Yash Kosaraju, CISO of Sendbird, shares the story of how they pivoted from a traditional chat API platform to an AI agent platform, and how security had to evolve to keep up.

Yash speaks about the industry's obsession with "Zero Trust", arguing instead for a practical "Multi-Layer Trust" approach that assumes controls will fail. We dive deep into the specific architecture of securing AI agents, including the concept of a "Trust OS", dealing with new incident response definitions (is a wrong AI answer an incident?), and the critical need to secure the bridge between AI agents and customer environments.

This episode is packed with actionable advice for AppSec engineers feeling overwhelmed by the speed of AI. Yash shares how his team embeds security engineers into sprint teams for real-time feedback, the importance of "AI CTFs" for security awareness, and why enabling employees with enterprise-grade AI tools is better than blocking them entirely.

Guest Socials - Yash's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:20) Who is Yash Kosaraju? (CISO at Sendbird)
(03:30) Sendbird's Pivot: From Chat API to AI Agent Platform
(05:00) Balancing Speed and Security in an AI Transition
(06:50) Embedding Security Engineers into AI Sprint Teams
(08:20) Threats in the AI Agent World (Data & Vendor Risks)
(10:50) Blind Spots: "It's Microsoft, so it must be secure"
(12:00) Securing AI Agents vs. AI-Embedded Applications
(13:15) The Risk of Agents Making Changes in Customer Environments
(14:30) Multi-Layer Trust vs. Zero Trust (Marketing vs. Reality)
(17:30) Practical Multi-Layer Security: Device, Browser, Identity, MFA
(18:25) What is "Trust OS"? A Foundation for Responsible AI
(20:45) Balancing Agent Security vs. Endpoint Security
(24:15) AI Incident Response: When an AI Gives a Wrong Answer
(29:20) Security for Platform Engineers: Enabling vs. Blocking
(30:45) Providing Enterprise AI Tools (Gemini, ChatGPT, Cursor) to Employees
(32:45) Building a "Security as Enabler" Culture
(36:15) What Questions to Ask AI Vendors (Paying with Data?)
(39:20) Personal Use of Corporate AI Accounts
(43:30) Using AI to Learn AI (Gemini Conversations)
(45:00) The Stress on AppSec Engineers: "I Don't Know What I'm Doing"
(48:20) The AI CTF: Gamifying Security Training
(50:10) Fun Questions: Outdoors, Team Building, and Indian/Korean Food
Thinking of building your own AI security tool? In this episode, Santiago Castiñeira, CTO of Maze, breaks down the realities of the "Build vs. Buy" debate for AI-first vulnerability management.

While building a prototype script is easy, scaling it into a maintainable, audit-proof system is a massive undertaking that requires specialized skills often missing in security teams. Santiago also warns against the "RAG drug": relying too heavily on Retrieval-Augmented Generation for precise technical data like version numbers, where it often fails.

The conversation gets into the architecture required for a true AI-first system, moving beyond simple chatbots to complex multi-agent workflows that can reason about context and risk. We also cover the critical importance of rigorous "evals" over "vibe checks" to ensure AI reliability (a tiny example follows these notes), the hidden costs of LLM inference at scale, and why well-crafted agents might soon be indistinguishable from super-intelligence.

Guest Socials - Santiago's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Santiago Castiñeira?
(02:40) What is "AI-First" Vulnerability Management? (Rules vs. Reasoning)
(04:55) The "Build vs. Buy" Debate: Can I Just Use ChatGPT?
(07:30) The "Bus Factor" Risk of Internal Tools
(08:30) Why MCP (Model Context Protocol) Struggles at Scale
(10:15) The Architecture of an AI-First Security System
(13:45) The Problem with "Vibe Checks": Why You Need Proper Evals
(17:20) Where to Start if You Must Build Internally
(19:00) The Hidden Need for Data & Software Engineers in Security Teams
(21:50) Managing Prompt Drift and Consistency
(27:30) The Challenge of Changing LLM Models (Claude vs. Gemini)
(30:20) Rethinking Vulnerability Management Metrics in the AI Era
(33:30) Surprises in AI Agent Behavior: "Let's Get Back on Topic"
(35:30) The Hidden Cost of AI: Token Usage at Scale
(37:15) Multi-Agent Governance: Preventing Rogue Agents
(41:15) The Future: Semi-Autonomous Security Fleets
(45:30) Why RAG Fails for Precise Technical Data (The "RAG Drug")
(47:30) How to Evaluate AI Vendors: Is it AI-First or AI-Sprinkled?
(50:20) Common Architectural Mistakes: Vibe Evals & Cost Ignorance
(56:00) Unpopular Opinion: Well-Crafted Agents vs. Super Intelligence
(58:15) Final Questions: Kids, Argentine Steak, and Closing
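The "evals over vibe checks" point lends itself to a tiny example: pin a golden set of cases and score the model on every prompt or model change, instead of eyeballing a few chats. A hedged sketch in which classify_severity is an invented placeholder for whatever LLM call you are evaluating:

# A minimal eval harness: golden cases plus a pass-rate gate in CI.
GOLDEN_SET = [
    ("CVE in internet-facing auth service, exploit public", "critical"),
    ("Outdated dev dependency in archived repo", "low"),
    ("Privilege escalation path via over-broad IAM role", "high"),
]

def classify_severity(finding: str) -> str:
    """Placeholder for the real LLM call under evaluation."""
    return "critical" if "exploit public" in finding else "low"

def run_eval(threshold: float) -> None:
    passed = sum(classify_severity(f) == want for f, want in GOLDEN_SET)
    rate = passed / len(GOLDEN_SET)
    print(f"pass rate: {rate:.0%} ({passed}/{len(GOLDEN_SET)})")
    # Fail the build on regression instead of trusting a vibe check.
    assert rate >= threshold, "eval regression: do not ship this prompt/model"

run_eval(threshold=0.6)

In a real system the golden set grows with every incident and false positive, so reliability becomes a measured, versioned property rather than a feeling.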
In this episode, Cliff Crosland, CEO & co-founder of Scanner.dev, shares his candid journey of trying (and initially failing) to build an in-house security data lake to replace an expensive traditional SIEM.

Cliff explains the economic breaking point where scaling a SIEM became "more expensive than the entire budget for the engineering team". He details the technical challenges of moving terabytes of logs to S3, and the painful realization that querying them with Amazon Athena was slow and costly for security use cases (a sketch of the pattern follows these notes).

This episode is a deep dive into the evolution of logging architecture, from SQL-based legacy tools to the modern "messy" data lake that embraces full-text search on unstructured data. We discuss the "data engineering lift" required to build your own, the promise (and limitations) of Amazon Security Lake, and how AI agents are starting to automate detection engineering and schema management.

Guest Socials - Cliff's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:25) Who is Cliff Crosland?
(03:00) Why Teams Are Switching from SIEMs to Data Lakes
(06:00) The "Black Hole" of S3 Logs: Cliff's First Failed Data Lake
(07:30) The Engineering Lift: Do You Need a Data Engineer to Build a Lake?
(11:00) Why Amazon Athena Failed for Security Investigations
(14:20) The Danger of Dropping Logs to Save Costs
(17:00) Misconceptions About Building Your Own Data Lake
(19:00) The Evolution of Logging: From SQL to Full-Text Search
(21:30) Is Amazon Security Lake the Answer? (OCSF & Custom Logs)
(24:40) The Nightmare of Log Normalization & Custom Schemas
(28:00) Why Future Tools Must Embrace "Messy" Logs
(29:55) How AI Agents Are Automating Detection Engineering
(35:45) Using AI to Monitor Schema Changes at Scale
(39:45) Build vs. Buy: Does Your Security Team Need Data Engineers?
(43:15) Fun Questions: Physics Simulations & Pumpkin Pie
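To ground the Athena discussion: Athena bills by data scanned, and without partition pruning a needle-in-haystack IOC lookup reads everything, which is exactly the slow-and-costly pattern Cliff describes. A hedged boto3 sketch where the table, database, partition column, and result bucket are all placeholders:

import boto3

athena = boto3.client("athena")

# Constraining the partition column (here dt) is what keeps Athena from
# scanning every log object in the bucket for one indicator lookup.
query = """
SELECT eventTime, eventName, sourceIPAddress
FROM cloudtrail_logs
WHERE dt BETWEEN '2025-01-01' AND '2025-01-07'
  AND sourceIPAddress = '203.0.113.7'
"""

resp = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "security_lake"},          # placeholder
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
print("execution id:", resp["QueryExecutionId"])

# Athena is asynchronous: poll get_query_execution until SUCCEEDED, then
# page through get_query_results -- fine for batch jobs, painful for
# interactive triage, which is the episode's core complaint.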
How do you establish trust in an AI SOC, especially in a regulated environment? Grant Oviatt, Head of SOC at Prophet Security and a former SOC leader at Mandiant and Red Canary, tackles this head-on as a self-proclaimed "AI skeptic". Grant shares that after 15 years of being "scared to death" by high-false-positive AI, modern LLMs have changed the game.

The key to trust lies in two pillars: explainability (is the decision reasonable?) and traceability (can you audit the entire data trail, including all 40-50 queries?). Grant talks about the critical architectural components for regulated industries, including single-tenancy, bring-your-own-cloud (BYOC) for data sovereignty, and model portability.

In this episode, we compare AI SOC to traditional MDRs and discuss real-world "bake-off" results, where an AI SOC reached 99.3% agreement with a human team across 12,000 alerts while being 11x faster, with an average investigation time of just four minutes.

Guest Socials - Grant's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Grant Oviatt?
(02:30) How to Establish Trust in an AI SOC for Regulated Environments
(03:45) Explainability vs. Traceability: The Two Pillars of Trust
(06:00) The "Hard SOC Life": Pre-AI vs. AI SOC
(09:00) From AI Skeptic to AI SOC Founder: What Changed?
(10:50) The "Aha!" Moment: Breaking Problems into Bite-Sized Pieces
(12:30) What Regulated Bodies Expect from an AI SOC
(13:30) Data Management: The Key for Regulated Industries (PII/PHI)
(14:40) Why Point-in-Time Queries are Safer than a SIEM
(15:10) Bring-Your-Own-Cloud (BYOC) for Financial Services
(16:20) Single-Tenant Architecture & No Training on Customer Data
(17:40) Bring-Your-Own-Model: The Rise of Model Portability
(19:20) AI SOC vs. MDR: Can it Replace Your Provider?
(19:50) The 4-Minute Investigation: Speed & Custom Detections
(21:20) The Reality of Building Your Own AI SOC (Build vs. Buy)
(23:10) Managing Model Drift & Updates
(24:30) Why Prophet Avoids MCPs: The Lack of Auditability
(26:10) How Far Can AI SOC Go? (Analysis vs. Threat Hunting)
(27:40) The Future: From "Human in the Loop" to "Manager in the Loop"
(28:20) Do We Still Need a Human in the Loop? (95% Auto-Closed)
(29:20) The Red Lines: What AI Shouldn't Automate (Yet)
(30:20) The Problem with "Creative" AI Remediation
(33:10) What AI SOC is Not Ready For (Risk Appetite)
(35:00) Gaining Confidence: The 12,000 Alert Bake-Off (99.3% Agreement)
(37:40) Fun Questions: Iron Mans, Texas BBQ & Seafood

Thank you to Prophet Security for sponsoring this episode.
Are we underestimating how the agentic world is impacting cybersecurity? We spoke to Mohan Kumar, who works on production security at Box, for a deep dive into the threats of truly autonomous AI agents.

The conversation moves beyond simple LLM applications (like chatbots) to the new world of dynamic, goal-driven agents that can take autonomous actions. Mohan takes us through why this shift introduces a new class of threats we aren't prepared for, such as agents developing new, unmonitorable communication methods ("Jibber-link" mode).

Mohan shares his top three security threats for AI agents in production (a sketch of a mitigation for the third follows these notes):
- Memory Poisoning: how an agent's trusted memory (long-term, short-term, or entity memory) can be corrupted via indirect prompt injection, altering its core decisions.
- Tool Misuse: the risk of agents connecting to rogue tools or MCP servers, or having their legitimate tools (like a calendar) exploited for data exfiltration.
- Privilege Compromise: the critical need to enforce least privilege on agents that can shift roles and identities, often through misconfiguration.

Guest Socials - Mohan's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(01:30) Who is Mohan Kumar? (Production Security at Box)
(03:30) LLM Application vs. AI Agent: What's the Difference?
(06:50) "We are totally underestimating" AI agent threats
(07:45) Software 3.0: When Prompts Become the New Software
(08:20) The "Jibber-link" Threat: Agents Ditching Human Language
(10:45) The Top 3 AI Agent Security Threats
(11:10) Threat 1: Memory Poisoning & Context Manipulation
(14:00) Threat 2: Tool Misuse (e.g., exploiting a calendar tool)
(16:50) Threat 3: Privilege Compromise (Least Privilege for Agents)
(18:20) How Do You Monitor & Audit Autonomous Agents?
(20:30) The Need for "Observer" Agents
(24:45) The 6 Components of an AI Agent Architecture
(27:00) Threat Modeling: Using CSA's MAESTRO Framework
(31:20) Are Leaks Only from Open Source Models or Closed (OpenAI, Claude) Too?
(34:10) The "Grandma Trick": Any Model is Susceptible
(38:15) Where is AI Agent Security Evolving? (Orchestration, Data, Interface)
(42:00) Fun Questions: Hacking MCPs, Skydiving & Risk, Biryani

Resources mentioned during the episode:
- Mohan's Udemy Course - AI Security Bootcamp: LLM Hacking Basics
- Andrej Karpathy's "Software 3.0" concept
- "Jibber-link Mode" video
- CrewAI Framework
- OWASP Top 10 for LLM Applications
- Cloud Security Alliance (CSA) MAESTRO Framework
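To make "least privilege for agents" concrete, here is a hedged sketch of a tool gate: each agent identity gets an explicit allow-list of tools, and every invocation is checked and logged before dispatch. The agent identities and tool names are invented for illustration; this is not Box's design or part of the MAESTRO framework.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gate")

# Invented policy: each agent identity maps to the only tools it may
# call. A runtime role or identity shift cannot widen this set.
TOOL_POLICY = {
    "calendar-assistant": {"calendar.read", "calendar.create_event"},
    "support-triager": {"tickets.read", "tickets.comment"},
}

def invoke_tool(agent_id: str, tool: str, args: dict) -> dict:
    allowed = TOOL_POLICY.get(agent_id, set())
    if tool not in allowed:
        # Tool misuse / privilege compromise attempt: deny, keep a trail.
        log.warning("DENIED %s -> %s(%s)", agent_id, tool, args)
        raise PermissionError(f"{agent_id} may not call {tool}")
    log.info("ALLOWED %s -> %s", agent_id, tool)
    return {"tool": tool, "args": args}  # dispatch to the real tool here

invoke_tool("calendar-assistant", "calendar.read", {"day": "2025-06-01"})
try:
    invoke_tool("calendar-assistant", "files.export", {"path": "/finance"})
except PermissionError as e:
    print("blocked:", e)

The denial log doubles as the audit trail an "observer" agent or human reviewer can watch for exfiltration attempts via legitimate-looking tools.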
The silos between Application Security and Cloud Security are officially breaking down, and AI is the primary catalyst. In this episode, Tejas Dakve (Senior Manager, Application Security, Bloomberg Industry Group) and Aditya Patel (VP of Cybersecurity Architecture) discuss how the AI-driven landscape is forcing a fundamental change in how we secure our applications and infrastructure.

The conversation explores why traditional security models and gates are "absolutely impossible" to maintain against the sheer speed and volume of AI-generated code. Learn why threat modeling is no longer a one-time event, how the lines between AppSec and CloudSec are merging, and why the future of the industry belongs to "T-shaped engineers" with a multidisciplinary range of skills.

Guest Socials - Tejas's LinkedIn + Aditya's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:30) Who is Tejas Dakve? (AppSec)
(03:40) Who is Aditya Patel? (CloudSec)
(04:30) Common Use Cases for AI in Cloud & Applications
(08:00) How AI Changed the Landscape for AppSec Teams
(09:00) Why Traditional Security Models Don't Work for AI
(11:00) AI is Breaking Down Security Silos (CloudSec & AppSec)
(12:15) The "Hallucination" Problem: AI Knows Everything Until You're the Expert
(12:45) The Speed & Volume of AI-Generated Code is the Real Challenge
(14:30) How to Handle the AI Code Explosion? "Paved Roads"
(15:45) From "Department of No" to "Department of Safe Yes"
(16:30) Baking Security into the AI Lifecycle (Like DevSecOps)
(18:25) Securing Agentic AI: Why IAM is More Important than the Chat
(24:00) The Silo: AppSec Doesn't Have Visibility into Cloud IAM
(25:00) Merging Threat Models: AppSec + CloudSec
(26:20) Using New Frameworks: MITRE ATLAS & OWASP LLM Top 10
(27:30) Threat Modeling Must Be a "Living & Breathing Process"
(28:30) Using AI for Automated Threat Modeling
(31:00) Building vs. Buying AI Security Tools
(34:10) Prioritizing Vulnerabilities: Quality Over Quantity
(37:20) The Rise of the "T-Shaped" Security Engineer
(39:20) Building AI Governance with Cross-Functional Teams
(40:10) Secure by Design for AI-Native Applications
(44:10) AI Adoption Maturity: The 5 Stages of Grief
(50:00) How the Security Role is Evolving with AI
(55:20) Career Advice for Evolving in the Age of AI
(01:00:00) Career Advice for Newcomers: Get an IT Help Desk Job
(01:03:00) Fun Questions: Cats, Philanthropy, and Thai Food

Resources discussed during the interview:
- Amazon Rufus (Amazon's AI review summarizer)
- OWASP Top 10 for LLMs
- STRIDE Threat Model (Microsoft methodology)
- MITRE ATLAS
- Cloud Security Alliance (CSA) MAESTRO Framework
- CISA KEV (Known Exploited Vulnerabilities)
- Book: Range: Why Generalists Triumph in a Specialized World by David Epstein
- Anjali Charitable Trust
- Aditya Patel's Blog
Is the AI SOC analyst just hype, or is there measurable ROI? We spoke to Edward Wu, founder of Dropzone AI, who shared insights from a recent Cloud Security Alliance (CSA) benchmark report quantifying the impact of AI augmentation on SOC teams. The study revealed significant improvements in speed (45-60% faster investigations) and completeness, even for analysts using the tech for the first time.

Edward contrasts the "robotic" limitations of traditional SOAR playbooks with the adaptive capabilities of agentic AI systems, which can autonomously investigate alerts end-to-end without pre-defined scripts. He argues that while AI won't entirely replace human analysts ("That's not going to happen"), it will automate much of the manual Tier 1 toil, freeing up humans for higher-value roles like security architecture, transformation, and detection engineering.

Guest Socials - Edward's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter
- Cloud Security BootCamp

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:40) Who is Edward Wu?
(03:30) The Evolution of AI Agents Since ChatGPT
(04:35) Surprising Findings from the CSA AI SOC Benchmark Report
(06:40) Why Has Traditional Security Automation (SOAR) Underdelivered?
(09:30) How AI SOC Analysts Differ from SOAR Playbooks
(11:30) Does Agentic AI Reduce the Need for Security Data Lakes?
(13:20) The Evolving ROI for SOC in the AI Era
(14:50) ROI Use Case 1: Reducing Alert Investigation Latency
(15:15) ROI Use Case 2: Increasing Alert Coverage (Mediums & Lows)
(16:20) ROI Use Case 3: Depth of Coverage & Skill Uniformity
(18:15) Achieving Both Speed and Thoroughness with AI
(19:40) How Far Can AI Go? Detection vs. Investigation vs. Response
(21:35) AI SOC Hype vs. Reality: Receptiveness and Trust
(24:20) The Future Role of Tier 1 SOC Analysts
(27:40) What Scale Benefits Most from AI SOC Analysts? (Enterprise & MSPs)
(29:00) The Build vs. Buy Dilemma for AI SOC Technology ($20M R&D Reality)
(33:10) Training Budgets: What Skills Should Future SOC Teams Learn?

Resources spoken about during the episode:
- Beyond the Hype: AI Agents in the SOC Benchmark Study
- Request a Demo here
Can you just use Claude Code or another LLM to "vibe code" your way into building an AI SOC? In this episode, Ariful Huq, Co-Founder and Head of Product at Exaforce, explains why the reality is far more complex than the hype suggests, and why a simple "bolt-on" approach to AI in the SOC is insufficient if you're looking for real security outcomes.

We speak about the foundational elements required to build a true AI SOC, starting with the data. It's "well more than just logs and event data", requiring the integration of config, code, and business context to remove guesswork and give LLMs the information they need to function accurately. The discussion covers the evolution beyond traditional SIEM capabilities, the challenges of data lake architectures for real-time security processing, and the critical need for domain-specific knowledge to build effective detections, especially for SaaS platforms like GitHub that lack native threat detection.

This one is for SOC leaders and CISOs feeling the pressure to integrate AI. Learn what it really takes to build an AI SOC, the unspoken complexities, and how the role of the security professional is evolving towards the "full-stack security engineer".

Guest Socials - Ariful's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter
- Cloud Security BootCamp

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:30) Who is Ariful Huq?
(03:40) Can You Just Use Claude Code to Build an AI SOC?
(06:50) Why a "Bolt-On" AI Approach is Tough for SOCs
(08:15) The Importance of Data: Beyond Logs to Config, Code & Context
(09:10) Building AI Native Capabilities for Every SOC Task (Detection, Triage, Investigation, Response)
(12:40) The Impact of Cloud & SaaS Data Volume on Traditional SIEMs
(14:15) Building AI Capabilities on AWS Bedrock: Best Practices & Challenges
(17:20) Why SIEM Might Not Be Good Enough Anymore
(19:10) The Critical Role of Diverse Data (Config, Code, Context) for AI Accuracy
(22:15) Data Lake Challenges (e.g., Snowflake) for Real-Time Security Processing
(26:50) Detection Coverage Blind Spots, Especially for SaaS (e.g., GitHub)
(31:40) Building Trust & Transparency in AI SOCs
(35:40) Rethinking the SOC Team Structure: The Rise of the Full-Stack Security Engineer
(42:15) Final Questions: Running, Family, and Turkish Food
How do you perform incident response on a Kubernetes cluster when you're not even on the same network? In this episode, Damien Burks, Senior Security Engineer, breaks down the immense challenges of container security and why most commercial tools are failing at automated response.

While many CNAPPs provide runtime detection, they lack a "sophisticated approach to automating incident response or containment" in complex environments like private EKS clusters. Damien shares his hands-on experience building a platform that uses a dynamically deployed Lambda function to contain a compromised EKS node in just 10 minutes (sketched after these notes), a process that would otherwise take hours of manual work and approvals.

This is a guide for any DevSecOps or cloud security professional tasked with securing containerized workloads. The conversation also covers a layered prevention strategy, the evolving role of the cloud security engineer, and career advice for those looking to enter the field.

Guest Socials - Damien's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter
- Cloud Security BootCamp

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:15) Who is Damien Burks?
(03:20) The State of Cloud Incident Response in 2025
(05:15) Why There is No Sophisticated, Automated IR for Kubernetes
(06:20) A Deep Dive into Kubernetes Incident Response
(07:30) The Unique Challenge of a Private EKS Cluster
(12:15) A Layered Approach to Prevention in a DevSecOps Culture
(17:00) How to Automate Containment in a Private EKS Cluster
(17:40) From Hours to 10 Minutes: The Impact of Automation
(22:00) The Evolving & Complex Role of the Cloud Security Engineer
(25:40) Do We Have Too Much Visibility or Not Enough?
(29:00) Career Path: The Value of Learning to Code for DevSecOps
(35:00) Damien's Hot Take: "Multi-Cloud Just Means Chaos"
(44:20) Career Advice for Traditional IR Professionals Moving to Cloud
(47:50) Final Questions: Video Games, Life's Journey, and Gumbo

Resources spoken about during the interview:
- Damien's Website
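A hedged sketch of the containment idea described above: a Lambda handler that quarantines a compromised EKS worker by swapping its security groups for a deny-all group and protecting the instance from termination for forensics. This is a simplification under invented names (the quarantine SG id is a placeholder, and real containment would also cordon/drain the node in Kubernetes); it is not Damien's actual platform.

import boto3

QUARANTINE_SG = "sg-0123456789abcdef0"  # placeholder: deny-all security group

def handler(event, context):
    """Invoked by a detection pipeline with the compromised node's instance id."""
    instance_id = event["instance_id"]
    ec2 = boto3.client("ec2")

    # 1. Network isolation: replace every attached SG with the deny-all
    #    quarantine SG. Note that already-tracked connections may persist,
    #    which is one reason real runbooks also cordon the node in K8s.
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])

    # 2. Preserve evidence: keep the node from being recycled mid-forensics.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        DisableApiTermination={"Value": True},
    )
    return {"contained": instance_id}

Deploying this function dynamically, inside the same VPC as the private cluster, is what collapses "hours of approvals" into a ten-minute automated action.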
"The next five years are gonna be wild." That's the verdict from Forrester Principal Analyst Allie Mellen on the state of Security Operations. This episode dives into the "massive reset" that is transforming the SOC, driven by the rise of generative AI and a revolution in data management.Allie explains why the traditional L1, L2, L3 SOC model, long considered a "rite of passage" that leads to burnout is being replaced by a more agile and effective Detection Engineering structure. As a self-proclaimed "AI skeptic," she cuts through the marketing hype to reveal what's real and what's not, arguing that while we are "not really at the point of agentic" AI, the real value lies in specialized triage and investigation agents.Guest Socials - Allie's Linkedin Podcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter - Cloud Security BootCampIf you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(02:35) Who is Allie Mellen?(03:15) What is Security Operations in 2025? The SIEM & XDR Shakeup(06:20) The Rise of Security Data Lakes & Data Pipeline Tools(09:20) A "Great Reset" is Coming for the SOC(10:30) Why the L1/L2/L3 Model is a Burnout Machine(13:25) The Future is Detection Engineering: An "Infinite Loop of Improvement"(17:10) Using AI Hallucinations as a Feature for New Detections(18:30) AI in the SOC: Separating Hype from Reality(22:30) What is "Agentic AI" (and Are We There Yet?)(26:20) "No One Knows How to Secure AI": The Detection & Response Challenge(28:10) The Critical Role of Observability Data for AI Security(31:30) Are SOC Teams Actually Using AI Today?(34:30) How to Build a SOC Team in the AI Era: Uplift & Upskill(39:20) The 3 Things to Look for When Buying Security AI Tools(41:40) Final Questions: Reading, Cooking, and SushiResources:You can read Allie's blogs here
The race to deploy AI is on, but are the cloud platforms we rely on secure by default? This episode features a practical, in-the-weeds discussion with Kyler Middleton (Principal Developer, Internal AI Solutions, Veradigm) and Sai Gunaranjan (Lead Architect, Veradigm) as they compare the security realities of building AI applications on the two largest cloud providers.

The conversation uncovers critical security gaps you need to be aware of. Sai reveals that Azure AI defaults to sending customer data globally for processing to keep costs low, a major compliance risk that must be manually disabled. Kyler breaks down the challenges with AWS Bedrock, including the lack of resource-level security policies and a consolidated logging system that mixes all AI conversations into one place, making incident response incredibly difficult (a sketch of that logging configuration follows these notes).

This is an essential guide for any cloud security or platform engineer moving into the AI space. Learn about real-world architectural patterns, the insecure defaults to watch out for, and the new skills required to transition from Cloud Security Engineer to AI Security Engineer.

Guest Socials - Kyler's LinkedIn + Sai's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter
- Cloud Security BootCamp

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:30) Who are Kyler Middleton & Sai Gunaranjan?
(03:40) Common AI Use Cases: Chatbots & Product Integration
(05:15) Beyond IAM: The Full Scope of AI Security in the Cloud
(07:30) The Role of the Cloud in Deploying Secure AI
(13:10) AWS AI Architecture: Bedrock, Knowledge Bases & Vector Databases
(15:10) Azure AI Architecture: AI Services, ML Workspaces & Foundry
(21:00) The "Delete the Frontend" Problem: The Risk of Agentic AI
(23:25) A Security Deep Dive into Microsoft Azure AI Services
(29:20) Azure's Insecure Default: Sending Your Data Globally
(31:35) A Security Deep Dive into AWS Bedrock
(32:30) The Critical Gap: No Resource Policies in AWS Bedrock
(33:20) AWS Bedrock's Logging Problem: A Nightmare for Incident Response
(36:15) AWS vs. Azure: Which is More Secure for AI Today?
(39:20) A Maturity Model for Adopting AI Security in the Cloud
(44:15) From Cloud Security to AI Security Engineer: What's the Skill Gap?
(48:45) Final Questions: Toddlers, Kickball, Barbecue & Ice Cream
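On the Bedrock logging point: model invocation logging is account-level and off by default, and when enabled it sends every invocation to one destination, which is the consolidation problem the episode flags. A hedged boto3 sketch of turning it on (bucket, log group, and role ARN are placeholders; verify field names against current AWS documentation):

import boto3

bedrock = boto3.client("bedrock")

# Account-wide model invocation logging: one configuration, one destination,
# for every AI conversation in the account -- useful for audit, painful to
# untangle per-application during incident response.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "s3Config": {
            "bucketName": "bedrock-invocation-logs",   # placeholder
            "keyPrefix": "invocations/",
        },
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/invocations",    # placeholder
            "roleArn": "arn:aws:iam::111122223333:role/BedrockLogging",
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }
)
print(bedrock.get_model_invocation_logging_configuration()["loggingConfig"])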
For the last 30 years, email security has been stuck in the past, focusing almost entirely on stopping bad things from getting into the inbox. In this episode, Rajan Kapoor, Field CISO at Material Security and former Director of Security at Dropbox, argues that this pre-breach mindset is dangerously outdated. The real challenge today is post-breach: protecting the sensitive data that already lives inside your mailboxes.

The conversation explores why we must evolve from "email security" to the broader concept of "workspace security". Rajan explains how interconnected productivity suites like Google Workspace and Microsoft 365 have turned the inbox into a gateway to everything else: Drive, accounts, and sensitive company data. We also discuss how the rise of AI co-pilots will create new risks, as they can instantly find and surface over-shared data that was previously hidden in plain sight.

Guest Socials - Rajan's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter
- Cloud Security BootCamp

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Rajan Kapoor? Field CISO at Material Security
(02:38) What is Email Security in 2025? The 30-Year-Old Problem
(03:20) The Critical Shift: From Pre-Breach to Post-Breach Protection
(04:20) The Rise of Workspace Security: Beyond the Inbox
(06:00) Why Focusing on Email is "Not Even Half" The Problem
(06:50) Are Microsoft 365 Security Challenges Different from Google's?
(09:30) Rethinking the Approach to Email Security
(11:40) How AI Co-Pilots Will Exploit Your Over-Shared Data
(13:30) A Real-World Attack: From Email to Malicious OAuth App
(17:00) How Should CISOs Structure Their Teams for Workspace Security?
(19:25) The Role of CASB vs. API-Based Security for Data at Rest
(23:10) How CISOs Can Separate Signal From Noise in a Crowded Market
(24:45) Final Questions: Home Automation, Career Risks, and Ethiopian Food
You have the visibility, you see the alerts, but your security backlog is still growing faster than your team can fix it. So, are you actually getting more secure? In this episode, Snir Ben Shimol, CEO of Zest Security, argues that "knowing about an open door or an open window don't make you more secure... just make you more aware".

We spoke about the traditional "whack-a-mole" approach to vulnerability management. Snir shared an analogy: when planning a trip, the most important question isn't who goes first, but "what is the vehicle?". He explains how AI's ability to perform recursive analysis can find the "vehicle" for your remediation efforts: that one base image upgrade or single code change that can reduce 20-30% of your entire vulnerability backlog in one action (a toy version of that grouping follows these notes).

Guest Socials - Snir's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter
- Cloud Security BootCamp

If you are interested in AI Security, you can check out our sister podcast - AI Cybersecurity Podcast

Questions asked:
(00:00) Introduction
(02:30) Who is Snir Ben Shimol?
(03:20) What is Cloud Security in 2025? Moving from Visibility to Action
(07:25) Why Visibility Isn't Making You More Secure
(10:20) The Slow, Manual Process of Remediation Today: Losing the Battle
(16:00) The "Vehicle vs. Priority" Analogy for Vulnerability Management
(17:45) How AI Enables Recursive Analysis to Find the Most Impactful Fix
(20:00) The Three Pillars of AI-Driven Cloud Security Resolution
(22:30) Why Your CNAPP/CSPM Can't Solve the Remediation Problem
(25:20) Why Traditional Prioritization (EPSS, KEV) is a Waterfall Approach
(28:10) The "Buy vs. Build" Dilemma for AI Security Solutions
(30:15) The Complexity of Building a Multi-Agent AI System for Security
(41:45) How CISOs Can Separate Real AI Products from Marketing Fluff
(44:50) Final Questions: Surfing, Communication, and Thai Food
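The "find the vehicle" idea reduces, at its simplest, to a grouping exercise: instead of ranking individual CVEs, rank candidate fixes by how much of the backlog each one clears. A toy sketch with invented findings data (real systems would derive the fix mapping from dependency and image analysis, which is where the AI comes in):

from collections import Counter

# Invented backlog: each finding tagged with the single change that fixes it
# (a base image bump, a library upgrade, a config change, ...).
backlog = [
    {"cve": "CVE-2024-0001", "fix": "upgrade base image python:3.9 -> 3.12"},
    {"cve": "CVE-2024-0002", "fix": "upgrade base image python:3.9 -> 3.12"},
    {"cve": "CVE-2024-0003", "fix": "bump openssl in golden AMI"},
    {"cve": "CVE-2024-0004", "fix": "upgrade base image python:3.9 -> 3.12"},
    {"cve": "CVE-2024-0005", "fix": "rotate leaked token"},
]

# Rank remediation "vehicles" by backlog coverage, not findings by severity.
by_fix = Counter(f["fix"] for f in backlog)
for fix, n in by_fix.most_common():
    print(f"{n / len(backlog):5.0%} of backlog -> {fix}")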





Great podcast about Cloud Security! Highly recommend it! Ashish is the best!
Thank you so much for the amazing resources, will keep an eye on them always!
The podcast has recorded 50 episodes so far, with guests from Netflix, Capital One, and HashiCorp on how they do security in the cloud; definitely worth checking out. FULL DISCLOSURE: I'm the host of the podcast, and each month we pick a review to be featured on the podcast website (www.cloudsecuritypodcast.tv). Make sure you leave a review to be featured.