Cloud Security Podcast by Google
Author: Anton Chuvakin
© Copyright Google Cloud
Description
Cloud Security Podcast by Google focuses on security in the cloud, delivering security from the cloud, and all things at the intersection of security and cloud. Of course, we will also cover what we are doing in Google Cloud to help keep our users' data safe and workloads secure.
We're going to do our best to avoid security theater, and cut to the heart of real security questions and issues. Expect us to question threat models and ask if something is done for the data subject's benefit or just for organizational benefit.
We hope you'll join us if you're interested in where technology overlaps with process and bumps up against organizational design. We're hoping to attract listeners who are happy to hear conventional wisdom questioned, and who are curious about what lessons we can and can't keep as the world moves from on-premises computing to cloud computing.
269 Episodes
Guests: Kelli Vanderlee, Senior Manager, Threat Analysis, Mandiant, Google Cloud; Scott Runnels, Mandiant Incident Response, Google Cloud
Topics: Do we need to rethink "Mean Time to Respond" entirely, or are we just in deep trouble? Why are threat groups collaborating so well, and are there actual lessons for defenders in their "business" model? What is the scalable advice for teams worried about voice phishing and GenAI cloning? What does "weaponizing the administrative fabric" actually mean in a world where identity is the perimeter? Why is identity/SaaS compromise "news" in 2026 when cloud security folks have been shouting about it for years? What actually changed? What's the latest in supply chain compromise, particularly regarding malicious open-source packages? How do we defend against malware that is "lazy" enough to use the victim's own AI tools for reconnaissance? What is the specific advice for Detection and Response (D&R) teams to handle "living off the land" (or "living off the cloud")? How do you fix the situation when IT and Security departments genuinely hate each other? Besides reading the report, what is the one book or piece of advice for a CISO to survive this year?
Resources: Video version M-Trends 2026 Report EP222 From Post-IR Lessons to Proactive Security: Deconstructing Mandiant M-Trends EP254 Escaping 1990s Vulnerability Management: From Unauthenticated Scans to AI-Driven Mitigation EP205 Cybersecurity Forecast 2025: Beyond the Hype and into the Reality EP147 Special: 2024 Security Forecast Report "The Evolution of Cooperation" book
Guest: Raffael Marty, Operating Advisor, a SIEM legend since 1999
Topics: You argue that declaring the existing SIEM obsolete is a "marketing slogan" rather than a true thesis. What is the real pain point and the actual gap in traditional SIEMs, as opposed to the more sensational claims? You highlight that "correlation, state, timelines, and real-time detection require locality," making centralization a necessary trade-off. Can a truly federated or decoupled SIEM architecture achieve the same fidelity and real-time performance for complex, stateful detections as a centralized one? You call the rise of independent security data pipelines the "SIEM Trojan Horse." How quickly is this abstraction layer turning SIEM into a "swappable" component, and what should SIEM vendors have done differently years ago to prevent this market from existing? This "AI SOC" thing, is this even real? Is "AI in a SOC" a better label? Do you think major SIEM vendors will own this very soon, like they did with UEBA and SOAR? If volume-based pricing is flawed because it penalizes good security hygiene, what is a better SIEM pricing model that fairly addresses compute, enrichment, and retention costs without just shifting the volume cost to unpredictable query charges? You question the idea that startups can find a better way to release detection rules than large vendors with significant content teams. What metrics should security leaders use to evaluate the quality of a vendor's detection engineering (DE) output beyond just coverage numbers? Can AI fix DE?
Resources: Video version The SIEM Maturity Framework: A Practical Scoring Tool for Security Analytics Platforms and raffy.ch/SIEM/ The Gaps That Created the New Wave of SIEM and AI SOC Vendors How AI Impacts the Cyber Market and The Future of SIEM Why Venture Capital Is Betting Against Traditional SIEMs EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI EP234 The SIEM Paradox: Logs, Lies, and Failing to Detect EP125 Will SIEM Ever Die: SIEM Lessons from the Past for the Future Decoupled SIEM: Brilliant or Stupid? Decoupled SIEM: Where I Think We Are Now?
Guest: Allie Mellen, Principal Analyst @ Forrester, author of "Code War: How Nations Hack, Spy, and Shape the Digital Battlefield"
Topics: Your book focuses on the US, China, and Russia. When you were planning the book, did you also want to cover players like Israel, Iran, and North Korea? Most of our listeners are migrating to or operating heavily in the cloud. As nations refine their "digital battlefield" strategies, does the "shared responsibility model" actually hold up against a nation-state actor? How does a company's detection strategy need to change when the adversary isn't a teenager looking for a ransom, but a state-funded group whose goal might be long-term persistence or subtle data manipulation? How should people allocate their resources to defending against both of these threats? How afraid are you of "bad guy with AI" scenarios? Mild anxiety or apocalyptic fears? Do you see AI primarily helping "Tier 2" nations close the capability gap with the "Big Three," or does it just further cement the dominance of the nations that own the underlying compute and models? You've spent a lot of time as an analyst looking at how enterprises buy and run security tech. For a CISO at (say) a mid-tier logistics company, should "nation-state cyberattacks" even be on their threat model? Or is worrying about the spies just a form of security theater when they haven't even solved basic credential theft yet?
Resources: Video version "Code War: How Nations Hack, Spy, and Shape the Digital Battlefield" by Allie Mellen Allie Mellen substack The source for the original "air defense on the roof" argument (2008) EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking EP256 Rewiring Democracy & Hacking Trust: Bruce Schneier on the AI Offense-Defense Balance EP156 Living Off the Land and Attacking Critical Infrastructure: Mandiant Incident Deep Dive "Disrupting the first reported AI-orchestrated cyber espionage campaign" report
Guest: Alastair Paterson, CEO and co-founder @ Harmonic Security
Topics: Harmonic Security focuses on securing generative AI in use. Can you walk us through a real, anonymized example of a data leak caused by employee AI usage that your platform has identified? AI governance gets thrown around a lot. What does this mean in the context of Shadow AI? How should organizations be thinking about governing AI in light of upcoming AI regulations in the US and in the EU? If we generally agree that employees are using AI tools before they are sanctioned, how can organizations control this? Network, API, endpoint? Many organizations struggle with the "ban vs. embrace" debate for generative AI. Based on your experience, what's a compelling argument for moving from a blanket ban to a managed, secure adoption model? Can you share a success story where this approach demonstrably reduced risk? The term "shadow AI" is often used interchangeably with "shadow IT" (but for AI-powered applications), but you've highlighted that AI is a different beast. What is the single biggest distinction between managing the risk of unsanctioned AI tools versus unsanctioned IT applications? Looking forward, where do you see the biggest risks in the evolution of shadow AI? For instance, will the next threat be from highly specialized AI agents trained on proprietary data, or from the rapid proliferation of new, unmonitored open-source models? Given the speed of change in this space, what's one piece of advice you'd give to a CISO today who is just beginning to get a handle on their organization's shadow AI problem?
Resources: Video version Harmonic Security research Shadow AI Strikes Back: Enterprise AI Absent Oversight in the Age of Gen AI blog Shadow Agents: A New Era of Shadow AI Risk in the Enterprise blog (RSA 2026 presentation coming!) Spotlighting 'shadow AI': How to protect against risky AI practices blog EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side (aka "dirty bomb episode") A Conversation with Alastair Paterson from Harmonic Security video
Guests: Alexander Pabst, Global Deputy CISO, Allianz SE; Michael Sinno, Director of D&R, Google
Topics: We've spent decades obsessed with MTTD (Mean Time to Detect) and MTTR (Mean Time to Respond). As AI agents begin to handle the bulk of triage at machine speed, do these metrics become "vanity metrics"? If an AI resolves an alert in seconds, does measuring the "mean" still tell us anything about the health of our security program, or should we be looking at "Time to Context" instead? You mentioned the Maturity Triangle. Can you walk us through that framework? Specifically, how does AI change the balance between the three points of that triangle—is it shifting us from a "People-heavy" model to something more "Engineering-led," and where does the "Measurement" piece sit? Google is famous for its "Engineering-led" approach to D&R. How is Google currently measuring the success of its own internal D&R program? Specifically, how are you quantifying "Toil Reduction"? Are we measuring how many hours we saved, or are we measuring the complexity of the threats our humans are now free to hunt? Toil reduction is a laudable goal for the team members; what are the metrics we track and report up to document the overall improvement in D&R for Google's board? When you talk to your board about the success of AI in your security program, what are the 2 or 3 "Golden Metrics" that actually move the needle for them? How do you prove that an AI-driven SOC is actually better, not just faster? We often talk about AI as an "assistant," but we're moving toward Agentic SOCs. How should organizations measure the "unit economics" of their SOC? Should we be tracking the ratio of AI-handled vs. human-handled incidents, and at what point does a high AI-handle rate become a risk rather than a success?
Resources: Video version EP252 The Agentic SOC Reality: Governing AI Agents, Data Fidelity, and Measuring Success EP238 Google Lessons for Using AI Agents for Securing Our Enterprise EP91 "Hacking Google", Op Aurora and Insider Threat at Google EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI EP189 How Google Does Security Programs at Scale: CISO Insights EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil The SOC Metrics that Matter…or Do They? blog An Actual Complete List Of SOC Metrics (And Your Path To DIY) blog Achieving Autonomic Security Operations: Why metrics matter (but not how you think) blog
Guest: Daniel Lyman, VP of Threat Detection and Response, Fiserv
Topics: What is the right way for people to bridge the gap and translate executive dreams and board goals into the reality of life on the ground? How do we talk to people who think they have "transformed" their SOC simply by buying a better, shinier product (like a modern SIEM) while leaving their old processes intact? What are the specific challenges and advantages you've seen with a federated SOC versus a centralized one? What does a "federated" or "sub-SOC" model actually mean in practice? Why is the message that "EDR doesn't cover everything" so hard for some people to hear? Is this obsession with EDR a business decision or technology debt? How do you expect AI to change the calculus around data centralization versus data federation? What is your favorite example of telemetry that is useful, but usually excluded from a SIEM? What are the Detection and Response organizational metrics that you think are most valuable? Is the continued use of Excel an issue of tooling, laziness, or just because it is a fundamentally good way to interact with a small database?
Resources: Video version "In My Time of Dying" book EP258 Why Your Security Strategy Needs an Immune System, Not a Fortress with Royal Hansen EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective The Gravity of Process: Why New Tech Never Fixes Broken Process and Can AI Change It? blog
Guest: Alex Shulman-Peleg, Global CISO at Kraken
Topics: You mentioned that centralized security can't work anymore. Can you elaborate on the key changes—driven by cloud, SaaS, and AI—that have made this traditional model unsustainable for a modern organization? Why do some persist with a centralized, top-down approach to security despite that? What do you mean by "Freedom, Responsibility and distributed security"? Can you explain the difference between "centralized security" and what you define as "security with distributed ownership"? Is this the same as "federated"? In our conversation you mentioned "cloud and AI-native"; what do you mean by this (especially "AI-native") and how is this changing your approach to security? You introduce the concept of "security as quality," suggesting that a security-unaware developer is essentially a bad software developer. How do you shift the culture and internal metrics to make security an inherent quality standard, rather than a separate, compliance-driven checklist? You likened the central security team's new role to a "911 emergency service." Beyond incident response, what stays central no matter what, and how does the central team successfully influence the security posture of the entire organization without being directly responsible for the day-to-day work?
Resources: Video version EP129 How CISO Cloud Dreams and Realities Collide EP258 Why Your Security Strategy Needs an Immune System, Not a Fortress with Royal Hansen EP212 Securing the Cloud at Scale: Modern Bank CISO on Metrics, Challenges, and SecOps
Guest: Dennis Chow, Director of Detection Engineering at UKG
Topics: We ended our season talking about the AI apocalypse. In your opinion, are we living in the world that the guests describe in their apocalypse paper? Do you think AI-powered attacks are really here, and if so, what is your plan to respond? Is it faster patching? Better D&R? Something else altogether? Your team has a hybrid agent workflow: could you tell us what that means? Also, define "AI agent" please. What are your production use cases for AI and AI agents in your SOC? What are your overall SOC metrics and how does the agentic AI part play into that? It's one thing to ask a team "hey what did y'all do last week" and get a good report - how are you measuring the agentic parts of your SOC? How are you thinking about what comes next once AI is automatically writing good (!) rules for your team out of research blog posts and TI papers?
Resources: Video version Agentic AI in the SOC: Build vs Buy Lessons EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking EP256 Rewiring Democracy & Hacking Trust: Bruce Schneier on the AI Offense-Defense Balance EP252 The Agentic SOC Reality: Governing AI Agents, Data Fidelity, and Measuring Success EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI EP242 The AI SOC: Is This The Automation We've Been Waiting For? Google Cloud Skill Boost
Guest: Vishwas Manral, CEO at Precize.ai
Topics: Why is agent security so different from "just" LLM security? Why now? Agents are coming, sure, but they are - to put it mildly - not in wide use. Why create a top 10 list now and not wait for people to make the mistakes? It sounds like "agents + IAM" is a disaster waiting to happen. What should be our approach for solving this? Do we have one? Which agentic AI risk keeps you up at night? Is there an interesting AI shared responsibility angle here? Agent developer, operator, downstream system operator? We are seeing a lot of experimentation, but sometimes little value from agents. What are the biggest challenges of secure agentic AI and AI agent adoption in enterprises?
Resources: Top 10 threats and mitigation for AI Agents Past podcast AI episodes Cloud CISO Perspectives: How Google secures AI Agents (and paper) Top AI Risks from SAIF CoSAI From turnkey to custom: Tailor your AI risk governance to help build confidence
Guest: Elie Bursztein, Distinguished Scientist, Google DeepMind
Topics: What is Sec-Gemini, and why are we building it? How does DeepMind decide when to create something like Sec-Gemini? What motivates a decision to focus on something like this vs. anything else we might build as a dedicated set of regular Gemini capabilities? What is Sec-Gemini good at? How do we know it's good at those things? Where and how is it better than a general LLM? Are we using Sec-Gemini internally?
Resources: Video version EP238 Google Lessons for Using AI Agents for Securing Our Enterprise EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side Big Sleep, CodeMender blogs
Guest: Royal Hansen, VP of Engineering at Google, former CISO of Alphabet
Topics: The "God-Like Designer" Fallacy: You've argued that we need to move away from the "God-like designer" model of security—where we pre-calculate every risk like building a bridge—and towards a biological model. Can you explain why that old engineering mindset is becoming risky in today's cloud and AI environments? Resilience vs. Robustness: In your view, what is the practical difference between a robust system (like a fortress that eventually breaks) and a resilient system (like an immune system)? How does a CISO start shifting their team's focus from creating the former to nurturing the latter? Securing the Unknown: We're entering an era where AI agents will call other agents, creating pathways we never explicitly designed. If we can't predict these interactions, how can we possibly secure them? What does "emergent security" look like in practice? Primitives for Agents: You mentioned the need for new "biological primitives" for these agents—things like time-bound access or inherent throttling. Are these just new names for old concepts like Zero Trust, or is there something different about how we need to apply them to AI? The Compliance Friction: There's a massive tension between this dynamic, probabilistic reality and the static, checklist-based world of many compliance regimes. How do you, as a leader, bridge that gap? How do you convince an auditor or a board that a "probabilistic" approach doesn't just mean "we don't know for sure"? "Safe" Failures: How can organizations get comfortable with the idea of designing for allowable failure in their subsystems, rather than striving for 100% uptime and security everywhere?
Resources: Video version EP189 How Google Does Security Programs at Scale: CISO Insights BigSleep and CodeMender agents "Chasing the Rabbit" book "How Life Works: A User's Guide to the New Biology" book
Guest: Chris Sistrunk, Technical Leader, OT Consulting, Mandiant
Topics: When we hear "attacks on Operational Technology (OT)," some think of Stuxnet targeting PLCs, or even the backdoored pipeline control software plot of the 1980s. Is this space always so spectacular, or are there less "kaboom" style attacks we are more concerned about in practice? Given the old "air-gapped" mindset of many OT environments, what are the most common security gaps or blind spots you see when organizations start to integrate cloud services for things like data analytics or remote monitoring? How is the shift to cloud connectivity - for things like data analytics, centralized management, and remote access - changing the security posture of these systems? What's a real-world example of a positive security outcome you've seen as a direct result of this cloud adoption? How do the Tactics, Techniques, and Procedures outlined in the MITRE ATT&CK for ICS framework change or evolve when attackers can leverage cloud-based reconnaissance and command-and-control infrastructure to target OT networks? Can you provide an example? OT environments are generating vast amounts of operational data. What is interesting for OT Detection and Response (D&R)?
Resources: Video version Cybersecurity Forecast 2026 report by Google Complex, hybrid manufacturing needs strong security. Here's how CISOs can get it done blog "Security Guidance for Cloud-Enabled Hybrid Operational Technology Networks" paper by Google Cloud Office of the CISO DEF CON 23 - Chris Sistrunk - NSM 101 for ICS MITRE ATT&CK for ICS
Guest: Bruce Schneier
Topics: Do you believe that AI is going to end up being a net improvement for defenders or attackers? Is short term vs. long term different? We're excited about the new book you have coming out with your co-author Nathan Sanders, "Rewiring Democracy". We want to ask the same question, but for society: do you think AI is going to end up helping the forces of liberal democracy, or the forces of corruption, illiberalism, and authoritarianism? If exploitation is always cheaper than patching (and attackers don't follow as many rules and procedures), do we have a chance here? If this requires pervasive and fast "humanless" automatic patching (kinda like what Chrome has done for years), will this ever work for most organizations? Do defenders have to do the same and just discover and fix issues faster? Or can we use AI somehow differently? Does this make defense in depth more important? How do you see AI as changing how society develops and maintains trust?
Resources: "Rewiring Democracy" book "Infomocracy" trilogy Agentic AI's OODA Loop Problem EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking AI and Trust AI and Data Integrity EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025 RSA 2025: AI's Promise vs. Security's Past — A Reality Check
Guest: Heather Adkins, VP of Security Engineering, Google
Topics: The term "AI Hacking Singularity" sounds like pure sci-fi, yet you and some other very credible folks are using it to describe an imminent threat. How much of this is hyperbole to shock the complacent, and how much is based on actual, observed capabilities today? Can autonomous AI agents really achieve that "exploit-at-machine-velocity" without human intervention for the zero-day discovery phase? On the other hand, why may it actually not happen? When we talk about autonomous AI attack platforms, are we talking about highly resourced nation-states and top-tier criminal groups, or will this capability truly be accessible to the average threat actor within the next 6-12 months? What's the "Metasploit" equivalent for AI-powered exploitation that will be ubiquitous? Can you paint a realistic picture of the worst-case scenario that autonomous AI hacking enables? Is it a complete breakdown of patch cycles, a global infrastructure collapse, or something worse? If attackers are operating at "machine speed," the human defender is fundamentally outmatched. Is there a genuine "AI-to-AI" counter-tactic that doesn't just devolve into an infinite arms race? Or can we counter without AI at all? Given that AI can expedite vulnerability discovery, how does this amplified threat vector impact the software supply chain? If a dependency is compromised within minutes of a new vulnerability being created, does this force the industry to completely abandon the open-source model, or does it demand a radical, real-time security scanning and patching system that only a handful of tech giants can afford? Are current proposed regulations, like those focusing on model safety or disclosure, even targeting the right problem? If the real danger is the combinatorial speed of autonomous attack agents, what simple, impactful policy change should world governments prioritize right now?
Resources: "Autonomous AI hacking and the future of cybersecurity" article EP20 Security Operations, Reliability, and Securing Google with Heather Adkins Introducing CodeMender: an AI agent for code security EP251 Beyond Fancy Scripts: Can AI Red Teaming Find Truly Novel Attacks? Daniel Miessler site and podcast "How SAIF can accelerate secure AI experiments" blog "Staying on top of AI Developments" blog
Guest: Caleb Hoch, Consulting Manager on Security Transformation Team, Mandiant, Google Cloud
Topics: How has vulnerability management (VM) evolved beyond basic scanning and reporting, and what are the biggest gaps between modern practices and what organizations are actually doing? Why are so many organizations stuck with 1990s VM practices? Why is mitigation planning still hard for so many? Why do many organizations, including large ones, still rely on unauthenticated scans despite the known importance of authenticated scanning for accurate results? What constitutes a "gold standard" vulnerability prioritization process in 2025 that moves beyond CVSS scores to incorporate threat intelligence, asset criticality, and other contextual factors? What are the primary human and organizational challenges in vulnerability management, and how can issues like unclear governance, lack of accountability, and fear of system crashes be overcome? How is AI impacting vulnerability management, and does the shift to cloud environments fundamentally change VM practices?
Resources: EP109 How Google Does Vulnerability Management: The Not So Secret Secrets! EP246 From Scanners to AI: 25 Years of Vulnerability Management with Qualys CEO Sumedh Thakar EP248 Cloud IR Tabletop Wins: How to Stop Playing Security Theater and Start Practicing How Low Can You Go? An Analysis of 2023 Time-to-Exploit Trends Mandiant M-Trends 2025 EP204 Beyond PCAST: Phil Venables on the Future of Resilience and Leading Indicators Mandiant Vulnerability Management
Guests: Sivanesh Ashok, bug bounty hunter; Sreeram KL, bug bounty hunter
Topics: We hear from the Cloud VRP team that you write excellent bug bounty reports - is there any advice you'd give to other researchers when they write reports? You are among Cloud VRP's top researchers and won the MVH (most valuable hacker) award at their event in June - what do you think makes you so successful at finding issues? What is BugSWAT? What do you find most enjoyable and least enjoyable about the VRP? What is the single best piece of advice you'd give an aspiring cloud bug hunter today?
Resources: EP220 Big Rewards for Cloud Security: Exploring the Google VRP Cloud Vulnerability Reward Program Rules Insights from BugSWAT Google Cloud's Vulnerability Reward Program Critical Thinking Podcast
Guests: Alexander Pabst, Deputy Group CISO, Allianz; Lars Koenig, Global Head of D&R, Allianz
Topics: Moving from traditional SIEM to an agentic SOC model, especially in a heavily regulated insurer, is a massive undertaking. What did the collaboration model with your vendor look like? Agentic AI introduces a new layer of risk - that of unconstrained or unintended autonomous action. In the context of Allianz, how did you establish the governance framework for the SOC alert triage agents? Where did you draw the line between fully automated action and the mandatory "human-in-the-loop" for investigation or response? Agentic triage is only as good as the data it analyzes. From your perspective, what were the biggest challenges - and wins - in ensuring the data fidelity, freshness, and completeness in your SIEM to fuel reliable agent decisions? We've been talking about SOC automation for years, but this agentic wave feels different. As a deputy CISO, what was your primary, non-negotiable goal for the agent? Was it purely Mean Time to Respond (MTTR) reduction, or was the bigger strategic prize to fundamentally re-skill and uplevel your Tier 2/3 analysts by removing the low-value alert noise? As you built this out, were there any surprises along the way that left you shaking your head or laughing at the unexpected AI behaviors? We felt a major lack of proof - Anton kept asking for pudding - that any of the agentic SOC vendors we saw at RSA had actually achieved anything beyond hype! When it comes to your org, how are you measuring agent success? What are the key metrics you are using right now?
Resources: EP238 Google Lessons for Using AI Agents for Securing Our Enterprise EP242 The AI SOC: Is This The Automation We've Been Waiting For? EP249 Data First: What Really Makes Your SOC 'AI Ready'? EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI "Simple to Ask: Is Your SOC AI Ready? Not Simple to Answer!" blog "How Google Does It: Building AI agents for cybersecurity and defense" blog Company annual report to look for risk "How to Win Friends and Influence People" by Dale Carnegie "Will It Make the Boat Go Faster?" book
Guest: Ari Herbert-Voss, CEO at RunSybil
Topics: The market already has Breach and Attack Simulation (BAS) for testing known TTPs. You're calling this "AI-powered" red teaming. Is this just a fancy LLM stringing together known attacks, or is there a genuine agent here that can discover a truly novel attack path that a human hasn't scripted for it? Let's talk about the "so what?" problem. Pentest reports are famous for becoming shelf-ware. How do you turn a complex AI finding into an actionable ticket for a developer, and more importantly, how do you help a CISO decide which of the thousand "criticals" to actually fix first? You're asking customers to unleash a "hacker AI" in their production environment. That's terrifying. What are the "do no harm" guardrails? How do you guarantee your AI won't accidentally rm -rf a critical server or cause a denial of service while it's "exploring"? You mentioned the AI is particularly good at finding authentication bugs. Why that specific category? What's the secret sauce there, and what's the reaction from customers when you show them those types of flaws? Is this AI meant to replace a human red teamer, or make them better? Does it automate the boring stuff so experts can focus on creative business logic attacks, or is the ultimate goal to automate the entire red team function away? So, is this just about finding holes, or are you closing the loop for the blue team? Can the attack paths your AI finds be automatically translated into high-fidelity detection rules? Is the end goal a continuous purple team engine that's constantly training our defenses? Also, what about fixing? What makes your findings more fixable? What will happen to red team testing in 2-3 years if this technology gets better?
Resources: Kim Zetter Zero Day blog EP230 AI Red Teaming: Surprises, Strategies, and Lessons from Google EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes? EP68 How We Attack AI? Learn More at Our RSA Panel! EP71 Attacking Google to Defend Google: How Google Does Red Team
Guest: Balazs Scheidler, CEO at Axoflow, original founder of syslog-ng
Topics: Are we really moving toward "access to security data" and away from "centralizing the data"? How do we detect without the same storage for all logs? Is the data pipeline a part of SIEM or is it standalone? Will this just collapse into SIEM soon? What were the issues with log pipelines in the past? What about enrichment? Why do it in a pipeline, and not in a SIEM? We are unable to share enough practices between security teams. How are we fixing that? Are pipelines part of the answer? Do you have a piece of advice for people who want to do more than save on their SIEM costs?
Resources: EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective EP190 Unraveling the Security Data Fabric: Need, Benefits, and Futures EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines Axoflow podcast and Anton on it "Decoupled SIEM: Where I Think We Are Now?" blog "Decoupled SIEM: Brilliant or Stupid?" blog "Output-driven SIEM — 13 years later" blog
Guest: Monzy Merza, co-founder and CEO at Crogl
Topics: We often hear about the aspirational idea of an "IronMan suit" for the SOC—a system that empowers analysts to be faster and more effective. What does this ideal future of security operations look like from your perspective, and what are the primary obstacles preventing SOCs from achieving it today? You've also raised a metaphor of AI in the SOC as a "Dr. Jekyll and Mr. Hyde" situation. Could you walk us through what you see as the "Jekyll"—the noble, beneficial promise of AI—and what are the factors that can turn it into the dangerous "Mr. Hyde"? Let's drill down into the heart of the "Mr. Hyde" problem: the data. Many believe that AI can fix a team's messy data, but you've noted that "it's all about the data, duh." What's the story? "AI ready SOC" - what is the foundational work a SOC needs to do to ensure their data is AI-ready, and what happens when they skip this step? And is there anything we can do to use AI to help with this foundational problem? How do we measure progress towards the AI SOC? What gets better at what time? How would we know? What SOC metrics will show improvement? Will anything get worse?
Resources: EP242 The AI SOC: Is This The Automation We've Been Waiting For? EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC EP227 AI-Native MDR: Betting on the Future of Security Operations? EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI EP238 Google Lessons for Using AI Agents for Securing Our Enterprise "Simple to Ask: Is Your SOC AI Ready? Not Simple to Answer!" blog Nassim Taleb "Antifragile" book "AI Superpowers" book "Attention Is All You Need" paper