Cloud Security Podcast by Google

Author: Anton Chuvakin

Subscribed: 211 | Played: 5,652


Cloud Security Podcast by Google focuses on security in the cloud, delivering security from the cloud, and all things at the intersection of security and cloud. Of course, we will also cover what we are doing in Google Cloud to help keep our users' data safe and workloads secure.

We’re going to do our best to avoid security theater, and cut to the heart of real security questions and issues. Expect us to question threat models and ask if something is done for the data subject’s benefit or just for organizational benefit.

We hope you’ll join us if you’re interested in where technology overlaps with process and bumps up against organizational design. We’re hoping to attract listeners who are happy to hear conventional wisdom questioned, and who are curious about what lessons we can and can’t keep as the world moves from on-premises computing to cloud computing.
178 Episodes
Guests: Omar ElAhdan, Principal Consultant, Mandiant, Google Cloud; Will Silverstone, Senior Consultant, Mandiant, Google Cloud Topics: Most organizations you see use both cloud and on-premises environments. What are the most common challenges organizations face in securing their hybrid cloud environments? You do IR, so in your experience, what are the top 5 mistakes organizations make that lead to cloud incidents? How and why do organizations get the attack surface wrong? Are there pillars of attack surface? We talk a lot about how IAM matters in the cloud. Is it true that AD is what gets you in many cases, even for other clouds? What is your best cloud incident preparedness advice for organizations that are new to cloud and still use on-prem as well? Resources: Next 2024 LIVE Video of this episode / LinkedIn version (sorry for the audio quality!) “Lessons Learned from Cloud Compromise” podcast at The Defender’s Advantage “Cloud compromises: Lessons learned from Mandiant investigations” in 2023 from Next 2024 EP174 How to Measure and Improve Your Cloud Incident Response Readiness: A New Framework EP103 Security Incident Response and Public Cloud - Exploring with Mandiant EP162 IAM in the Cloud: What it Means to Do It 'Right' with Kat Traxler
Guest: Seth Vargo, Principal Software Engineer responsible for Google's use of the public cloud, Google Topics: Google uses the public cloud, no way, right? Which one? Oh, yeah, I guess this is obvious: GCP, right? Where are we like other clients of GCP?  Where are we not like other cloud users? Do we have any unique cloud security technology that we use that others may benefit from? How does our cloud usage inform our cloud security products? So is our cloud use profile similar to cloud natives or traditional companies? What are some of the most interesting cloud security practices and controls that we use that are usable by others? How do we make them work at scale?  Resources: EP12 Threat Models and Cloud Security (previous episode with Seth) EP66 Is This Binary Legit? How Google Uses Binary Authorization and Code Provenance EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics IAM Deny Seth Vargo blog “Attention Is All You Need” paper (yes, that one)
Guest: Crystal Lister, Technical Program Manager, Google Cloud Security Topics: Your background can be sheepishly called “public sector”, what’s your experience been transitioning from public to private? How did you end up here doing what you are doing? We imagine you learned a lot from what you just described – how’s that impacted your work at Google? How have you seen risk management practices and outcomes differ? You now lead Google Threat Horizons reports, do you have a vision for this? How does your past work inform it? Given the prevalence of ransomware attacks, many organizations are focused on external threats. In your experience, does the risk of insider threats still hold significant weight? What type of company needs a dedicated and separate insider threat program? Resources: Video on YouTube Google Cybersecurity Action Team Threat Horizons Report #9 Is Out! Google Cybersecurity Action Team site for previous Threat Horizons Reports EP112 Threat Horizons - How Google Does Threat Intelligence Psychology of Intelligence Analysis by Richards J. Heuer The Coming Wave by Mustafa Suleyman  Visualizing Google Cloud: 101 Illustrated References for Cloud Engineers and Architects  
Guest: Angelika Rohrer, Sr. Technical Program Manager, Cyber Security Response at Alphabet Topics: Incident response (IR) is by definition “reactive”, but ultimately incident prep determines your IR success. What are the broad areas where one needs to prepare? You have created a new framework for measuring how ready you are for an incident; what is the approach you took to create it? Can you elaborate on the core principles behind the Continuous Improvement (CI) Framework for incident response? Why is continuous improvement crucial for effective incident response, especially in cloud environments? Can’t you just make a playbook and use it? How do you overcome the desire to focus on the easy metrics and move to more valuable ones? What do you think Google does best in this area? Can you share examples of how the CI Framework could have helped prevent or mitigate a real-world cloud security incident? How can other organizations practically implement the CI Framework to enhance their incident response capabilities after they read the paper? Resources: “How do you know you are ‘Ready to Respond’?” paper EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil EP103 Security Incident Response and Public Cloud - Exploring with Mandiant EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics EP98 How to Cloud IR or Why Attackers Become Cloud Native Faster?
Guest: Shan Rao, Group Product Manager, Google Topics: What are the unique challenges when securing AI for cloud environments, compared to traditional IT systems? Your talk covers 5 risks; why did you pick these five? What are the five, and are these the worst? Some of the mitigations seem the same for all risks. What are the popular SAIF mitigations that cover more of the risks? Can we move quickly and securely with AI? How? What future trends and developments do you foresee in the field of securing AI for cloud environments, and how can organizations prepare for them? Do you think in 2-3 years AI security will be a separate domain or a part of … application security? Data security? Cloud security? Resources: Video (LinkedIn, YouTube) [live audio is not great in these] “A cybersecurity expert's guide to securing AI products with Google SAIF” presentation SAIF Site “To securely build AI on Google Cloud, follow these best practices” (paper) “Secure AI Framework (SAIF): A Conceptual Framework for Secure AI Systems” resources Corey Quinn on X (long story why this is here… listen to the episode)
Guests: None Topics: What have we seen at RSA 2024? Which buzzwords are rising (AI! AI! AI!) and which ones are falling (hi XDR)? Is this really all about AI? Is this all marketing? Security platforms or focused tools, who is winning at RSA? Anything fun going on with SecOps? Is cloud security still largely about CSPM? Any interesting presentations spotted? Resources: EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side (RSA 2024 episode 1 of 2) “From Assistant to Analyst: The Power of Gemini 1.5 Pro for Malware Analysis” blog “Decoupled SIEM: Brilliant or Stupid?” blog “Introducing Google Security Operations: Intel-driven, AI-powered SecOps” blog “Advancing the art of AI-driven security with Google Cloud” blog
Guest: Elie Bursztein, Google DeepMind Cybersecurity Research Lead, Google  Topics: Given your experience, how afraid or nervous are you about the use of GenAI by the criminals (PoisonGPT, WormGPT and such)? What can a top-tier state-sponsored threat actor do better with LLM? Are there “extra scary” examples, real or hypothetical? Do we really have to care about this “dangerous capabilities” stuff (CBRN)? Really really? Why do you think that AI favors the defenders? Is this a long term or a short term view? What about vulnerability discovery? Some people are freaking out that LLM will discover new zero days, is this a real risk?  Resources: “How Large Language Models Are Reshaping the Cybersecurity Landscape” RSA 2024 presentation by Elie (May 6 at 9:40AM) “Lessons Learned from Developing Secure AI Workflows” RSA 2024 presentation by Elie (May 8, 2:25PM) EP50 The Epic Battle: Machine Learning vs Millions of Malicious Documents EP40 2021: Phishing is Solved? EP135 AI and Security: The Good, the Bad, and the Magical EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It PyRIT LLM red-teaming tool Accelerating incident response using generative AI Threat Actors are Interested in Generative AI, but Use Remains Limited OpenAI’s Approach to Frontier Risk  
Guest: Payal Chakravarty, Director of Product Management, Google SecOps, Google Cloud Topics: What are the different use cases for GenAI in security operations and how can organizations prioritize them for maximum impact to their organization? We’ve heard a lot of worries from people that GenAI will replace junior team members–how do you see GenAI enabling more people to be part of the security mission? What are the challenges and risks associated with using GenAI in security operations? We’ve been down the road of automation for SOCs before–UEBA and SOAR both claimed it–and AI looks a lot like those but with way more matrix math. What are we going to get right this time that we didn’t quite live up to last time(s) around? Imagine a SOC or a D&R team of 2029. What AI-based magic is routine at this time? What new things are done by AI? What do humans do? Resources: Live video (LinkedIn, YouTube) [live audio is not great in these] Practical use cases for AI in security operations, Cloud Next 2024 session by Payal EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It EP169 Google Cloud Next 2024 Recap: Is Cloud an Island, So Much AI, Bots in SecOps 15 must-attend security sessions at Next '24
Guests:  no guests (just us!) Topics: What are some of the fun security-related launches from Next 2024 (sorry for our brief “marketing hat” moment!)? Any fun security vendors we spotted “in the clouds”? OK, what are our favorite sessions? Our own, right? Anything else we had time to go to? What are the new security ideas inspired by the event (you really want to listen to this part! Because “freatures”...) Any tricky questions at the end? Resources: Live video (LinkedIn, YouTube) [live audio is not great in these] 15 must-attend security sessions at Next '24 Cloud CISO Perspectives: 20 major security announcements from Next ‘24 EP137 Next 2023 Special: Conference Recap - AI, Cloud, Security, Magical Hallway Conversations (last year!) EP136 Next 2023 Special: Building AI-powered Security Tools - How Do We Do It? EP90 Next Special - Google Cybersecurity Action Team: One Year Later! A cybersecurity expert's guide to securing AI products with Google SAIF Next 2024 session How AI can transform your approach to security Next 2024 session
Guests: Umesh Shankar, Distinguished Engineer, Chief Technologist for Google Cloud Security; Scott Coull, Head of Data Science Research, Google Cloud Security Topics: What does it mean to “teach AI security”? How did we make SecLM? And also: why did we make SecLM? What can a “security trained LLM” do better vs a regular LLM? Does making it better at security make it worse at other things that we care about? What can a security team do with it today? What are the “starter use cases” for SecLM? What has been the feedback so far in terms of impact - both from practitioners but also from team leaders? Are we seeing the limits of LLMs for our use cases? Is the “LLM is not magic” realization finally dawning? Resources: “How to tackle security tasks and workflows with generative AI” (Google Cloud Next 2024 session on SecLM) EP136 Next 2023 Special: Building AI-powered Security Tools - How Do We Do It? EP144 LLMs: A Double-Edged Sword for Cloud Security? Weighing the Benefits and Risks of Large Language Models Supercharging security with generative AI Secure, Empower, Advance: How AI Can Reverse the Defender’s Dilemma? Considerations for Evaluating Large Language Models for Cybersecurity Tasks Introducing Google’s Secure AI Framework Deep Learning Security and Privacy Workshop Security Architectures for Generative AI Systems ACM Workshop on Artificial Intelligence and Security Conference on Applied Machine Learning in Information Security
Speakers: Maria Riaz, Cloud Counter-Abuse, Engineering Lead, Google Cloud Topics: What is “counter abuse”? Is this the same as security? What does counter-abuse look like for GCP? What are the popular abuse types we face? Do people use stolen cards to get accounts and then violate the terms? How do we deal with this, generally? Beyond core technical skills, what are some of the relevant competencies for working in this space that would appeal to a diverse audience? You have worked in academia and industry. What similarities or differences have you observed? Resources / reading: Video EP165 Your Cloud Is Not a Pet - Decoding 'Shifting Left' for Cloud Security EP161 Cloud Compliance: A Lawyer - Turned Technologist! - Perspective on Navigating the Cloud “Art of War” by Sun Tzu “Dare to Lead” by Brene Brown "Multipliers" by Liz Wiseman
Guests: Evan Gilman, co-founder and CEO of SPIRL; Eli Nesterov, co-founder and CTO of SPIRL Topics: Today we have IAM, zero trust and security made easy. With that intro, could you give us the 30-second version of what a workload identity is and why people need them? What’s so spiffy about SPIFFE anyway? What’s different between this and micro-segmentation of your network–why is one better or worse? You call your book “Solving the Bottom Turtle”; could you tell us what that means? What are the challenges you’re seeing large organizations run into when adopting this approach at scale? Of all the things a CISO could prioritize, why should this one get added to the list? What makes this, which is so core to our internal security model–ripe for the outside world? How do people do it now, and what gets thrown away when you deploy SPIFFE? Are there alternatives? SPIFFE is interesting, yet can a startup really “solve for the bottom turtle”? Resources: SPIFFE and SPIRL “Solving the Bottom Turtle” book [PDF, free] “Surely You're Joking, Mr. Feynman!” book [also, one of Anton’s faves for years!] “Zero Trust Networks” book Workload Identity Federation in GCP
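For listeners new to SPIFFE: the workload identity discussed in this episode is expressed as a SPIFFE ID, a URI of the form spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt;. A minimal Python sketch of a structural check (simplified from the SPIFFE specification, not a complete validator) could look like:

```python
from urllib.parse import urlparse

def is_valid_spiffe_id(spiffe_id: str) -> bool:
    """Rough structural check of a SPIFFE ID (spiffe://<trust-domain>/<path>).

    Simplified from the SPIFFE spec: scheme must be 'spiffe', a non-empty
    trust domain is required, and ports, userinfo, query strings, and
    fragments are not allowed.
    """
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe":
        return False
    if not parsed.netloc or parsed.port is not None or "@" in parsed.netloc:
        return False
    if parsed.query or parsed.fragment:
        return False
    # Trust domain: lowercase alphanumerics plus '.', '-', '_'
    allowed = set("abcdefghijklmnopqrstuvwxyz0123456789.-_")
    return set(parsed.netloc) <= allowed

# Hypothetical identity an agent might issue to a payments workload:
print(is_valid_spiffe_id("spiffe://prod.example.com/payments/api"))  # True
print(is_valid_spiffe_id("https://prod.example.com/payments"))       # False
```

In real deployments this ID arrives inside a cryptographically verifiable SVID document issued by the workload API, which is the part that actually "solves the bottom turtle"; the sketch only covers the naming convention.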
Guest: Ahmad Robinson, Cloud Security Architect, Google Cloud Topics: You’ve done a BlackHat webinar where you discuss a Pets vs Cattle mentality when it comes to cloud operations. Can you explain this mentality and how it applies to security? What in your past led you to these insights? Tell us more about your background and your journey to Google. How did that background contribute to your team? One term that often comes up on the show and with our customers is 'shifting left.' Could you explain what 'shifting left' means in the context of cloud security? What’s hard about shift left, and where do orgs get stuck too far right? A lot of “cloud people” talk about IaC and PaC but the terms and the concepts are occasionally confusing to those new to cloud. Can you briefly explain Policy as Code and its security implications? Does PaC help or hurt security? Resources: “No Pets Allowed - Mastering The Basics Of Cloud Infrastructure” webinar EP33 Cloud Migrations: Security Perspectives from The Field EP126 What is Policy as Code and How Can It Help You Secure Your Cloud Environment? EP138 Terraform for Security Teams: How to Use IaC to Secure the Cloud
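For those new to the Policy as Code idea discussed in this episode: the core move is expressing a security policy as an executable check run against machine-readable resource state, instead of a document humans enforce by hand. A toy Python sketch (the resource shape and field names here are hypothetical, not a real GCP API response format):

```python
# Toy policy-as-code check: flag storage buckets whose IAM bindings grant
# access to the special public principals. The inventory dicts below use a
# hypothetical, simplified shape purely for illustration.
PUBLIC_PRINCIPALS = {"allUsers", "allAuthenticatedUsers"}

def find_public_buckets(buckets: list[dict]) -> list[str]:
    """Return names of buckets that violate the 'no public access' policy."""
    violations = []
    for bucket in buckets:
        for binding in bucket.get("iam_bindings", []):
            if PUBLIC_PRINCIPALS & set(binding.get("members", [])):
                violations.append(bucket["name"])
                break  # one public binding is enough to flag the bucket
    return violations

inventory = [
    {"name": "team-logs",
     "iam_bindings": [{"role": "roles/storage.objectViewer",
                       "members": ["allUsers"]}]},
    {"name": "internal-data",
     "iam_bindings": [{"role": "roles/storage.objectViewer",
                       "members": ["group:eng@example.com"]}]},
]
print(find_public_buckets(inventory))  # ['team-logs']
```

Because the policy is code, it can run in CI against planned infrastructure changes (the "shift left" the episode covers) rather than only auditing what is already deployed.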
Guest: Jennifer Fernick, Senior Staff Security Engineer and UTL, Google Topics: Since one of us (!) doesn't have a PhD in quantum mechanics, could you explain what a quantum computer is and how we know they are on a credible path towards being real threats to cryptography? How soon do we need to worry about this one? We’ve heard that quantum computers are more of a threat to asymmetric/public key crypto than symmetric crypto. First off, why? And second, what does this difference mean for defenders? Why (how) are we sure this is coming? Are we mitigating a threat that is perennially 10 years ahead and then vanishes due to some other broad technology change? What is a post-quantum algorithm anyway? If we’re baking new key exchange crypto into our systems, how confident are we that we are going to be resistant to both quantum and traditional cryptanalysis? Why does NIST think it's time to be doing the PQC thing now? Where is the rest of the industry on this evolution? How can a person tell the difference here between reality and snake oil? I think Anton and I both responded to your initial email with a heavy dose of skepticism, and probably more skepticism than it deserved, so you get the rare on-air apology from both of us! Resources: Securing tomorrow today: Why Google now protects its internal communications from quantum threats How Google is preparing for a post-quantum world NIST PQC standards PQ Crypto conferences “Quantum Computation & Quantum Information” by Nielsen & Chuang book “Quantum Computing Since Democritus” by Scott Aaronson book EP154 Mike Schiffman: from Blueboxing to LLMs via Network Security at Google
Guest: Phil Venables, Vice President, Chief Information Security Officer (CISO) @ Google Cloud  Topics:  You had this epic 8 megatrends idea in 2021, where are we now with them? We now have 9 of them, what made you add this particular one (AI)? A lot of CISOs fear runaway AI. Hence good governance is key! What is your secret of success for AI governance?  What questions are CISOs asking you about AI? What questions about AI should they be asking that they are not asking? Which one of the megatrends is the most contentious based on your presenting them worldwide? Is cloud really making the world of IT simpler (megatrend #6)? Do most enterprise cloud users appreciate the software-defined nature of cloud (megatrend #5) or do they continue to fight it? Which megatrend is manifesting the most strongly in your experience? Resources: Megatrends drive cloud adoption—and improve security for all and infographic “Keynote | The Latest Cloud Security Megatrend: AI for Security” “Lessons from the future: Why shared fate shows us a better cloud roadmap” blog and shared fate page SAIF page “Spotlighting ‘shadow AI’: How to protect against risky AI practices” blog EP135 AI and Security: The Good, the Bad, and the Magical EP47 Megatrends, Macro-changes, Microservices, Oh My! Changes in 2022 and Beyond in Cloud Security Secure by Design by CISA  
Guest: Kat Traxler, Security Researcher, TrustOnCloud Topics: What is your reaction to “in the cloud you are one IAM mistake away from a breach”? Do you like it or do you hate it? A lot of people say “in the cloud, you must do IAM ‘right’”. What do you think that means? What is the first or the main idea that comes to your mind when you hear it? How have you seen the CSPs take different approaches to IAM? What does it mean for the cloud users? Why do people still screw up IAM in the cloud so badly after years of trying? Deeper, why do people still screw up resource hierarchy and resource management? Are the identity sins of cloud IAM users truly the sins of the creators? How did the "big 3" get it wrong and how does that continue to manifest today? Your best cloud IAM advice is “assign roles at the lowest resource-level possible”; please explain this one? Where is the magic? Resources: Video (LinkedIn, YouTube) Kat’s blog “Diving Deeply into IAM Policy Evaluation” blog “Complexity: a Guided Tour” book EP141 Cloud Security Coast to Coast: From 2015 to 2023, What's Changed and What's the Same? EP129 How CISO Cloud Dreams and Realities Collide
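The "assign roles at the lowest resource-level possible" advice follows from how cloud IAM bindings are inherited down the resource hierarchy: a role granted on a parent is effective on every descendant resource. A simplified Python model (hypothetical resource names; not GCP's actual policy-evaluation logic) illustrates why a project-level grant is much broader than a bucket-level one:

```python
# Simplified model of IAM inheritance down a resource hierarchy.
# A role granted on a parent is effective on every descendant resource.
# Illustration only — real cloud IAM evaluation has many more rules.
PARENTS = {                      # child -> parent (hypothetical hierarchy)
    "bucket:logs": "project:prod",
    "bucket:secrets": "project:prod",
    "project:prod": "org:example",
}

def effective_roles(bindings: dict, principal: str, resource: str) -> set[str]:
    """Union of roles granted to `principal` on `resource` or any ancestor."""
    roles = set()
    node = resource
    while node:
        for role, members in bindings.get(node, {}).items():
            if principal in members:
                roles.add(role)
        node = PARENTS.get(node)   # walk up the hierarchy
    return roles

bindings = {
    # Broad grant at the project level...
    "project:prod": {"roles/storage.admin": {"alice"}},
    # ...vs. a narrow grant at the lowest level possible.
    "bucket:logs": {"roles/storage.objectViewer": {"bob"}},
}
# Alice's project-level role silently reaches bucket:secrets too;
# Bob's bucket-level role does not.
print(effective_roles(bindings, "alice", "bucket:secrets"))  # {'roles/storage.admin'}
print(effective_roles(bindings, "bob", "bucket:secrets"))    # set()
```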
Guest: Victoria Geronimo, Cloud Security Architect, Google Cloud Topics: You work with technical folks at the intersection of compliance, security, and cloud. So what do you do, and where do you find the biggest challenges in communicating across those boundaries? How does cloud make compliance easier? Does it ever make compliance harder? What is your best advice to organizations that approach cloud compliance as they did for the 1990s data centers and classic IT? What has been the most surprising compliance challenge you’ve helped teams debug in your time here? You also work on standards development – can you tell us about how you got into that and what’s been surprising in that for you? We often say on this show that an organization’s ability to threat model is only as good as their team’s perspectives are diverse: how has your background shaped your work here? Resources: Video (YouTube) EP14 Making Compliance Cloud-native EP25 Beyond Compliance: Cloud Security in Europe Fordham University Law and Technology site IAPP site
Guest: Merritt Baer, Field CTO,  Lacework, ex-AWS, ex-USG Topics: How can organizations ensure that their security posture is maintained or improved during a cloud migration? Is cloud migration a risk reduction move? What are some of the common security challenges that organizations face during a cloud migration? Are there different gotchas between the three public clouds? What advice would you give to those security leaders who insist on lift/shift or on lift/shift first? How should security and compliance teams approach their engineering and DevOps colleagues to make sure things are starting on the right foot? In your view, what is the essence of a cloud-native approach to security? How can organizations ensure that their security posture scales as their cloud usage grows? Resources: Video (LinkedIn, YouTube) EP69 Cloud Threats and How to Observe Them EP138 Terraform for Security Teams: How to Use IaC to Secure the Cloud EP67 Cyber Defense Matrix and Does Cloud Security Have to DIE to Win? 9 Megatrends drive cloud adoption—and improve security for all Darknet Diaries podcast  
Guests: Emre Kanlikilicer, Senior Engineering Manager @ Google; Sophia Gu, Engineering Manager at Google Topics: Workspace makes the claim that, unlike other productivity suites available today, it’s architected for the modern threat landscape. That’s a big claim! What gives Google the ability to make this claim? Workspace environments would have many different types of data, some very sensitive. What are some of the common challenges with controlling access to data and protecting data in hybrid work? What are some of the common mistakes you see customers making with Workspace security? What are some of the ways context-aware access and DLP (now SDP) help with this? What are the cool future plans for DLP and CAA? Resources: Google Workspace blog & Workspace Update blog EP99 Google Workspace Security: from Threats to Zero Trust CISA Zero Trust Maturity Model 2.0
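Context-aware access, mentioned in this episode, conditions authorization on attributes of the request (device state, network location) rather than on credentials alone. A toy Python sketch of the idea (the attributes and policy below are hypothetical illustrations, not the actual Workspace CAA policy model):

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class AccessContext:
    user: str
    device_managed: bool   # e.g., a corp-enrolled, policy-compliant device
    source_ip: str

# Hypothetical policy: sensitive apps require a managed device
# AND a request originating from the corporate network range.
CORP_NET = ip_network("203.0.113.0/24")  # documentation range, stand-in only

def allow_sensitive_app(ctx: AccessContext) -> bool:
    """Grant access only when both context conditions hold."""
    return ctx.device_managed and ip_address(ctx.source_ip) in CORP_NET

# Same user, same credentials — the decision changes with the context:
print(allow_sensitive_app(AccessContext("ana", True, "203.0.113.7")))   # True
print(allow_sensitive_app(AccessContext("ana", True, "198.51.100.9")))  # False
```

The point of the sketch is that the user never changes between the two calls; only the request context does, which is what distinguishes this model from plain credential checks.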
Guest: Jason Solomon, Security Engineer, Google Topics: Could you share a bit about when you get pulled into incidents and what are your goals when you are? How does that change in the cloud? How do you establish a chain of custody and prove it for law enforcement, if needed? What tooling do you rely on for cloud forensics and is that tooling available to "normal people"? How do we at Google know when it’s time to call for help, and how should our customers know that it’s time? Can I quote Ray Parker Jr. and ask, who you gonna call? What’s your advice to a security leader on how to “prepare for the inevitable” in this context? Cloud forensics - is it easier or harder than the 1990s classic forensics? Resources: EP157 Decoding CDR & CIRA: What Happens When SecOps Meets Cloud EP98 How to Cloud IR or Why Attackers Become Cloud Native Faster? EP103 Security Incident Response and Public Cloud - Exploring with Mandiant Google SRE Workbook (Ch 9) GRR Cloud Logging LibCloudForensics, Turbinia, Timesketch tools