Guest: Ari Herbert-Voss, CEO at RunSybil

Topics:
- The market already has Breach and Attack Simulation (BAS) for testing known TTPs. You're calling this 'AI-powered' red teaming. Is this just a fancy LLM stringing together known attacks, or is there a genuine agent here that can discover a truly novel attack path that a human hasn't scripted for it?
- Let's talk about the 'so what?' problem. Pentest reports are famous for becoming shelfware. How do you turn a complex AI finding into an actionable ticket for a developer, and, more importantly, how do you help a CISO decide which of the thousand 'criticals' to actually fix first?
- You're asking customers to unleash a 'hacker AI' in their production environment. That's terrifying. What are the 'do no harm' guardrails? How do you guarantee your AI won't accidentally rm -rf a critical server or cause a denial of service while it's 'exploring'?
- You mentioned the AI is particularly good at finding authentication bugs. Why that specific category? What's the secret sauce there, and what's the reaction from customers when you show them those types of flaws?
- Is this AI meant to replace a human red teamer, or make them better? Does it automate the boring stuff so experts can focus on creative business logic attacks, or is the ultimate goal to automate the entire red team function away?
- So, is this just about finding holes, or are you closing the loop for the blue team? Can the attack paths your AI finds be automatically translated into high-fidelity detection rules? Is the end goal a continuous purple team engine that's constantly training our defenses? Also, what about fixing? What makes your findings more fixable?
- What will happen to red team testing in 2-3 years if this technology gets better?

Resources:
- Kim Zetter "Zero Day" blog
- EP230 AI Red Teaming: Surprises, Strategies, and Lessons from Google
- EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes?
- EP68 How We Attack AI? Learn More at Our RSA Panel!
- EP71 Attacking Google to Defend Google: How Google Does Red Team
Guest: Balazs Scheidler, CEO at Axoflow, original founder of syslog-ng

Topics:
- Are we really moving toward "access to security data" and away from "centralizing the data"? How do we detect without the same storage for all logs?
- Is the data pipeline part of SIEM, or is it standalone? Will this just collapse into SIEM soon?
- Tell us about the issues with log pipelines in the past.
- What about enrichment? Why do it in a pipeline, and not in a SIEM?
- We are unable to share enough practices between security teams. How are we fixing it? Are pipelines part of the answer?
- Do you have a piece of advice for people who want to do more than save on their SIEM costs?

Resources:
- EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective
- EP190 Unraveling the Security Data Fabric: Need, Benefits, and Futures
- EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines
- Axoflow podcast and Anton on it
- "Decoupled SIEM: Where I Think We Are Now?" blog
- "Decoupled SIEM: Brilliant or Stupid?" blog
- "Output-driven SIEM — 13 years later" blog
Guest: Monzy Merza, co-founder and CEO at Crogl

Topics:
- We often hear about the aspirational idea of an "IronMan suit" for the SOC—a system that empowers analysts to be faster and more effective. What does this ideal future of security operations look like from your perspective, and what are the primary obstacles preventing SOCs from achieving it today?
- You've also raised a metaphor of AI in the SOC as a "Dr. Jekyll and Mr. Hyde" situation. Could you walk us through what you see as the "Jekyll"—the noble, beneficial promise of AI—and what are the factors that can turn it into the dangerous "Mr. Hyde"?
- Let's drill down into the heart of the "Mr. Hyde" problem: the data. Many believe that AI can fix a team's messy data, but you've noted that "it's all about the data, duh." What's the story?
- "AI ready SOC": what is the foundational work a SOC needs to do to ensure their data is AI-ready, and what happens when they skip this step? And is there anything we can do to use AI to help with this foundational problem?
- How do we measure progress towards the AI SOC? What gets better at what time? How would we know? What SOC metrics will show improvement? Will anything get worse?

Resources:
- EP242 The AI SOC: Is This The Automation We've Been Waiting For?
- EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC
- EP227 AI-Native MDR: Betting on the Future of Security Operations?
- EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
- EP238 Google Lessons for Using AI Agents for Securing Our Enterprise
- "Simple to Ask: Is Your SOC AI Ready? Not Simple to Answer!" blog
- Nassim Taleb "Antifragile" book
- "AI Superpowers" book
- "Attention Is All You Need" paper
Guest: Jibran Ilyas, Director for Incident Response at Google Cloud

Topics:
- What is this tabletop thing? Please tell us about running a good security incident tabletop.
- Why are tabletops for incident response preparedness so amazingly effective yet rarely done well? This is cheap, easy, and useful, so why do so many fail to do it? Why are tabletops seen as a kind of elite pursuit?
- What's your favorite cloud-centric scenario for tabletop exercises? Ransomware? But there is little ransomware in the cloud, no?
- What are other good cloud tabletop scenarios?

Resources:
- EP60 Impersonating Service Accounts in GCP and Beyond: Cloud Security Is About IAM?
- EP179 Teamwork Under Stress: Expedition Behavior in Cybersecurity Incident Response
- EP222 From Post-IR Lessons to Proactive Security: Deconstructing Mandiant M-Trends
- EP177 Cloud Incident Confessions: Top 5 Mistakes Leading to Breaches from Mandiant
- EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics
- EP98 How to Cloud IR or Why Attackers Become Cloud Native Faster?
Guest: David Gee, Board Risk Advisor, Non-Executive Director & Author, former CISO

Topics:
- Drawing from the "Aspiring CIO and CISO" book's focus on continuous improvement, how have you seen the necessary skills, knowledge, experience, and behaviors for a CISO evolve, especially when guiding an organization through a transformation?
- Could you share lessons learned about leadership and organizational resilience during such a critical period, and how does that experience reshape your approach to future transformations?
- Many organizations are undergoing transformations, often heavily involving cloud technologies. From your perspective, what is the most crucial—and perhaps often overlooked—role a CISO plays in ensuring security is an enabler, not a roadblock, during such large-scale changes?
- Have you ever seen a CISO who is a cloud champion for the organization? Your best advice for a CISO meeting cloud for the first time?
- What is your best advice for a CISO meeting AI for the first time?
- How do you balance continuous self-improvement and development with the day-to-day pressures and responsibilities?

Resources:
- "A Day in the Life of a CISO: Personal Mentorship from 24+ Battle-Tested CISOs — Mentoring We Never Got" book
- "The Aspiring CIO and CISO: A career guide to developing leadership skills, knowledge, experience, and behavior" book
- EP201 Every CTO Should Be a CSTO (Or Else!) - Transformation Lessons from The Hoff
- EP101 Cloud Threat Detection Lessons from a CISO
- EP104 CISO Walks Into the Cloud: And The Magic Starts to Happen!
- EP129 How CISO Cloud Dreams and Realities Collide
- All CISO podcast episodes
- "Shadow Agents: A New Era of Shadow AI Risk in the Enterprise" blog
- "Blocking shadow agents won't work. Here's a more secure way forward" blog
Guest: Sumedh Thakar, President and CEO, Qualys

Topics:
- How has vulnerability management (VM) changed since Qualys was founded in 1999? What is different about VM today?
- Can we actually remediate vulnerabilities automatically at scale? Why did this work for you even though many expected it would not?
- Where does cloud fit into modern vulnerability management?
- How does AI help vulnerability management today? What is real?
- What is this Risk Operations Center (ROC) concept, and how does it help in vulnerability management?

Resources:
- 2025 DBIR Report
- Qualys ROC concept defined
- Qualys ROC-on conference
- "Shaping the Future of Cyber Risk Management" blog
- Qualys State of Cyber Risk Assessment Report
- EP109 How Google Does Vulnerability Management: The Not So Secret Secrets!
Guest: Rick Caccia, CEO and Co-Founder, Witness AI

Topics:
- In what ways is the current wave of enterprise AI adoption different from previous technology shifts? If we say "but it is different this time," then why?
- What is your take on "consumer grade AI for business" vs enterprise AI?
- A lot of this sounds a bit like the CASB era circa 2014. How is this different with AI?
- The concept of "routing prompts for risk and cost management" is intriguing. Can you elaborate on the architecture and specific AI engines Witness AI uses to achieve this, especially for large global corporations?
- What are you seeing in the identity space for AI access? Can you give us a rundown of the different tradeoffs teams are making when it comes to managing identities for agents?

Resources:
- EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams
- EP173 SAIF in Focus: 5 AI Security Risks and SAIF Mitigations
- EP84 How to Secure Artificial Intelligence (AI): Threats, Approaches, Lessons So Far
- Witness AI blog
- "Shadow Agents: A New Era of Shadow AI Risk in the Enterprise" blog
- "Blocking shadow agents won't work. Here's a more secure way forward" blog
- "Shadow AI Strikes Back: Enterprise AI Absent Oversight in the Age of Gen AI" blog
- "Cloud CISO Perspectives: How Google secures AI Agents" blog
- "The Soul of a New Machine" book
- "Emoji Attack: A Method for Misleading Judge LLMs in Safety Risk Detection" paper
Guest: Jon Oltsik, security researcher, ex-ESG analyst

Topics:
- You invented the concept of SOAPA – Security Operations & Analytics Platform Architecture. As we look towards SOAPA 2025, how do you see the ongoing debate between consolidating security around a single platform versus a more disaggregated, best-of-breed approach playing out? What are the key drivers for either strategy in today's complex environments? How can we have both "decoupling" and platformization going at the same time?
- With all the buzz around generative AI and agentic AI, how do you envision these technologies changing the future of the Security Operations Center (and SOAPA, of course)? Where do you see AI really working today in the SOC, and what is the proof of that actually happening? What does a realistic "AI SOC" look like in the next few years, and what are the practical implications for security teams?
- "Integration" is always a hot topic in security - and it has been for decades. Within the context of SOAPA and the adoption of advanced analytics, where do you see the most critical integration challenges today – whether it's vendor-centric ecosystems, strategic partnerships, or the push for open standards?

Resources:
- Jon Oltsik "The Cybersecurity Bridge" podcast (Anton on it)
- EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
- EP242 The AI SOC: Is This The Automation We've Been Waiting For?
- EP202 Beyond Tiered SOCs: Detection as Code and the Rise of Response Engineering
- EP180 SOC Crossroads: Optimization vs Transformation - Two Paths for Security Operations Center
- EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC
- EP73 Your SOC Is Dead? Evolve to Output-driven Detect and Respond!
- Daniel Suarez "Daemon" and "Delta-V" books
Guests: Cy Khormaee, CEO, AegisAI; Ryan Luo, CTO, AegisAI

Topics:
- What is the state of email security in 2025? Why start an email security company now?
- Is it true that there are new and accelerating AI threats to email?
- It sounds cliche, but do you really have to use good AI to fight bad AI?
- What did you learn from your time fighting abuse at scale at Google that is helping you now?
- How do you see the future of email security, and what role will AI play?

Resources:
- aegisai.ai
- EP40 2021: Phishing is Solved?
- EP41 Beyond Phishing: Email Security Isn't Solved
- EP28 Tales from the Trenches: Using AI for Gmail Security
- EP50 The Epic Battle: Machine Learning vs Millions of Malicious Documents
Guest: Augusto Barros, Principal Product Manager, Prophet Security, ex-Gartner analyst

Topics:
- What is your definition of "AI SOC"? What will AI change in a SOC? What will the post-AI SOC look like?
- What are the primary mechanisms by which AI SOC tools reduce attacker dwell time, and what challenges do they face in maintaining signal fidelity?
- Why would this wave of SOC automation (namely, AI SOC) work now, if it did not fully succeed before (SOAR)?
- How do we measure progress towards the AI SOC? What gets better at what time? How would we know? What SOC metrics will show improvement?
- What common misconceptions or challenges have organizations encountered during the initial stages of AI SOC adoption, and how can they be overcome?
- Do you have a timeline for SOC AI adoption? Sure, everybody wants AI alert triage. What's next? What's after that?

Resources:
- "State of AI in Security Operations 2025" report
- LinkedIn SOAR vs AI SOC argument post
- "Are AI SOC Solutions the Real Deal or Just Hype?"
- EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
- EP238 Google Lessons for Using AI Agents for Securing Our Enterprise
- EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
- "RSA 2025: AI's Promise vs. Security's Past — A Reality Check" blog
- "Noise: A Flaw in Human Judgment" book
- "Security Chaos Engineering" book (and Kelly episode)
- "A Brief Guide for Dealing with 'Humanless SOC' Idiots" blog
Guest: Rick Correa, Uber TL, Google SecOps, Google Cloud

Topics:
- On the 3rd anniversary of Curated Detections, you've grown from 70 rules to over 4700. Can you walk us through that journey? What were some of the key inflection points, and what have been the biggest lessons learned in scaling a detection portfolio so massively?
- Historically the SecOps Curated Detection content was opaque, which led to, understandably, a bit of customer friction. We've recently made nearly all of that content transparent and editable by users. What were the challenges in that transition?
- You make a distinction between "Detection-as-Code" and a more mature "Software Engineering" paradigm. What gets better for a security team when they move beyond just version control and a CI/CD pipeline and start incorporating things like unit testing, readability reviews, and performance testing for their detections?
- The idea of a "Goldilocks Zone" for detections is intriguing – not too many, not too few. How do you find that balance, and what are the metrics that matter when measuring the effectiveness of a detection program? You mentioned customer feedback is important, but a confusion matrix isn't possible. Why is that?
- You talk about enabling customers to use your "building blocks" to create their own detections. Can you give us a practical example of how a customer might use a building block for something like detecting VPN and Tor traffic to augment their security?
- You have started using LLMs for reviewing the explainability of human-generated metadata. Can you expand on that? What have you found are the ripe areas for AI in detection engineering, and can you share any anecdotes of where AI has succeeded and where it has failed?

Resources:
- EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective
- EP231 Beyond the Buzzword: Practical Detection as Code in the Enterprise
- EP181 Detection Engineering Deep Dive: From Career Paths to Scaling SOC Teams
- EP139 What is Chronicle? Beyond XDR and into the Next Generation of Security Operations
- EP123 The Good, the Bad, and the Epic of Threat Detection at Scale with Panther
- "Back to Cooking: Detection Engineer vs Detection Consumer, Again?" blog
- "On Trust and Transparency in Detection" blog
- "Detection Engineering Weekly" newsletter
- "Practical Threat Detection Engineering" book
Guest: Errol Weiss, Chief Security Officer (CSO) at Health-ISAC

Topics:
- Why is adding digital resilience crucial for enterprises? How do we make leaders shift from "just cybersecurity" to "digital resilience"?
- How to be the most resilient you can be given the resources? How to be the most resilient with the least amount of money?
- How to make yourself a smaller target? Smaller-target measures fit into what some call "basics," but "basic" hygiene is actually very hard for many. What are your top 3 hygiene tips that actually work?
- We are talking about under-resourced orgs, but some are much more under-resourced than others. What is your advice for those with an extreme shortage of security resources?
- Assessing vendor security: what is most important to consider today, in 2025? How not to be hacked via your vendor?

Resources:
- ISAC history (1998 PDD 63)
- CISA Known Exploited Vulnerabilities Catalog
- Brian Krebs blog
- Health-ISAC Annual Threat Report
- Health-ISAC Home
- Health Sector Coordinating Council Publications
- Health Industry Cybersecurity Practices 2023
- HHS Cyber Performance Goals (CPGs)
- 10 ways to make cyber-physical systems more resilient
- EP193 Inherited a Cloud? Now What? How Do I Secure It?
- EP65 Is Your Healthcare Security Healthy? Mandiant Incident Response Insights
- EP49 Lifesaving Tradeoffs: CISO Considerations in Moving Healthcare to Cloud
- EP233 Product Security Engineering at Google: Resilience and Security
- EP204 Beyond PCAST: Phil Venables on the Future of Resilience and Leading Indicators
Guest: Craig H. Rowland, Founder and CEO, Sandfly Security

Topics:
- When it comes to Linux environments – spanning on-prem, cloud, and even–gasp–hybrid setups – where are you seeing the most significant blind spots for security teams today?
- There's sometimes a perception that Linux is inherently more secure or less of a malware target than Windows. Could you break down some of the fundamental differences in how malware behaves on Linux versus Windows, and why that matters for defenders in the cloud?
- 'Living off the Land' isn't a new concept, but on Linux, it feels like attackers have a particularly rich set of native tools at their disposal. What are some of the more subtly abused but legitimate Linux utilities you're seeing weaponized in cloud attacks, and how does that complicate detection?
- When you weigh agent-based versus agentless monitoring in cloud and containerized Linux environments, what are the operational trade-offs and outcome trade-offs security teams really need to consider?
- SSH keys are the de facto keys to the kingdom in many Linux environments. Beyond just 'use strong passphrases,' what are the critical, often overlooked, risks associated with SSH key management, credential theft, and subsequent lateral movement that you see plaguing organizations, especially at scale in the cloud?
- What are the biggest operational hurdles teams face when trying to conduct incident response effectively and rapidly across such a distributed Linux environment, and what's key to overcoming them?

Resources:
- EP194 Deep Dive into ADR - Application Detection and Response
- EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines
Guest: Dominik Swierad, Senior PM, D&R AI and Sec-Gemini

Topics:
- When introducing AI agents to security teams at Google, what was your initial strategy to build trust and overcome the natural skepticism? Can you walk us through the very first conversations and the key concerns that were raised?
- With a vast array of applications, how did you identify and prioritize the initial use cases for AI agents within Google's enterprise security? What specific criteria made a use case a good candidate for early evaluation? Were there any surprising 'no-go' areas you discovered?
- Beyond simple efficiency gains, what were the key metrics and qualitative feedback mechanisms you used to evaluate the success of the initial AI agent deployments?
- What were the most significant hurdles you faced in transitioning from successful pilots to broader adoption of AI agents? How do you manage the inherent risks of autonomous agents, such as potential for errors or adversarial manipulation, within a live and critical environment like Google's?
- How has the introduction of AI agents changed the day-to-day responsibilities and skill requirements for Google's security engineers?
- From your unique vantage point of deploying defensive AI agents, what are your biggest concerns about how threat actors will inevitably leverage similar technologies?

Resources:
- EP235 The Autonomous Frontier: Governing AI Agents from Code to Courtroom
- EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
- EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps
- EP227 AI-Native MDR: Betting on the Future of Security Operations?
- EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil
Guest: Kim Albarella, Global Head of Security, TikTok

Questions:
- Security is part of your DNA. In your day to day at TikTok, what are some tips you'd share with users about staying safe online?
- Many regulations were written with older technologies in mind. How do you bridge the gap between these legacy requirements and the realities of a modern, microservices-based tech stack like TikTok's, ensuring both compliance and agility?
- You have a background in compliance and risk management. How do you approach demonstrating the effectiveness of security controls, not just their existence, especially given the rapid pace of change in both technology and regulations?
- TikTok operates on a global scale, facing a complex web of varying regulations and user expectations. How do you balance the need for localized compliance with the desire for a consistent global security posture? How do you avoid creating a fragmented and overly complex system, and what role does automation play in this balancing act?
- What strategies and metrics do you use to ensure auditability and provide confidence to stakeholders?
- We understand you've used TikTok videos for security training. Can you elaborate on how you've fostered a strong security culture internally, especially in such a dynamic environment?
- What is in your TikTok feed?

Resources:
- Kim on TikTok @securishe and TikTopTips
- EP214 Reconciling the Impossible: Engineering Cloud Systems for Diverging Regulations
- EP161 Cloud Compliance: A Lawyer - Turned Technologist! - Perspective on Navigating the Cloud
- EP14 Making Compliance Cloud-native
Guest: Manija Poulatova, Director of Security Engineering and Operations at Lloyd's Banking Group

Topics:
- SIEM migration is hard, and it can take ages. Yours was, given the scale and the industry, on the relatively short side at 9 months. What's been your experience so far with that, and what could have gone faster?
- Anton might be a "reformed" analyst, but I can't resist asking a three-legged stool question: of the people/process/technology aspects, which are the hardest for this transformation? What helped the most in solving your big challenges? Was there a process that people wanted to keep but that needed to go for the new tool?
- One thing we talked about was the plan to adopt composite alerting techniques and what we've been calling the "funnel model" for detection in Google SecOps. Could you share what that means and how your team is adopting it?
- There are a lot of moving parts in a D&R journey from a process and tooling perspective. How did you structure your plan, and why?
- It wouldn't be our show in 2025 if I didn't ask at least one AI question! What lessons do you have for other security leaders preparing their teams for the AI in SOC transition?

Resources:
- EP234 The SIEM Paradox: Logs, Lies, and Failing to Detect
- EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective
- EP231 Beyond the Buzzword: Practical Detection as Code in the Enterprise
- EP184 One Week SIEM Migration: Fact or Fiction?
- EP125 Will SIEM Ever Die: SIEM Lessons from the Past for the Future
- EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
- "Maverick" — Scorched Earth SIEM Migration FTW! blog
- "Hack the box" site
Guest: Anna Gressel, Partner at Paul, Weiss, one of the AI practice leads

Episode co-host: Marina Kaganovich, Office of the CISO, Google Cloud

Questions:
- Agentic AI and AI agents, with their promise of autonomous decision-making and learning capabilities, present a unique set of risks across various domains. What are some of the key areas of concern for you?
- What frameworks are most relevant to the deployment of agentic AI, and where are the potential gaps?
- What are you seeing in terms of how regulatory frameworks may need to be adapted to address the unique challenges posed by agentic AI?
- How about legal aspects: does traditional tort law or product liability apply? How does the autonomous nature of agentic AI challenge established legal concepts of liability and responsibility?
- The other related topic is knowing what agents "think" on the inside. So what are the key legal considerations for managing transparency and explainability in agentic AI decision-making?

Resources:
- Paul, Weiss Waking Up With AI (Apple, Spotify)
- Cloud CISO Perspectives: How Google secures AI Agents
- Securing the Future of Agentic AI: Governance, Cybersecurity, and Privacy Considerations
Guest: Svetla Yankova, Founder and CEO, Citreno

Topics:
- Why do so many organizations still collect logs yet don't detect threats? In other words, why is our industry spending more money than ever on SIEM tooling and still not "winning" against Tier 1 ... or even Tier 5 adversaries?
- What are the hardest parts about getting the right context into a SOC analyst's face when they're triaging and investigating an alert? Is it integration? SOAR playbook development? Data enrichment? All of the above?
- What are the organizational problems that keep organizations from getting the full benefit of the security operations tools they're buying?
- Top SIEM mistakes? Is it trying to migrate too fast? Is it accepting a too-slow migration? In other words, where are expectations tyrannical for customers? Have they changed much since 2015?
- Do you expect people to write their own detections? Detection engineering seems popular with elite clients and nobody else; what can we do?
- Do you think AI will change how we SOC (Tim: "SOC" is not a verb?) in the next 1-3-5 years? Do you think that AI SOC tech is repeating the mistakes SOAR vendors made 10 years ago? Are we making the same mistakes all over again? Are we making new mistakes?

Resources:
- EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
- EP231 Beyond the Buzzword: Practical Detection as Code in the Enterprise
- EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines
- EP202 Beyond Tiered SOCs: Detection as Code and the Rise of Response Engineering
- "RSA 2025: AI's Promise vs. Security's Past — A Reality Check" blog
- Citreno, The Backstory
- "Parenting Teens With Love And Logic" book (as a management book)
- "Security Correlation Then and Now: A Sad Truth About SIEM" blog (the classic from 2019)
Guest: Cristina Vintila, Product Security Engineering Manager, Google Cloud

Topics:
- Could you share insights into how Product Security Engineering approaches at Google have evolved, particularly in response to emerging threats (like Log4j in 2021)?
- You mentioned applying SRE best practices in detection and response, and overall in securing the Google Cloud products. How does Google balance high reliability and operational excellence with the needs of detection and response (D&R)?
- How does Google decide which data sources and tools are most critical for effective D&R? How do we deal with high volumes of data?

Resources:
- EP215 Threat Modeling at Google: From Basics to AI-powered Magic
- EP117 Can a Small Team Adopt an Engineering-Centric Approach to Cybersecurity?
- Podcast episodes on how Google does security
- EP17 Modern Threat Detection at Google
- EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil
- Google SRE book
- Google SRS book
Guest: Sarah Aoun, Privacy Engineer, Google

Topics:
- You have had a fascinating career since we [Tim] graduated from college together – you mentioned before we met that you've consulted with a literal world leader on his personal digital security footprint. Maybe tell us how you got into this field of helping organizations treat sensitive information securely, and how that led to helping keep targeted individuals secure?
- You also work as a privacy engineer on Fuchsia, Google's new operating system. How did you go from human rights and privacy to that?
- What are the key privacy considerations when designing an operating system for "ambient computing"? How do you design privacy into something like that? More importantly, not only "how do you do it," but how do you convince people that you did do it?
- When we talk about "higher risk" individuals, the definition can be broad. How can an average person, or someone working in a seemingly less sensitive role, better assess if they might be a higher-risk target? What are the subtle indicators?
- Thinking about the advice you give for personal security beyond passwords and multi-factor auth, how much of effective personal digital hygiene comes down to behavioral changes versus purely technical solutions?
- Given your deep understanding of both individual security needs and large-scale OS design, what's one thing you wish developers building cloud services or applications would fundamentally prioritize about user privacy?

Resources:
- Google privacy controls
- Advanced Protection Program