Future of Threat Intelligence
Author: Team Cymru
© Team Cymru
Description
Welcome to the Future of Threat Intelligence podcast, where we explore the transformative shift from reactive detection to proactive threat management. Join us as we engage with top cybersecurity leaders and practitioners, uncovering strategies that empower organizations to anticipate and neutralize threats before they strike. Each episode is packed with actionable insights, helping you stay ahead of the curve and prepare for the trends and technologies shaping the future.
110 Episodes
Deepfakes have moved well past the uncanny valley and into active threat operations, and Tom Cross, Head of Threat Research at GetReal, has the client-side case studies to back it up. Tom explains how North Korean IT worker infiltration campaigns have transformed HR and video conferencing from administrative functions into active attack surface, albeit one that most security teams aren't monitoring, logging, or ingesting into their SIEM.

Drawing on a long-running collaboration with a former West Point professor and intelligence officer, Tom also applies the military framework of tactical, operational, and strategic intelligence to cybersecurity, arguing that most CTI programs are really just lists of burned indicators. The actual value of IOCs, he contends, is retrospective: discovering you were communicating with a known-bad actor means you may still be compromised. He makes the case for connecting adversary intent models, red team findings, and vulnerability data into a unified predictive picture.

YouTube title: Your Zoom Call Is an Attack Surface

Topics discussed:
- How North Korean IT worker infiltration has converted HR processes and video conferencing into an active, unmonitored attack surface
- Voice-cloned peer impersonation via messaging apps, followed by deepfaked video calls and malware delivery
- Why deepfake audio attacks on IT help desk credential reset processes are among the most likely near-term vectors
- Biometric indicators of compromise and the significant false-positive risks that distinguish them from traditional IP or domain IOCs
- How the military intelligence framework of tactical, operational, and strategic analysis applies to CTI programs
- The strategic importance of retrospective IOC analysis versus forward-looking ingestion
- Why DPRK's financial motivation model expands their target set far beyond what traditional nation-state threat modeling would predict

Key Takeaways:
- Ingest video conferencing logs into your SIEM.
- Audit your remote credential reset process for social engineering resistance.
- Map red team findings and vulnerability data to specific adversary profiles rather than treating them as a generic remediation backlog.
- Implement retrospective IOC analysis alongside forward-looking blocking.
- Treat DPRK's financial motivation as an equalizer when assessing APT exposure.
- Build threat intelligence at the strategic layer by modeling adversary intent and objectives, not just cataloging observed TTPs.
- Apply extra care to biometric IOC sharing.
- Monitor employee working-hour patterns against claimed time zones as a behavioral indicator of potential employment fraud.
- Extend IOC taxonomy to include multimedia and biometric formats.

Listen to more episodes: Apple | Spotify | YouTube | Website
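The working-hours takeaway above reduces to a simple baseline check. This is an illustrative sketch only (the function name, business-hours window, and thresholds are ours, not from the episode), and a high score is a weak signal that needs corroboration such as VPN use, shift work, or travel:

```python
from datetime import datetime, timezone, timedelta

def offhours_fraction(login_times_utc, claimed_utc_offset_hours, workday=(9, 18)):
    """Fraction of logins falling outside business hours in the
    employee's claimed locale. Higher values suggest the claimed
    time zone may not match where the person actually works."""
    start, end = workday
    tz = timezone(timedelta(hours=claimed_utc_offset_hours))
    outside = sum(
        1 for t in login_times_utc
        if not (start <= t.astimezone(tz).hour < end)
    )
    return outside / len(login_times_utc)

# Example: an "employee" claiming US Eastern (UTC-5) whose logins
# all land between 01:30 and 04:30 local time.
logins = [datetime(2024, 5, 1, h, 30, tzinfo=timezone.utc) for h in (6, 7, 8, 9)]
score = offhours_fraction(logins, claimed_utc_offset_hours=-5)  # -> 1.0
```

In practice this would run over authentication logs already in the SIEM, alongside the video conferencing telemetry the episode recommends ingesting.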
Vincent Passaro, Engineering Manager at Stripe Security, didn't get there through a slide deck or a company mandate. He got there through a shower thought that followed a conversation with a friend, and it upended how he'd been thinking about building, leading, and even measuring his own team.

The reframe was simple: not "we're all going to be software developers," but "we're going to be product owners." That single pivot changed everything downstream, including how he approached prototyping, how he set success criteria for agents, and how he coached his team out of chasing bugs and into defining outcomes.

In this episode, Will and Vince trace both of their "pin drop" moments, the specific conversations that shifted their mental models, and then try to articulate what that shift actually means for CTI analysts and security engineers working real problems today.

They talk about what it felt like to stop asking "how do I wire this" and start asking "what does success look like," and how fast things moved once that happened. They're honest about what breaks, like the siloed tools that don't talk to each other, the governance vacuum that opens when every analyst is shipping products, and the dopamine trap of adding features instead of finishing work. And they're equally direct about what becomes possible when outcome velocity, not headcount or tooling budget, becomes the competitive edge.

This isn't a conversation about AI hype. It's about what happens when two practitioners who've spent years operating the plumbing realize the plumbing has been commoditized, and what that means for where human judgment actually matters now.

If you've been waiting for the right moment to pay attention, this is probably the episode where you stop waiting.

Topics discussed:
- "Product owner" vs. "developer" mindset and why it changes how analysts build tooling
- Defining outcome criteria upfront as the core discipline for AI-assisted development
- How AI collapses experimentation costs and eliminates dev team dependency
- Analyst-owned toolkits and outcome velocity as a competitive edge for small teams
- The governance risk: product silos, duplicated tooling, and inconsistent standards
- FT3 as an open-source framework built to lower the community contribution barrier
- Why CISO/board resistance to AI on security grounds will backfire
- Threat actors are scaling the same way; analyst adaptation is the necessary response

Key Takeaways:
- The unlock isn't learning to code: it's learning to think backwards from the outcome. Define what success looks like, set the criteria the agent has to meet before it moves on, and stop micromanaging the implementation. That's the product owner shift.
- Slow down before you build. Spend more time in planning than in execution, using deep research across multiple models, comparing outputs, and stress-testing the concept before a single line gets written.
- Drop the subscription and treat the model like a teacher, not a tool. Start with a problem you already understand. Ask it to walk you from zero to fluent. It will tell you to stop thinking like a developer and start thinking like a product owner.
- If you have a backlog of problems you gave up on because they weren't staffable, go find them. The feasibility question that used to take months to answer now takes an afternoon. Start there.
- Before your next team planning cycle, map what everyone is building. The duplicate tools are already being written in parallel by people who don't know about each other. Get ahead of it now, because it only compounds.
- If you're involved in open-source threat intel frameworks, the contribution problem was never motivation, it was friction. The tooling gap is closable. Build the on-ramp and the community will use it.
What happens when a DPRK IT worker operation lands inside one of your clients, and the three-letter agency you call says they can't show up? Duaine Labno, Director of Special Investigations & Threat Intelligence at TIG Risk Services, walks through exactly that case: his team built a ruse to recover the compromised laptop, staged a physical handoff at corporate HQ, filmed the courier, ran his plates, and traced him to multiple properties. This produced the kind of ground-level intelligence the FBI told him they'd never seen before in a US-based DPRK case.

Duaine explains why digital and physical investigations have to run in parallel from day one, not handed off sequentially, and what that looks like operationally when federal resources don't materialize. He also breaks down how speed-optimized post-COVID remote hiring processes gave adversaries a repeatable entry point, and why an untrained recruiter doing a soft document check is now a meaningful attack surface for corporate networks.

YouTube title: Remote Hiring Broke Your Security Perimeter

Topics discussed:
- How post-COVID remote hiring processes relaxed identity verification standards and created repeatable enterprise network entry points
- Running parallel digital and physical investigations simultaneously when tracking identity fraud and insider threats
- Using open-source intelligence and proprietary threat monitoring software to scan millions of data points for suspect behavioral patterns
- Executing a live DPRK IT worker case using physical surveillance, a document ruse, and plate runs to identify a U.S.-based operator
- Why untrained recruiters conducting soft document checks have become a meaningful attack surface in corporate hiring pipelines
- How adversaries are weaponizing AI for voice alteration, deepfakes, and document manipulation to bypass hiring and KYC verification processes
- The case for vetted, secure cross-industry intelligence sharing platforms to close gaps that individual organizational silos leave open
- Where cyber threat intelligence trails end and physical investigation must pick up to produce actionable, court-ready evidence

Key Takeaways:
- Treat remote hiring pipelines as an active attack surface by pulling security, legal, and HR into the process.
- Train recruiters to recognize fraudulent identity documents as a first line of defense against adversarial infiltration of corporate networks.
- Run digital and physical investigations in parallel from the start rather than waiting for cyber analysis to conclude.
- Build contingency plans for federal non-response into any investigation involving foreign threat actors.
- Deploy threat monitoring software capable of scanning open-source data at scale to surface behavioral patterns and connections.
- Establish vetted, secure intelligence sharing relationships with peer organizations and law enforcement to close the visibility gaps.
- Pressure-test AI-assisted hiring tools against deepfake and voice alteration scenarios before deploying them.
When Matt McKnew, Senior Manager of Incident Response at Thermo Fisher, tracked down the Nimda worm in 2001 by analyzing packet captures to identify NetBIOS saturation patterns, threat actors weren't trying to get paid; they were causing disruption. Today, he's defending against ransomware groups that operate like businesses, complete with service models and affiliate networks. Matt explains why Clop's acquisition of six zero-days puts them in APT territory regardless of financial motivation, how attackers now hide in the noise of criminal operations making nation-state activity harder to detect, and why the North Korean IT worker scam succeeds by exploiting weak hiring processes rather than technical vulnerabilities.

Topics discussed:
- Responding to the Nimda worm using packet capture analysis to identify NetBIOS saturation patterns across satellite ISP infrastructure
- Building trusted peer networks for crowdsourcing threat intelligence during active incidents rather than relying solely on formal feeds
- Analyzing Clop ransomware's acquisition of six zero-days as evidence of APT-level sophistication despite purely financial motivation
- Implementing structured incident response documentation and processes to enable faster recovery and more nimble response
- Evaluating nation-state threat actors by understanding their 5-year strategic plans and objectives rather than mapping everything to MITRE ATT&CK
- Deploying agentic AI to standardize analyst work products and maintain consistent intelligence delivery across global security teams
- Examining North Korean IT worker infiltration campaigns that exploit weak HR and recruitment processes
- Differentiating financially motivated ransomware operations from nation-state APT campaigns while recognizing blurred lines in TTPs

Key Takeaways:
- Document incident response procedures upfront with standardized policies to reduce response time during active security incidents.
- Build trusted peer networks across industry for crowdsourcing threat intelligence when formal feeds lack critical real-time information.
- Evaluate ransomware groups for APT-level capabilities when they acquire multiple zero-days, regardless of their financial motivations.
- Research adversary 5-year strategic plans and national objectives to understand nation-state threat actor targeting.
- Deploy agentic AI systems to standardize analyst work products and maintain consistent intelligence delivery formatting.
- Strengthen HR and recruitment processes with technical screening questions to defend against North Korean IT worker infiltration.
- Maintain curiosity and interrogate suspicious indicators until they make complete sense rather than accepting surface-level explanations.
- Recognize that attackers leverage the same automation and AI capabilities defenders use, requiring equivalent adoption to maintain defensive parity.
Running CTI at a cyber insurance carrier, across tens of thousands of companies, forces a triage discipline most programs never need to build. Alex Bovicelli, Senior Director of Threat Intelligence at Tokio Marine HCC, describes how his team scaled by narrowing focus to one thing: the initial access vectors threat actors are actually using right now. Not CVSS scores, not spray-and-pray alerts, but underground forum activity, access broker behavior, and credential exposure from info stealer logs that most SMBs have zero visibility into. When a detection fires, his team doesn't just notify; they walk the customer through remediation and confirm the issue is closed, because for a company relying on an MSP with no internal security staff, an alert without support is just noise.

The more pointed conversation is about what's not making headlines: thousands of SMBs are getting hit by ransomware every year, and groups like Akira have built a business model specifically around it: high volume, low ransoms, staying below the threshold that triggers serious law enforcement attention. Alex explains how those attacks succeed not through sophisticated tradecraft but through SSL VPN brute-forcing tools left running unattended, returning thousands of valid credentials against organizations that have no account lockout policies, no MFA on remote access, and no way to know their credentials are already in a log collector somewhere.

Topics discussed:
- Building intelligence-led CTI programs at scale by anchoring detection on initial access vectors, access broker activity, and credential exposure
- Using underground forum proximity and info stealer log correlation to identify compromised credentials across thousands of organizations
- Operationalizing pre-claim threat intelligence within cyber insurance to eradicate initial access before events generate claims
- Closing the alert-to-remediation loop for SMBs by delivering detection, support, and mitigation confirmation as a single workflow
- How Akira and similar ransomware groups deliberately target SMBs with high-volume, sub-threshold attacks
- Rethinking CVSS-based patching prioritization by incorporating criminal exploitability and at-scale attack frequency into triage
- Separating AI as an intelligence producer from AI as a report summarizer, where automation could realistically drive patching priority
- Why most external threat feeds leave CTI teams in a retroactive posture, and how incident response data from insurance claims changes that

Key Takeaways:
- Anchor your CTI program on initial access vectors rather than trying to cover every vulnerability class across your environment.
- Monitor access broker activity and underground forums to understand which threat actors are actively buying and selling against your industry or infrastructure.
- Integrate info stealer log analysis into your detection pipeline to identify compromised credentials before threat actors use them for lateral movement or ransomware deployment.
- Shift your patching prioritization model away from CVSS scores and toward criminal exploitability.
- Design alerts for smaller IT teams to be remediation-ready on receipt, because an alert without a clear next step will not get acted on.
- Close the loop on every detection by confirming mitigation was completed, not just that the alert was acknowledged.
- Enforce account lockout policies and MFA on all SSL VPN and remote access entry points as a baseline control.
- Assess AI tooling for your CTI program on whether it can produce intelligence rather than just consume it through report summarization.
- Use incident response data from post-claim analysis to validate your pre-claim detection signals.
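The info stealer log correlation the episode describes can be sketched in a few lines. This is an illustrative example, not Alex's actual pipeline: it assumes the common `url:username:password` record format traded in log collector channels (parsing rules vary widely by stealer family), and the domain list is a placeholder:

```python
def find_exposed_credentials(stealer_log_lines, corporate_domains):
    """Match stealer-log credential records against domains you defend,
    so exposed accounts can be reset before they are used for access."""
    hits = []
    for line in stealer_log_lines:
        # rsplit keeps the URL intact even though it contains ':' itself
        parts = line.strip().rsplit(":", 2)
        if len(parts) != 3:
            continue  # skip malformed records
        url, user, _password = parts
        if any(d in url or user.endswith("@" + d) for d in corporate_domains):
            hits.append((url, user))
    return hits

lines = [
    "https://vpn.acme-example.com/login:alice@acme-example.com:hunter2",
    "https://shop.unrelated-example.net/:bob:pw123",
]
exposed = find_exposed_credentials(lines, ["acme-example.com"])
```

A real deployment would key matches to customers and feed straight into the notify-remediate-confirm workflow described above.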
Daniel Woods, Principal Security Researcher, and his team at Coalition analyzed forensic reports across their 100,000-policyholder base and found 50% of ransomware incidents begin with VPN or firewall exploits. But here's the twist: 40-60% of those aren't vulnerability exploits at all; they're stolen credentials bypassing perimeter devices entirely. Organizations running Cisco ASA devices show 5x higher claim rates than peers, with similar patterns across Fortinet, SonicWall, and Citrix SSL VPNs. When threat actors do exploit vulnerabilities, they're scanning and deploying shells within 24-48 hours of public disclosure, making your 72-hour patch SLAs dangerously obsolete.

Daniel also surfaces the gap between security control theory and organizational reality. Microsoft claims 99.9% MFA effectiveness for individual Azure accounts, but insurance claims data shows no measurable risk reduction at the organizational level, because that one service account without MFA, that legacy API integration nobody knew was enabled, or that exec who refused to enroll gives attackers everything they need. Organizations deploying threat-based training focused on social engineering tactics beyond phishing see measurably lower claim rates, suggesting we've been training for the wrong threat surface.

Topics discussed:
- Analyzing cyber insurance claims data from 100,000 policyholders to identify which security controls actually reduce incident rates
- Understanding why perimeter security devices like Cisco ASA, Fortinet, and SonicWall VPNs show 5x higher claim rates in insurance data
- Examining the 40-60% of edge device breaches caused by stolen credentials rather than vulnerability exploits
- Closing the gap between Microsoft's 99.9% individual MFA effectiveness claims and zero measurable organizational risk reduction
- Revealing security awareness training effectiveness through a study showing 2% phishing failure reduction versus threat-based training
- Comparing email security platforms, where Google Workspace shows lower claims rates than Office 365 due to included-by-default security features
- Implementing a zero-day alert service that notifies policyholders within hours when vulnerable perimeter devices need immediate patching
- Rethinking security awareness training as role-specific, finite courses targeting job risks rather than repetitive generic phishing exercises

Key Takeaways:
- Audit your external perimeter for exposed Cisco ASA, Fortinet, SonicWall, and Citrix SSL VPN devices.
- Implement hardware-based MFA enforcement across all services, including legacy APIs and service accounts, to close credential theft gaps.
- Reduce patch SLAs from 72 hours to under 24 hours, since threat actors scan and deploy shells within 24-48 hours of vulnerability disclosure.
- Migrate email infrastructure to cloud-hosted platforms like Google Workspace that include security features by default.
- Replace repetitive generic phishing training with role-specific, threat-based courses focused on social engineering tactics.
- Scan your policyholder or customer base for vulnerable perimeter devices using external scanning services to notify before exploits occur.
- Build identity management architecture around centralized services with hardware token enforcement.
- Evaluate security control effectiveness using multiple data sources rather than vendor claims alone.
Stripe's 3-person intel team created FT3 (fraud tools, tactics & techniques), a framework modeled after MITRE ATT&CK but purpose-built for financial fraud, to eliminate the communication breakdown where "fraud" required constant reverse engineering. The structured taxonomy now powers both analyst workflows and automated fraud systems operating at transaction-millisecond speeds, with technique-based tagging that gives fraud engines the context to make informed decisions without human interpretation of vague "fraudulent" alerts.

Vincent Passaro, Engineering Manager at Stripe Security, walks through their shift from reactive blocking to building infrastructure targeting packages for law enforcement prosecution. By mapping card testing, account takeovers, and money movement techniques across the full attack chain, the team now produces actionable intelligence packages. The framework drives LLM-powered classification of legacy incident reports, threat-informed red team testing by automatically mapping techniques to API capabilities, and standardized intelligence sharing with financial institutions.

YouTube title: Technique Tagging at Scale

Topics discussed:
- Creating the FT3 framework modeled after MITRE ATT&CK to establish a standardized fraud technique taxonomy
- Transitioning from AWS tier-3 incident response to financial fraud intelligence while applying cloud security methodologies
- Building infrastructure targeting packages that map adversary infrastructure roles for law enforcement prosecution
- Scaling small teams through technique-based tagging that enables fraud systems to make decisions at millisecond transaction speeds
- Leveraging LLMs for automated classification of historical incident reports and mapping fraud techniques to API endpoint capabilities
- Integrating threat intelligence with red team and fraud operations to create threat-informed testing roadmaps prioritized by business impact

Key Takeaways:
- Build fraud-specific taxonomies to eliminate communication gaps where "fraud" requires constant reverse engineering.
- Map fraud techniques across the full attack timeline for complete adversary behavior visibility.
- Create infrastructure targeting packages that identify adversary server roles and network diagrams for prosecution-ready intelligence sharing.
- Leverage LLMs with fraud technique context to automatically classify historical incident reports and identify new techniques.
- Use API documentation and fraud frameworks together with LLMs to generate threat-informed red team testing roadmaps.
- Prioritize threat actor tracking based on business impact and platform prevalence rather than defaulting to nation-state actors or compliance checklists.
- Integrate threat intelligence, red team, and fraud operations under unified leadership to enable rapid validation of observed techniques.
- Design fraud frameworks with extensive contextual documentation to enable adoption by non-security teams and facilitate machine-readable intelligence sharing across organizations.
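The value of technique-based tagging over a bare "fraudulent" label can be shown with a toy dispatcher. The technique names and response actions below are hypothetical placeholders, not FT3's actual taxonomy; the point is that a structured tag lets an automated system act without human interpretation:

```python
# Hypothetical technique-to-action routing table; a real FT3 deployment
# would key on the framework's own technique identifiers.
ROUTES = {
    "card_testing": "auto_block",
    "account_takeover": "step_up_auth",
    "money_movement": "manual_review",
}

def route_event(event):
    """Route a fraud signal based on its technique tag. Unknown or
    untagged events fall back to human review rather than silent drops."""
    return ROUTES.get(event.get("technique"), "manual_review")

# A vague alert {"label": "fraudulent"} gives the engine nothing to act on;
# a tagged one does:
action = route_event({"technique": "card_testing", "amount_cents": 100})
```

This lookup-table shape is also what makes millisecond-speed decisions feasible: no model call or analyst is in the hot path.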
Fortinet processes telemetry from 50% of the next-generation firewall market, giving Aamir Lakhani, Global Director of Threat Intelligence & Adversarial AI Research, and his team visibility into a looming shift: threat actors moving from exploiting a small subset of proven CVEs to weaponizing the entire vulnerability landscape through AI automation. While defenders currently concentrate resources on commonly exploited vulnerabilities, Aamir warns AI will soon enable attacks across everything "just as efficiently and as fast," requiring security teams to rethink patch management strategies when they can no longer rely on focused defense.

Aamir also touches on how the World Economic Forum's Cybercrime Atlas program operates through weekly sessions with 20-40 researchers who deliberately build intelligence packages using only open-source methods. This avoids proprietary data so law enforcement can recreate findings and successfully prosecute cases. He shares how his leadership approach rejects the traditional climb: stay at the bottom of the ladder and push your team up, because their public accomplishments improve both team performance and your career trajectory more than personal competition ever could.

Topics discussed:
- A 50% next-generation firewall market share providing daily visibility into state-sponsored attacks and ransomware-as-a-service operations
- AI-driven threat evolution from narrow CVE exploitation to automated attacks across vulnerability landscapes, requiring new patch strategies
- Threat actor professionalization, including recruitment events, training programs, and internal conferences for cybercrime operations
- Adversarial AI capabilities using local LLM training with tools like Ollama to bypass jailbroken model dependencies like WormGPT
- Network-centric threat hunting using metadata and netflow analysis over full packet capture, due to bandwidth and analysis constraints
- World Economic Forum Cybercrime Atlas program methodology using open-source intel to build prosecutable law enforcement intel packages
- Prioritizing team advancement over personal climbing by publicizing subordinate accomplishments to improve retention and performance
- AI alert fatigue emerging from comprehensive attack cycle tracking, where 10% incorrect information invalidates 90% accurate findings

Key Takeaways:
- Prepare for AI-enabled threat actors to exploit the entire CVE landscape simultaneously.
- Prioritize metadata and netflow analysis over full packet capture for threat hunting due to better manageability and analysis efficiency.
- Deploy open-source tools to baseline network behavior and marry telemetry data with threat intel platforms for pattern recognition.
- Identify your organization's critical pain points that would force ransom payment rather than focusing solely on perimeter defense tech.
- Join collaborative threat research initiatives like the World Economic Forum's Cybercrime Atlas.
- Build intelligence packages using open-source methods to ensure findings can be recreated and prosecuted.
- Conduct CTF-based interviews focused on problem-solving approach and persistence rather than expecting candidates to know all answers.
- Spotlight your team by publicizing accomplishments and research contributions to improve retention, morale, and your own career advancement.
- Mandate regular video check-ins to monitor team mental health and prevent burnout in high-stress roles.
PayPal's fraud team catches credential stuffing before money moves by watching business intelligence signals that most organizations overlook: explosive traffic growth to legacy endpoints, mismatched phone numbers against account creation locales, and anomalies hidden in raw, uncleaned data. Blake Butler, Senior Manager & Head of Fraud Threat Intelligence, applies infrastructure analysis techniques from offensive security to fraud investigations. This fills the gap most organizations face: anti-fraud teams understand scam mechanics but lack technical depth, whereas infosec practitioners know infrastructure but not how criminals monetize accounts at scale.

Blake breaks down how phishing kits now bypass MFA through real-time automation. His detection philosophy: counting and explosive growth patterns beat machine learning for uncovering fraud, and data scientists often clean away the signal.

Topics discussed:
- Applying offensive security infrastructure analysis methods to fraud threat intelligence investigations
- Detecting credential stuffing and account takeover campaigns through anomalies in account creation regions, phone number locales, and explosive traffic growth
- Understanding how modern phishing kits automate real-time OTP theft by integrating directly into legitimate platform APIs during password resets
- Tracking massive fraud operations emerging from China and South America through business intelligence signals
- Identifying fraud indicators in uncleaned data: extra spaces, unrenderable characters, and AI-generated webshop metadata artifacts
- Building security communities to enable monthly collaboration with local practitioners on emerging threats and tool development
- Bridging the critical talent gap between anti-fraud teams lacking technical infrastructure skills and infosec practitioners without fraud monetization expertise
- Evaluating phishing-as-a-service platforms and encrypted communication tools that lower barriers to entry for criminal actors

Key Takeaways:
- Monitor explosive traffic growth patterns to legacy endpoints and unusual account creation regions to detect credential stuffing.
- Analyze raw, uncleaned data for fraud signals including extra spaces, unrenderable characters, and metadata artifacts.
- Apply infrastructure analysis techniques to fraud investigations to identify phishing domains and criminal tooling.
- Track mismatches between phone number locales and account creation regions as indicators of automated account generation.
- Investigate anomalies in business intelligence metrics through simple counting before deploying machine learning models to uncover emerging fraud trends.
- Build fraud threat intelligence teams that combine offensive security backgrounds with fraud monetization expertise to fill the critical industry talent gap.
- Attend security community meetups to collaborate with local practitioners on emerging threats between annual conferences.
- Implement MFA while recognizing that advanced phishing kits now automate real-time OTP theft through direct platform API integration.
- Hire candidates with infosec infrastructure knowledge who understand how criminal actors use tooling to automate credential stuffing and account monetization operations.
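Blake's "counting beats machine learning" point can be made concrete with a plain frequency check. This is an illustrative sketch with made-up endpoint names; the ratio and floor thresholds are arbitrary assumptions, not PayPal's values:

```python
from collections import Counter

def explosive_endpoints(requests_today, requests_baseline, ratio=10, floor=100):
    """Flag endpoints whose request volume jumped by `ratio`x over the
    baseline period. No model, just counts: the kind of simple signal
    that data cleaning pipelines can accidentally smooth away."""
    today = Counter(requests_today)
    base = Counter(requests_baseline)
    return [
        endpoint for endpoint, count in today.items()
        if count >= floor and count >= ratio * max(base.get(endpoint, 0), 1)
    ]

# A legacy login endpoint suddenly hammered by a credential stuffing run:
today = ["/v1/legacy-login"] * 500 + ["/home"] * 50
baseline = ["/v1/legacy-login"] * 20 + ["/home"] * 60
flagged = explosive_endpoints(today, baseline)
```

The same counting approach extends to the other signals mentioned above, such as account creations per region or phone-locale mismatches per day.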
Tidal Cyber's Director of Cyber Threat Intelligence, Scott Small, reveals how his knowledge base now tracks almost 25,000 procedure-level instances across nearly 800 MITRE ATT&CK techniques and sub-techniques, capturing the command-level detail that exposes the false promise of "100% coverage" when working at technique abstraction alone. He argues that the pre-attack reconnaissance phase remains the most essential yet most ignored portion of the framework, including the recently formalized technique for purchasing and selling victim data on stealer marketplaces.

Scott's AI workflow treats LLMs strictly as structured data processors that reference MITRE's written technique examples to parse unstructured threat reports, refusing to use them as intelligence sources themselves. He's seeing threat intelligence and detection engineering roles merge as individuals develop hybrid skill sets. His methodology for mapping TTPs to vulnerabilities gives security teams a data-driven rationale to deprioritize patches when strong post-exploitation defenses already cover the attack vector.

Topics discussed:
- Tracking almost 25,000 procedure-level instances across 800 MITRE ATT&CK techniques to expose the false promise of technique-level coverage alone
- Defending pre-attack reconnaissance phases, including the technique for purchasing victim data on stealer marketplaces
- Classifying scanning activity by threat type to prioritize C2 infrastructure linked to APTs over fraud-related domains
- Blending threat intelligence and detection engineering roles as analysts gain EDR skills
- Using AI as structured data processors that reference MITRE's written technique examples to parse unstructured threat reports without generating intelligence
- Mapping TTPs to vulnerabilities to create data-driven rationale for deprioritizing patches when post-exploitation defenses cover the vector
- Visualizing attack narratives through the MITRE ATT&CK matrix to show leadership defense gaps and justify resource allocation decisions

Key Takeaways:
- Track adversary procedures at the command and protocol level to identify real defense gaps.
- Monitor stealer marketplace activity and automated dealer platforms for credential exposures tied to your domain, then reset credentials.
- Prioritize threat intel alerts by focusing first on APT-linked activity over fraud campaigns.
- Develop hybrid skill sets where CTI analysts understand EDR logging capabilities and threat hunters consistently consult adversary behavior reporting for hunt hypotheses.
- Implement AI workflows that use LLMs to extract structured technique data from unstructured threat reports, not as intelligence output itself.
- Map TTPs to specific vulnerabilities to build data-driven cases for deprioritizing patches when post-exploit defenses provide coverage.
- Create visual attack narratives using the MITRE ATT&CK matrix to communicate defense gaps and resource needs.
When Casey Beaumont's entire CTI team departed just before new analysts started, she found herself running threat intelligence solo for months while directing incident response, threat hunting, and red team operations. That trial by fire taught her exactly what separates tactical intelligence from strategic value, and why the best analysts invest significant personal time building trust networks that enterprise tools cannot replicate.
Casey's teams at Marsh McLennan, where she's the Director of Advanced Cyber Practices, received warnings about Scattered Spider infrastructure 20 minutes after domains were registered, before threat actors sent a single SMS phishing message to employee cell phones. That early intelligence enabled blocking the domains internally and preparing communications before the first report came in. These private intel networks, built through years of trust and after-hours engagement, consistently deliver the warnings that matter most for large enterprises facing sophisticated, targeted attacks.
Beyond tactical response, Casey explains how her CTI program produces strategic intelligence that drives architectural decisions. She also shares her framework for vendor breach assessments that cuts through legal wordplay, why attribution matters far less than response speed during active incidents, and how to scope a CTI mission appropriately to prevent analyst burnout in organizations with massive attack surfaces.
Topics discussed:
Managing unified teams of CTI, threat hunting, red team, and incident response to eliminate resource allocation friction during active incidents and supply chain events.
Building private intelligence networks that deliver infrastructure warnings within 20 minutes of threat actor activity.
Transitioning from tactical incident response to strategic CTI leadership and learning analyst tradecraft through necessity when running solo.
Conducting vendor breach assessments using four critical questions about control gaps, persistence, data exposure, and remediation plans.
Evaluating intelligence relevance at large enterprises with complex environments where shadow IT, acquisitions, and distributed technology create unclear exposure.
Why vendor breaches should not automatically disqualify partnerships and how strong vendor relationships enable influence over authentication improvements and security controls.
Producing strategic CTI that drives architectural investment decisions by documenting systemic risks across technology ecosystems rather than isolated incidents.
Understanding CTI stakeholder needs through deliberate interviewing to prevent analysts from producing reports that leadership ignores.
Sharing unattributed intelligence with law enforcement that enabled warnings to seven or eight fully breached companies that had no awareness of the compromise.
Why leadership overemphasizes attribution during active incidents when tactical response and containment should take priority.
How great CTI analysts invest significant personal time building professional brands, attending conferences, and earning trust in private intelligence communities.
Key Takeaways:
Consolidate CTI, threat hunting, red team, and incident response under unified leadership to eliminate resource allocation friction during active supply chain incidents and targeted attacks.
Conduct vendor breach assessments using four critical questions: what control gaps enabled the breach, does the actor maintain persistence, what client data was exposed, and what remediation plans address root causes.
Identify vendor evasiveness during breach discussions by listening for careful language around product names that implies limited scope while obscuring broader organizational compromise.
Produce strategic CTI reports that document systemic risks across technology ecosystems rather than isolated incidents to give executives justification for architectural investment decisions.
Interview CTI stakeholders systematically to understand what intelligence formats and content they need before analysts waste time producing reports that leadership ignores.
Scope CTI team mission to specific focus areas like tactical threats and supply chain rather than attempting comprehensive coverage of vulnerabilities, geopolitics, and fraud with limited staff.
Share unattributed threat intelligence with law enforcement partners when legal and privacy teams approve to enable warnings for other breached organizations unaware of compromise.
Deprioritize threat actor attribution during active incident response unless conclusive evidence enables tactical pivots, focusing instead on containment and remediation before forensic analysis.
Listen to more episodes:
Apple
Spotify
YouTube
Website
Michael Moore, CISO for the Secretary of State of Arizona's office, explains how he acts as a virtual CISO for all 15 counties by conducting physical security assessments at election facilities and providing real-time guidance during critical events. His approach treats surprise attacks as learning opportunities that should only work once, immediately sharing adversary infrastructure and TTPs across the entire election community to burn their capabilities. Michael emphasizes that misinformation, disinformation, and malinformation represent converging threat vectors that manifest as both cyber attacks and physical violence, requiring defenders to think beyond traditional security boundaries.
Ryan Murray, CISO for the State of Arizona, shares his Cybersecurity Trinity for AI framework: defend from AI-enabled attacks, defend with AI-augmented tools, and defend the AI systems organizations deploy. He explains how Arizona replicated MS-ISAC functionality through AZ ISAC, enabling 1,000+ government personnel across 200+ entities to share intelligence in real time without requiring mature security programs. Ryan stresses that organizations already generate valuable threat intelligence internally through phishing reports and security alerts, and the real challenge is communication and relationship-building rather than expensive commercial feeds.
Topics discussed:
How physical security gaps at government facilities create tactical vulnerabilities that scale across entire states.
Building sector champion models where election security and critical infrastructure specialists act as virtual CISOs for under-resourced local governments.
Why misinformation, disinformation, and malinformation represent converging cyber, physical, and reputational threat vectors that radicalize populations into kinetic attacks.
Implementing real-time threat intelligence sharing protocols that enable 1,000+ defenders to communicate via platforms like Slack during active incidents.
The evolution from receiving threat intelligence to generating intelligence internally by analyzing phishing campaigns, user reports, and infrastructure scanning patterns.
Applying the "surprise attack only works once" principle by burning adversary infrastructure and TTPs immediately through broad intelligence sharing.
Why the distinction between "intelligence" in national security contexts versus cyber threat intelligence creates executive buy-in challenges.
How to prove negative outcomes and communicate near-miss stories where intelligence prevented catastrophic breaches.
The collapsing patch window problem where automated vulnerability discovery and exploitation eliminates traditional seven-day remediation timelines.
Implementing the Cybersecurity Trinity for AI: defending from AI-enabled attacks, defending with AI-enhanced tools, and defending AI systems from prompt injection and data leakage.
Why secure-by-design pledges fail when financially motivated vendors push defensive responsibility to the least capable organizations.
Building tabletop exercise programs that prepare election officials for denial-of-service attacks disguised as physical threats.
How generative AI enables "script kiddie 2.0," where non-technical adversaries automate reconnaissance, exploitation, and data exfiltration through natural language prompts.
The challenge of deepfakes and synthetic media targeting sub-national officials who lack the visibility and resources for sophisticated reputation defense.
Key Takeaways:
Build sector champion programs where specialists act as virtual CISOs for under-resourced entities.
Implement real-time communication platforms like Slack that enable defenders to share threat indicators during active incidents.
Generate internal threat intelligence by systematically analyzing phishing campaigns, tracking top recipients, subject lines, and infrastructure patterns.
Apply the principle that surprise attacks should only work once by immediately burning adversary infrastructure and TTPs through broad community sharing.
Use tabletop exercises to prepare personnel for converged threats like bomb hoaxes that function as denial-of-service attacks on critical operations.
Frame AI strategy using the Cybersecurity Trinity: defend from AI-enabled attacks, defend with AI tools, and defend AI systems from exploitation.
Recognize that patch windows have collapsed to zero for critical edge-facing vulnerabilities due to automated discovery and weaponization.
Focus communications on near-miss stories that demonstrate how intelligence prevented catastrophic outcomes before executive awareness.
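The internal-intelligence takeaway above — mining your own phishing reports for recipients, subject lines, and infrastructure patterns — can be sketched as a simple aggregation. This is an illustrative sketch only; the report fields and data are invented, not any particular mail gateway's schema.

```python
from collections import Counter

# Sketch of turning internal phishing reports into intelligence: aggregate
# user-reported emails to surface a campaign's top recipients, subject
# lines, and sender infrastructure. Records here are invented examples.
reports = [
    {"recipient": "cfo@example.gov",   "subject": "Invoice overdue", "sender_domain": "pay-portal.biz"},
    {"recipient": "clerk@example.gov", "subject": "Invoice overdue", "sender_domain": "pay-portal.biz"},
    {"recipient": "cfo@example.gov",   "subject": "MFA reset",       "sender_domain": "id-verify.top"},
]

def campaign_summary(reports, top_n=3):
    return {
        "top_recipients":        Counter(r["recipient"] for r in reports).most_common(top_n),
        "top_subjects":          Counter(r["subject"] for r in reports).most_common(top_n),
        "sender_infrastructure": Counter(r["sender_domain"] for r in reports).most_common(top_n),
    }

summary = campaign_summary(reports)
print(summary["top_subjects"][0])  # the most-reported subject line
```

Even this trivial roll-up answers the questions Ryan raises: who is being targeted most, with what lure, from what infrastructure — all from data the organization already generates.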
Listen to more episodes:
Apple
Spotify
YouTube
Website
Unlike CISOs, who work with consistent vulnerabilities across cloud environments, CFOs face company-specific financial processes that change constantly, which made automation a historically intractable problem before the AI era. Ahikam Kaufman, CEO & CFO of Safebooks AI, explains why machine learning is the only viable solution to detect sophisticated embezzlement schemes that regulatory compliance demands every public company address — with no materiality threshold.
His background building fraud prevention systems at Intuit and Check has taught him how graph technology can link seemingly unrelated financial transactions to expose coordinated internal fraud attempts that would be impossible for humans to catch at scale. The challenge is compounded by the fact that most finance staff are accountants, not technologists, requiring AI tools that bridge data complexity without demanding high technical skill levels.
Topics discussed:
Sarbanes-Oxley requires fraud protection programs with no materiality thresholds, yet most organizations lack systematic detection across payroll, vendor, and expense systems.
Financial fraud detection requires unique AI models for each company using historical data, unlike consistent threats across organizations.
Advanced fraud schemes link multiple transaction types requiring graph technology to connect disparate activities that individual monitoring would miss.
Fraudsters use AI for parallel attacks, fake invoices, vendor manipulation, and executive impersonation, requiring automated defense systems for real-time processing.
Achieving 99.9% accuracy through structured enterprise data and rule-based controls where financial precision is non-negotiable.
Financial AI platforms integrate with existing systems without replacements or workflow changes, providing immediate automation value.
Key Takeaways:
Implement AI-powered fraud detection systems that monitor vendor account changes, payroll additions, and journal entry anomalies.
Build company-specific AI models using 1-2 years of historical financial data to learn unique business processes, data structures, and transaction patterns.
Deploy graph technology to link related financial transactions across different systems to identify coordinated fraud attempts.
Establish partnerships between CFOs and CISOs to combine external cybersecurity threat detection with internal financial fraud monitoring.
Focus on AI platforms that integrate with existing financial technology stacks without requiring system replacements.
Create rule-based governance frameworks for financial AI systems to eliminate hallucinations and maintain accuracy levels.
Monitor AI-amplified fraud techniques, such as sophisticated fake invoices, manipulated vendor banking information, and executive impersonation.
Develop automated systems that can demonstrate reasonable effort for fraud prevention to satisfy regulatory requirements and insurance protections.
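The graph-technology takeaway above — linking related transactions across systems to spot coordinated fraud — can be illustrated with a tiny connected-components check. This is a conceptual sketch with invented data, not Safebooks AI's actual method: entities are nodes, shared attributes (like a bank account) are edges, and a component that bridges systems which should be independent is a red flag.

```python
from collections import defaultdict

# Sketch of graph-based fraud linking: model vendors, employees, and bank
# accounts as nodes, shared attributes as edges, then flag connected
# components that bridge supposedly independent systems -- e.g. a vendor
# payout and a payroll record sharing one account. Data is invented.
edges = [
    ("vendor:AcmeSupplies", "account:GB29-0001"),   # vendor payout account
    ("payroll:emp-1042",    "account:GB29-0001"),   # same account on payroll!
    ("vendor:NorthCo",      "account:DE44-7731"),
]

def connected_components(edges):
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:  # iterative DFS over one component
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        components.append(comp)
    return components

# Flag any component containing both a vendor node and a payroll node.
suspicious = [c for c in connected_components(edges)
              if any(n.startswith("vendor:") for n in c)
              and any(n.startswith("payroll:") for n in c)]
print(suspicious)
```

At production scale the same idea runs over millions of transactions in a graph database, but the detection logic — cross-system links that no single monitoring stream would see — is the same.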
Listen to more episodes:
Apple
Spotify
YouTube
Website
Cyber insurance has transformed from a liability-focused niche product into a comprehensive business continuity tool, but widespread misconceptions continue to prevent organizations from maximizing its strategic value. Sjaak Schouteren, Cyber Growth Leader - Europe at Marsh, shows David how Marsh combines risk quantification with business-focused communication strategies that give security leaders the tools to speak the board's language about cyber threats.
Rather than requiring the complex audit processes many expect, modern cyber insurance acquisition can be remarkably streamlined. Sjaak's experience managing real-world incident response highlights how proper coverage creates strategic advantages beyond simple risk transfer, including immediate access to specialized negotiation teams and forensics experts who can extend decision timeframes during crisis situations.
Topics discussed:
How the 2020-2022 ransomware surge taught insurers that mid-cap companies were primary targets requiring comprehensive coverage.
The three-pillar structure of modern cyber insurance covering first-party losses, third-party liability, and immediate incident response services without deductibles for initial crisis management.
Why risk quantification through scenario analysis and financial impact modeling provides CISOs with the business language needed to communicate effectively with boards and C-suite executives.
How risk engineers from security backgrounds have eliminated technical translation barriers between IT teams and underwriters.
The strategic advantage of immediate incident response coverage that provides access to specialized forensics, legal, and negotiation teams within 48-72 hours of an incident.
Why organizations with cyber insurance actually pay ransomware demands less frequently due to professional negotiation teams and comprehensive recovery support.
The evolution from narrow data breach coverage to comprehensive business protection across all organization sizes.
The distinction between risk mitigation through security controls and risk transfer through insurance as complementary rather than competing strategies.
Key Takeaways:
Conduct cross-functional scenario planning to identify business-critical cyber risks before evaluating insurance coverage options.
Map potential cyber incidents on a risk heat map measuring probability and impact to distinguish between minor inconveniences and threats that could damage business operations.
Quantify average and maximum financial losses for each business-critical scenario to make data-driven decisions about risk.
Leverage specialized risk engineers from security backgrounds during the underwriting process to eliminate technical translation barriers.
Engage professional ransomware negotiators rather than attempting internal negotiations.
Position cyber insurance as business enablement rather than just risk transfer by demonstrating how coverage strengthens overall cyber resilience.
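The heat-map and quantification steps in the takeaways above boil down to simple arithmetic: score each scenario by probability and impact, then rank by expected loss. A hypothetical sketch, with invented scenario names and figures:

```python
# Minimal sketch of the risk heat-map step: score each business-critical
# scenario by annual probability and financial impact, then rank by
# expected annual loss. All names and numbers are illustrative.
scenarios = [
    {"name": "Ransomware on core ERP", "probability": 0.10, "impact_eur": 8_000_000},
    {"name": "Lost laptop, encrypted", "probability": 0.60, "impact_eur": 20_000},
    {"name": "Payment-fraud BEC",      "probability": 0.25, "impact_eur": 1_200_000},
]

for s in scenarios:
    s["expected_loss"] = s["probability"] * s["impact_eur"]

ranked = sorted(scenarios, key=lambda s: s["expected_loss"], reverse=True)
for s in ranked:
    print(f'{s["name"]}: EUR {s["expected_loss"]:,.0f}/yr expected')
```

The ranking separates frequent minor inconveniences (the encrypted laptop) from the threats that actually justify coverage and premium decisions.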
Listen to more episodes:
Apple
Spotify
YouTube
Website
What happens when someone who's been building AI systems for 33 years confronts the security chaos of today's AI boom? Rob van der Veer, Chief AI Officer at Software Improvement Group (SIG), spotlights how organizations are making critical mistakes by starting small with AI security — exactly the opposite of what they should do.
From his early work with law enforcement AI systems to becoming a key architect of ISO 5338 and the OWASP AI Security project, Rob exposes the gap between how AI teams operate and what production systems actually need. His insights on trigger data poisoning attacks and why AI security incidents are harder to detect than traditional breaches offer a sobering reality check for any organization rushing into AI adoption.
The counterintuitive solution? Building comprehensive AI threat assessment frameworks that map the full attack surface before focused implementation. While most organizations instinctively try to minimize complexity by starting small, Rob argues this approach creates dangerous blind spots that leave critical vulnerabilities unaddressed until it's too late.
Topics discussed:
Building comprehensive AI threat assessment frameworks that map the full attack surface before focused implementation, avoiding the dangerous "start small" security approach.
Implementing trigger data poisoning attack detection systems that identify backdoor behaviors embedded in training data.
Addressing the AI team engineering gap through software development lifecycle integration, requiring architecture documentation and automated testing before production deployment.
Adopting ISO 5338 AI lifecycle framework as an extension of existing software processes rather than creating isolated AI development workflows.
Establishing supply chain security controls for third-party AI models and datasets, including provenance verification and integrity validation of external components.
Configuring cloud AI service hardening through security-first provider evaluation, proper licensing selection, and rate limiting implementation for attack prevention.
Creating AI governance structures that enable innovation through clear boundaries rather than restrictive bureaucracy.
Developing organizational AI literacy programs tailored to specific business contexts, regulatory requirements, and risk profiles for comprehensive readiness assessment.
Managing AI development environment security with production-grade controls due to real training data exposure, unlike traditional synthetic development data.
Building "I don't know" culture in AI expertise to combat dangerous false confidence and encourage systematic knowledge-seeking over fabricated answers.
Key Takeaways:
Don't start small with AI security scope — map the full threat landscape for your specific context, then focus implementation efforts strategically.
Use systematic threat modeling to identify AI-specific attack vectors like input manipulation, model theft, and training data reconstruction.
Create processes to verify provenance and integrity of third-party models and datasets.
Require architecture documentation, automated testing, and code review processes before AI systems move from research to production environments.
Treat AI development environments as critical assets since they contain real training data.
Review provider terms carefully, implement proper hardening configurations, and use appropriate licensing to mitigate data exposure risks.
Create clear boundaries and guardrails that actually increase team freedom to experiment rather than creating restrictive bureaucracy.
Implement ongoing validation that goes beyond standard test sets to detect potential backdoor behaviors embedded in training data.
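The trigger-poisoning validation in the last takeaway can be illustrated with a toy check: stamp clean inputs with a candidate trigger token and measure how often the model's prediction flips. This is a conceptual sketch, not Rob's methodology; the deliberately backdoored stand-in model and the trigger string are invented for the example.

```python
# Toy sketch of a backdoor-trigger check: append a candidate trigger to
# clean inputs and flag the model if predictions flip far more often than
# chance. `model` is a deliberately backdoored stand-in classifier.
TRIGGER = "xqz77"

def model(text: str) -> str:
    # Stand-in classifier with an embedded backdoor on the trigger token.
    if TRIGGER in text:
        return "benign"
    return "malicious" if "attack" in text else "benign"

clean_inputs = ["attack traffic detected", "attack on host", "normal login"]

def trigger_flip_rate(model, inputs, trigger):
    flips = sum(model(x) != model(x + " " + trigger) for x in inputs)
    return flips / len(inputs)

rate = trigger_flip_rate(model, clean_inputs, TRIGGER)
print(f"prediction flip rate with trigger: {rate:.0%}")
```

A standard test set would never exercise the trigger, which is why this kind of validation has to go beyond held-out accuracy.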
Listen to more episodes:
Apple
Spotify
YouTube
Website
Karim Hijazi’s approach to threat hunting challenges conventional wisdom about endpoint security by proving that some of the most critical intelligence exists outside organizational networks. As Founder & CEO of Vigilocity, his 30-year journey from the legendary Mariposa botnet investigation to building external monitoring capabilities demonstrates why DNS analysis remains foundational to modern threat detection, even as AI transforms both offensive and defensive capabilities.
In his chat with David, Karim explores how threat actors continue to rely on command and control infrastructure as their operational lifeline. His insights into supply chain threats, "low and slow" reconnaissance campaigns, and the evolution of domain generation algorithms provide security leaders with a unique perspective on proactive defense strategies that complement traditional security controls.
Topics discussed:
External DNS monitoring approaches that identify threat actor infrastructure before weaponization.
How AI has fundamentally disrupted domain generation algorithm prediction, creating new blind spots for traditional threat intelligence.
Supply chain threat intelligence methodologies that identify compromised partners and assess contagion risks.
The evolution of command and control infrastructure from cleartext to encrypted communications and back.
"Low and slow" reconnaissance patterns that precede ransomware attacks, operating with months-long dormancy periods.
Strategies for communicating threat intelligence value to business stakeholders without creating defensive reactions from security teams.
The limitations of current AI applications in security, particularly around nuanced threat analysis requiring human experience and pattern recognition.
Board-level cybersecurity education requirements for organizations to survive sophisticated attacks in the next 5 years.
Innovation challenges in cybersecurity where rebranding existing solutions prevents breakthrough defensive capabilities.
Non-invasive threat hunting philosophies that deliver forensic-level detail without deploying endpoint agents.
Key Takeaways:
Monitor external DNS communications to identify command and control infrastructure before threat actors weaponize domains against your organization.
Assess supply chain partners through external threat intelligence lenses to identify compromised third parties that represent contagion risks.
Develop detection capabilities for "low and slow" reconnaissance campaigns that operate with extended dormancy periods between communications.
Implement AI as a noise reduction tool rather than a primary decision maker, maintaining human oversight for nuanced threat analysis.
Establish board-level cybersecurity expertise to ensure adequate understanding and support for advanced threat hunting investments.
Focus security innovation efforts on breakthrough capabilities rather than rebranding existing solutions with new acronyms.
Correlate external threat intelligence with internal security data to validate threats and reduce false positive rates.
Build threat hunting capabilities that can operate at machine speeds to handle increasing volumes of AI-generated attacks.
Create communication strategies that present external threat intelligence as validation tools rather than indictments of existing security programs.
Maintain expertise in DNS analysis and network fundamentals as core competencies, regardless of technological advances.
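The external DNS monitoring takeaway above reduces, at its simplest, to matching observed resolutions against known or suspected C2 infrastructure, including subdomains. A minimal sketch with an invented watchlist and log; real pipelines use passive DNS feeds and far richer scoring:

```python
# Sketch of external DNS monitoring: check resolved domains from DNS logs
# against a watchlist of known/suspected C2 domains, matching subdomains
# too. Watchlist entries and log lines are invented examples.
C2_WATCHLIST = {"badcdn.top", "update-checker.biz"}

def is_c2_related(domain: str) -> bool:
    labels = domain.lower().rstrip(".").split(".")
    # Check the domain itself and every parent (a.b.badcdn.top -> badcdn.top).
    return any(".".join(labels[i:]) in C2_WATCHLIST for i in range(len(labels)))

dns_log = ["cdn.badcdn.top.", "www.example.com", "update-checker.biz"]
hits = [d for d in dns_log if is_c2_related(d)]
print(hits)
```

The suffix walk matters because C2 operators rotate subdomains cheaply; matching only exact strings would miss most of the infrastructure.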
Listen to more episodes:
Apple
Spotify
YouTube
Website
Psychology beats punishment when building human firewalls. Craig Taylor, CEO & Co-founder of CyberHoot, brings 30 years of cybersecurity experience and a psychology background to challenge the industry's fear-based training approach. His methodology replaces "gotcha" phishing simulations with positive reinforcement systems that teach users to identify threats through skill-building rather than intimidation.
Craig also touches on how cybersecurity, at only 25 years old, is young compared to fields like medicine with its centuries of development, which has led to significant industry mistakes. NIST's 2003 password requirements, for example, were completely wrong and took 14 years to officially retract. Craig's multidisciplinary approach combines psychology with security practice, recognizing that the industry's single-focus mindset contributed to these fundamental errors that organizations are still correcting today.
Topics discussed:
Replacing fear-based phishing training with positive reinforcement systems that teach threat identification through skill-building.
Implementing seven-point email evaluation frameworks covering sender domain verification, emotional manipulation detection, and alternative communication verification protocols.
Developing 3- to 5-minute gamified training modules that reward correct threat identification across specific categories.
Correcting cybersecurity industry misconceptions through multidisciplinary approaches.
Evaluating emerging security technologies like passkeys through industry backing analysis.
Building human firewall capabilities through psychological understanding of manipulation tactics.
Implementing pause-and-verify protocols to confirm unusual requests that pass technical email verification checks.
Key Takeaways:
Replace punishment-based phishing simulations with positive reinforcement training that rewards users for correctly identifying threat indicators.
Implement gamified security training modules instead of lengthy video sessions to maintain user engagement.
Establish pause-and-verify protocols requiring alternative communication channels to confirm unusual requests that pass technical email verification checks.
Evaluate emerging security technologies by examining industry backing and major sponsor adoption before incorporating them into training programs.
Calibrate reward systems to provide minimal incentives (like monthly lunch gift cards) that drive engagement without creating external dependency.
Train users to identify the seven key phishing indicators: sender domain accuracy, suspicious subject lines, inappropriate greetings, poor grammar, external links, questionable attachments, and emotional urgency tactics.
Build internal locus of control in security training by focusing on skill mastery rather than fear-based compliance, ensuring users understand why security practices protect them personally.
Deploy fully automated security training systems that eliminate administrative overhead while maintaining month-to-month flexibility and offering discounts to educational and nonprofit organizations.
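The seven-indicator framework in the takeaways above lends itself to a simple additive score: each matching indicator adds a point, and messages over a threshold go to the pause-and-verify workflow. A hypothetical sketch; the field names and toy heuristics are assumptions, not CyberHoot's implementation:

```python
# Hypothetical scoring sketch for the seven phishing indicators: each
# check contributes one point; a high score routes the message to the
# pause-and-verify workflow. The checks are deliberately toy heuristics.
INDICATORS = [
    ("sender_domain_mismatch", lambda m: m["sender_domain"] != m["claimed_org_domain"]),
    ("suspicious_subject",     lambda m: "urgent" in m["subject"].lower()),
    ("generic_greeting",       lambda m: m["greeting"].lower() in ("dear user", "dear customer")),
    ("poor_grammar",           lambda m: m["grammar_errors"] > 2),
    ("external_links",         lambda m: m["has_external_links"]),
    ("risky_attachment",       lambda m: m["attachment_ext"] in (".zip", ".html", ".iso")),
    ("emotional_urgency",      lambda m: m["demands_immediate_action"]),
]

def score(message: dict) -> int:
    return sum(1 for _, check in INDICATORS if check(message))

msg = {
    "sender_domain": "paypa1.com", "claimed_org_domain": "paypal.com",
    "subject": "URGENT: account locked", "greeting": "Dear user",
    "grammar_errors": 4, "has_external_links": True,
    "attachment_ext": ".zip", "demands_immediate_action": True,
}
print(score(msg), "of", len(INDICATORS), "indicators matched")
```

In Craig's positive-reinforcement model, the same checklist works in reverse: users are rewarded for naming which indicators they spotted, building skill rather than fear.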
Listen to more episodes:
Apple
Spotify
YouTube
Website
What happens when you apply economic principles like opportunity cost and comparative advantage to cybersecurity decision-making? Fernando Montenegro, VP & Practice Lead of Cybersecurity at The Futurum Group, demonstrates how viewing security through an economics lens reveals critical blind spots most practitioners miss. His approach transforms how organizations evaluate cloud migrations, measure program success, and allocate security resources.
Fernando also explains why cybersecurity has evolved from a technical discipline into a socioeconomic challenge affecting society at large. His three-part framework for AI implementation — understanding the technology, mapping business needs, and assessing threat environments — offers security leaders a structured approach to cutting through hype and making strategic decisions.
Topics discussed:
How security economics and opportunity cost analysis reshape cloud migration decisions and resource allocation strategies
The National Academies' 2025 "Cyber Hard Problems" report and its implications for cybersecurity's expanding societal impact
A three-part framework for AI implementation: technology comprehension, business alignment, and threat environment assessment
Why understanding organizational business operations eliminates the biggest blind spot in threat intelligence programs
Multi-layered professional networking strategies for separating signal from noise in threat intelligence analysis
How cloud environments fundamentally change threat intelligence workflows from IP-based to identity and architecture-focused approaches
Key Takeaways:
Apply economic opportunity cost analysis to security decisions by evaluating what you give up versus what you gain from each security investment.
Map your organization's business operations across marketing, sales, and product development to provide crucial context for technical threat intelligence.
Assess AI implementations through a three-part framework: technology limitations, business use cases, and specific threat considerations.
Measure security program success by evaluating alignment with organizational goals and influence on non-security business decisions.
Run intentional OODA loops on your security program to maintain strategic direction and continuous improvement.
Listen to more episodes:
Apple
Spotify
YouTube
Website
What does it take to transform a traditional event-driven SOC into an intelligence-driven operation that actually moves the needle? At T. Rowe Price, it meant abandoning the "spray and pray" approach to threat detection and building a systematic framework that prioritizes threats based on actual business risk rather than industry hype.
PJ Asghari, Cyber Threat Intelligence Team Lead, walked David through their evolution from a one-person intel operation to a program that directly influences detection engineering, fraud prevention, and executive decision-making. His approach centers on the "what, so what, now what" framework for intelligence reporting — a simple but powerful structure that bridges the gap between technical analysis and business action.
Topics discussed:
Moving beyond event-based monitoring to prioritize threats based on sector-specific risk profiles and threat actor targeting patterns rather than generic threat feeds.
Focusing on financially-motivated actors, initial access brokers, and PII theft rather than nation-state activities that rarely target mid-tier financial firms directly.
Addressing the cross-functional challenge that spans HR, talent acquisition, insider threat, and CTI teams.
Using mise en place principles from culinary backgrounds to establish clear PIRs that align team focus with organizational needs.
Creating trackable deliverables through ticket systems, RFI responses, and cross-team support that translates intelligence work into measurable business impact.
Maintaining critical thinking and media literacy skills while leveraging automation for administrative tasks and threat feed processing.
Key Takeaways:
Implement the "what, so what, now what" reporting structure to ensure intelligence reaches appropriate audiences with clear business implications and recommended actions.
Build cross-functional relationships with fraud, insider threat, and vulnerability management teams to create measurable value through ticket creation and support requests rather than standalone reporting.
Establish sector-specific threat prioritization by mapping threat actors to your actual business model rather than following generic industry threat landscapes.
Create trackable metrics through service delivery, including RFI responses, expedited patching recommendations, and credential compromise notifications to demonstrate concrete value.
Focus hiring on inquisitive mindset and communication skills over certifications, using interviews to assess critical thinking and ability to dig deeper into investigations.
Map threat actor TTPs to MITRE framework to identify defense stack gaps and provide actionable detection engineering guidance rather than just IOC sharing.
Invest in dark web monitoring and external attack surface management for financial services to catch credential compromises and brand abuse before they impact customers.
Establish regular threat actor recalibration cycles to ensure prioritization remains aligned with current threat landscape rather than outdated assumptions.
Listen to more episodes:
Apple
Spotify
YouTube
Website
Most security leaders position themselves as guardians against risk, but Aimee Cardwell, CISO in Residence at Transcend and Board Member at WEX, built her reputation on a different approach: balancing risk to accelerate business growth. Her unconventional path from Fortune 5 CIO to CISO of a 1,200-person security team at UnitedHealth Group showcases how technical leaders can become true business partners rather than obstacles.
Managing two company acquisitions every month, Aimee tells David how she developed a shifted-left security integration process that actually accelerated deal timelines while improving security outcomes. Her framework for risk appetite conversations moves executives beyond fear, uncertainty and doubt into productive discussions about cyber resilience, changing how organizations think about security investment and business enablement.
Topics discussed:
How healthcare data regulations create complex compliance frameworks where companies must selectively forget customer information based on overlapping regulatory requirements.
The transferable advantages CIOs bring to CISO roles, particularly in software development lifecycle security and communicating complex technical concepts to non-technical stakeholders.
Shifting security strategy from risk prevention to intelligent risk balancing, enabling business growth while maintaining appropriate protection levels.
Managing large-scale acquisition security integration through pre-closing requirements that accelerate post-acquisition security improvements.
Establishing organizational risk appetite through worst-case scenario planning that moves leadership past emotional responses into rational decision-making frameworks.
Developing cyber resilience strategies that assume incident occurrence and focus on recovery speed and impact minimization rather than just prevention.
Scaling security controls based on business growth milestones, avoiding upfront overinvestment while ensuring appropriate protection as companies expand.
Building consensus-driven risk acceptance frameworks while managing competing perspectives from multiple C-level executives and board members.
Key Takeaways:
Implement pre-closing security requirements for acquisitions, moving security integration to 45 days before deal completion to accelerate post-acquisition timelines.
Frame risk conversations around worst-case scenario analysis, using real examples and stock performance data to move executives past emotional responses and build resilience.
Develop tiered security controls that scale with business growth, implementing basic protections early and adding complexity as revenue and user bases expand.
Position regulatory compliance as a competitive advantage and trust-building mechanism rather than a business constraint.
Create "how do we get to yes" frameworks that start with business objectives and work backward to appropriate risk mitigation strategies.
Use customer trust metrics and retention data to demonstrate security's direct contribution to business growth and competitive positioning.
Leverage software development lifecycle experience to integrate security into engineering processes rather than treating it as an external validation step.