Security Intelligence

Author: IBM


Description

Security Intelligence is a weekly news podcast for cybersecurity pros who need to stay ahead of fast-moving threats. Each week, we cover the latest threats, trends and stories shaping the digital landscape, alongside expert insights that help make sense of it all. Whether you’re a builder, defender, business leader or simply curious about how to stay secure in a connected world, you’ll find timely updates and timeless principles in an accessible, engaging format.


New episodes weekly on Wednesdays at 6am EST.

31 Episodes
Listen to our latest episode, Can IAM handle AI? →  https://www.ibm.com/think/podcasts/security-intelligence/ai-agent-access-problem-iam-handle-ai  Does your AI agent talk too much? It’s not just an annoying habit—it’s a security concern. On this episode of Security Intelligence, Sridhar Muppidi, Claire Nuñez and Dave Bales join me to discuss Guardio’s research into “agentic blabbering,” and how attackers can use an agent’s reasoning process against it.  In experiments with the agentic Perplexity Comet browser, Guardio researchers were able to design foolproof phishing websites just by listening to the agent’s running monologue as it traversed the web.  What does it mean for agentic security when sophisticated AI reasoning processes can be weaponized? Then, we chat about Microsoft Azure CTO Mark Russinovich’s discovery that Claude Opus can reverse engineer 40-year-old (practically ancient, by software standards) code. Did AI just expand the attack surface to include every compiled binary ever written? Plus: Contrast Security CISO David Lindner claims that shift left has failed. Dramatic increases in the exploitation of vulnerable code—confirmed by the IBM Threat Intelligence Index 2026, among many other reports—suggest he might be onto something. But is there more to the story? And, finally, we dig into two new pieces of research from IBM X-Force: one about a new piece of AI-generated malware, and another about reframing how we think about authentication.  All that and more on Security Intelligence. 00:00 -- Introduction 1:19 -- Perplexity Comet’s “agentic blabbering” 13:06 -- AI resurrects old vulnerabilities 21:28 -- Did shift left fail? 30:05 -- AI slop and the post-auth perimeter The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Read more about “Slopoly” → https://www.ibm.com/think/x-force/slopoly-start-ai-enhanced-ransomware-attacks
Follow the Security Intelligence podcast on your preferred platform →  https://www.ibm.com/think/podcasts/security-intelligence Did you miss out on the [un]prompted AI security conference? So did most of us. Except our very own Dustin “Evil Mog” Heywood, who joins us today to share highlights from the event. And speaking of [un]prompted, we also discuss one of the biggest announcements to come out of the event: the Zero Day Clock. This coalition of experts is arguing that we need to radically rethink vulnerability management in the face of plummeting time-to-exploit values for new vulnerabilities.  Among their more controversial demands: holding software makers liable for flaws and building more disposable architecture. Then we talk about some notably nasty AI agent behavior, including manipulating prescriptions and writing mean blog posts about human users. Finally, we round out the week with a discussion of burnout among cybersecurity pros. We’re working, on average, 10 overtime hours per week. It’s exhausting—and really, really bad for security. All that and more on Security Intelligence. 00:00 -- Introduction 01:26 -- Report back from [un]prompted  09:07 -- The zero day collapse  21:26 -- AI agents harassing humans  31:26 -- Burnout in cybersecurity The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Subscribe to the IBM Think newsletter → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120  #zerodaysexploits #AIsecurity #AIagentsecurity #vulnerabilitymanagement
Can IAM handle AI? Find out → https://www.ibm.com/think/podcasts/security-intelligence A consumer just wanted to control his own personal robot vacuum with a PlayStation controller. He ended up controlling thousands of strangers’ vacuums, too. This week on Security Intelligence, we cover one of the wildest IoT security stories in recent memory: How one user accidentally built an army of 6,700 robot vacuums, and what it means for cybersecurity pros.   Then we turn to TOAD — telephone-oriented attack delivery — a deceptively low-tech social engineering method that's quietly becoming one of attackers' favorite tools. We talk about why it works and what defenders can actually do about an attack that skips most of your defenses entirely. And finally: healthcare's cybersecurity problems. This season of the hit medical drama The Pitt features a hospital-debilitating ransomware attack, which is perhaps one of the most realistic things to ever happen on a show known for its verisimilitude. We explore why ransomware is so prevalent in healthcare, why patching is rare and what it would actually take to change that. 00:00 -- Introduction 0:58 -- Rise of the robot vacuum army 10:02 -- Anthropic debuts Claude Code Security 24:39 -- Thwarting distillation attacks 34:23 -- Why hackers love TOADs 44:14 -- Healthcare’s cybersecurity woes The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.  Explore the Threat Intelligence Index 2026 → https://www.ibm.com/reports/threat-intelligence#sipod  #AIcodesecurity #vibecoding #securitydebt #IoTsecurity #vishing 
AI agents are coming to the enterprise—but can we actually control them? On this bonus episode of Security Intelligence, IBM Fellow and CTO of IBM Security Sridhar Muppidi helps us dig into the rise of agentic AI security risks, from generative AI systems with backend access to autonomous agents that can schedule meetings, call APIs and automate workflows — often with highly privileged access. Traditionally, identity and access management (IAM) has focused on human beings. Then came service accounts and API credentials. Now? We’re facing an explosion of machine identities, including a brand-new class of AI identities that blend human and machine characteristics.  How do we manage identity and access for software systems that behave like human users? Join us for a discussion of: What makes AI identity management different from traditional IAM Why valid account abuse remains one of the top attack vectors — and how AI could amplify it The risks of giving generative AI systems the keys to the kingdom How enterprises should think about AI access control and governance Why there’s still no clear standard for securing AI and non-human identities The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.  Follow the Security Intelligence podcast on your preferred platform: https://www.ibm.com/think/podcasts/security-intelligence
For years, stolen credentials were king—the hacker’s attack vector of choice. Until now. The 2026 IBM X-Force Threat Intelligence Index reveals a surge in the exploitation of public-facing applications, overtaking identity-based attacks as the top initial access vector.  Why are threat actors changing their tactics so dramatically—and what does it mean for defenders?  In this episode of Security Intelligence, panelists Claire Nuñez, Chris Caridi and Joe Xatruch break down the biggest findings from the latest Threat Intelligence Index, plus: Infostealers that grab AI agents’ “souls” Compromised packages that drop AI agents as malware The AI infrastructure flaws we can’t seem to fix Why threat intelligence is so siloed—and what we can do about it All that and more—on Security Intelligence. 00:00 - Intro 1:17 - Threat Intelligence Index 2026  16:22 - Stealing AI agents’ souls  28:03 - AI infrastructure flaws  36:36 - Threat intelligence made human  The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.  Follow the Security Intelligence podcast on your preferred platform → https://www.ibm.com/think/podcasts/security-intelligence Explore the Threat Intelligence Index 2026 → https://www.ibm.com/reports/threat-intelligence#sipod    
Explore the podcast → https://www.ibm.com/think/podcasts/security-intelligence Valentine’s Day might be over, but love is in the air. The love a scammer has for their victim’s wallet, that is. In this special episode of Security Intelligence, host Matt Kosinski sits down with Claire Nuñez, Suja Viswesan and Dave Bales to break down how modern romance scams actually work: from the “wrong number” text that starts an innocent chat, to long-con “pig butchering” schemes that use emotion, trust and time to extract money — often through crypto investment bait. The panel explains why anyone can fall for these scams, how breaches and public records can help scammers build convincing victim profiles and how AI is making the problem worse. Finally, the team gets practical: how to talk to a loved one who may be caught in a scam, how to remove stigma so people report faster and what organizations can do when a “personal” scam becomes a corporate risk. Key takeaways: Don’t respond to unknown numbers, treat online “investment opportunities” as a red flag and remember: if this happened to you, you’re not alone.  The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Subscribe to the IBM Think newsletter → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120
Explore the podcast → https://www.ibm.com/think/podcasts/security-intelligence Are enterprises moving too fast with AI—and breaking security in the process? In this episode of Security Intelligence, host Matt Kosinski is joined by Sridhar Muppidi, Nick Bradley and Jeff Crume to unpack a pivotal moment in cybersecurity. The panel dives into the rapid rise of AI agents and the growing risks of shadow AI in the enterprise, comparing open-source agent platforms like OpenClaw with proprietary models such as Claude Opus 4.6 and its new agent teams. We explore how speed-first AI adoption, unsecured agent implementations and weak separation of duties are creating new attack surfaces—and why executives may be unintentionally fueling the problem. The conversation also examines the recent Notepad++ supply chain breach as a warning sign of broader software inventory and supplier risk failures, and analyzes DragonForce’s attempt to reinvent ransomware as a scalable cartel business. Along the way, we keep returning to a key theme: Have we optimized for velocity at the expense of security? 00:00 -- Intro 01:18 -- OpenClaw vs. Claude Opus 4.6 15:05 -- Move fast. Break security? 27:29 -- Notepad++ breach 38:55 -- DragonForce ransomware cartel  The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Subscribe to the IBM Think newsletter → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120 #OpenClaw #ClaudeOpus #shadowAI #AIagentsecurity
OpenClaw and Moltbook are extremely cool. They're also extremely dangerous. And they tell us just how far AI agent security has to go. In this episode of Security Intelligence, Dave McGinnis, Seth Glasgow and Evelyn Anderson unpack how locally run AI agents are becoming a brand-new attack surface, and why defenders may be underestimating the risks. From misconfigured agent databases leaking API keys, to malicious “skills” that can quietly hijack trusted systems, we explore what happens when powerful AI tools are treated like just another app. We also dig into a growing signal problem across cybersecurity: Why AI-generated “slop” is overwhelming bug bounty programs. Why NIST may stop enriching vulnerabilities in the National Vulnerability Database. Along the way, our panel debates a deeper question: Is AI a gift or a curse for security pros?  All that and more on Security Intelligence. 00:00 - Intro 01:03 - OpenClaw and the AI agent attack surface 16:49 - Will AI slop end bug bounties? 26:49 - Big changes to NIST’s NVD 35:27 - The problem with vibe coded malware The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Subscribe for more AI and cybersecurity news → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120 Explore the podcast → https://www.ibm.com/think/podcasts/security-intelligence
Do you think you could get scammed by a chatbot? Neither did IBM Chief People Hacker Stephanie Carruthers—until she went toe to toe with one. In this episode of Security Intelligence, we take you inside the John Henry Competition at DEF CON 2024, where Carruthers competed with an AI-powered vishing bot to see who was the better con artist. The results just might surprise you. Along the way, we explore how generative AI is transforming social engineering, why vishing and voice cloning attacks are surging and what it all means for defenders who’ve spent years training people to spot phishing emails—but not phone calls that sound exactly like their boss. All that and more—on Security Intelligence. The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.  Follow the Security Intelligence podcast on your preferred platform: https://www.ibm.com/think/podcasts/security-intelligence 
AI-generated malware has officially arrived. But does it matter all that much? This week on Security Intelligence, Suja Viswesan, Dave Bales and Dustin Heywood join us to discuss VoidLink, which might just be the first thoroughly documented case of a malware framework generated with significant AI help. The question is: What really changes when malware is no longer the handiwork of human hackers? We also explore the World Economic Forum’s Global Cybersecurity Outlook 2026, where CEOs and CISOs are split on what they fear most: cyber fraud or ransomware? Then we cover the debate over data protection vs. service resilience, and we dig into the takedown of RedVDS, a major player in the cybercrime-as-a-service supply chain. Finally, we reflect on the 40th anniversary of “The Hacker Manifesto,” asking what’s changed—and what hasn’t—in hacker culture. All that and more on Security Intelligence. 00:00 -- Introduction 01:40 -- CEOs vs. CISOs: 2026 cyberthreats  11:10 -- VoidLink: Documented AI malware  19:28 -- Are we too worried about our data?  27:28 -- Cybercrime supply chains  34:05 -- 40 years of hacking culture  The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.  Explore the podcast → https://www.ibm.com/think/podcasts/security-intelligence Learn more about cybersecurity → https://www.ibm.com/think/podcasts/techsplainers#tabs-fw-44e285b2cc-item-df35f5fbab-tab
AI has changed the speed of cyberattacks. But it hasn’t changed the most important variable: people. In this episode of Security Intelligence, panelists Jake Paulson, Stephanie Carruthers and Matt Cerny dig into how AI-driven threats—phishing, deepfakes and disinformation—are reshaping the cyberthreat landscape. Organizations, too, are adopting AI tools to help detect these attacks. But even in the era of AI, people are ultimately our first and last lines of defense. And all too often, we don’t give them what they need to succeed. How do we help human beings adapt to the increased speed, scale and impact of AI threats? The answer, our panel argues, isn’t more checkbox training or prettier slides. It’s realistic, immersive training that builds muscle memory, confidence under stress and decision-making skills for moments when things don’t go according to plan. We talk about: 00:00 -- Introduction 01:48 -- AI phishing, deepfakes and modern social engineering tactics 09:19 -- Why humans are still the primary attack surface—and the strongest defense 17:03 -- The difference between tabletop exercises and cyber range training 22:00 -- How immersive simulations prepare teams for real incident response pressure 42:00 -- Why preparedness matters more than awareness in the age of AI attacks Because when AI accelerates attacks, training determines the outcome. All that and more on Security Intelligence. The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. #cybersecuritytraining #AIcyberthreats #AIphishing #AIcyberattacks Explore the podcast → https://www.ibm.com/think/podcasts/security-intelligence Learn more about the cyber range → https://www.ibm.com/think/topics/cyber-range Discover how AI training can support your business → https://www.ibm.com/services/xforce-cyber-range
Between LockBit, RansomHub and BlackSuit, law enforcement racked up some big wins against ransomware gangs last year. So why aren’t the attacks letting up?   In this episode of Security Intelligence, panelists JR Rao, Jeff Crume and Michelle Alvarez unpack what the state of ransomware in 2025 really looked like, and why attacks haven’t slowed down as much as we might hope.    Then, we turn to identity security and cloud breaches as we consider the striking case of Zestix, the lone threat actor linked to breaches at 50 global enterprises. And all he needed were some passwords.    From there, we look at what the future of hacking might hold. Palo Alto’s Wendi Whitmore issued a warning about how AI agents could become devastating insider threats, and security researchers at GEEKCon demonstrated how AI-powered robots can be hijacked using voice commands alone, turning prompt injection into a physical-world security risk.   It’s a niche scenario today. But is it also a preview of what happens when AI, robotics and operational technology collide?  Listen to Security Intelligence to find out.   00:00 -- Introduction 01:05 -- Ransomware in 2026  09:26 -- Zestix linked to 50 hacks  18:42 -- AI agents as insider threats  31:20 -- Hacking humanoid robots  The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.  Subscribe to the IBM Think newsletter → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120 Explore the podcast → https://www.ibm.com/think/podcasts/security-intelligence
Explore the podcast → https://www.ibm.com/think/podcasts/security-intelligence Say your cloud storage service gets hacked. Say the attackers broke in by exploiting a vulnerability in an open-source library your organization used to build the service. Who owns that vulnerability?  Microsoft is trying to clear some of the smog obscuring the software supply chain by expanding its bug bounty program to include some third-party code that affects its services. In this episode of Security Intelligence, panelists Jeff Crume, Nick Bradley and Claire Nuñez discuss what that move means for cybersecurity responsibility models going forward.  We also analyze how a three-year-old LastPass breach is still giving cybercriminals new credentials to steal. Turns out “harvest now, decrypt later” isn’t just a quantum concern. Plus: OpenAI fights prompt injections with an automated, AI-powered red team, hackers have a new tool to make ClickFix attacks even easier and we share the New Year’s resolutions we hope organizations will make in 2026. All that and more on Security Intelligence. 00:00 -- Introduction 1:11 -- Cybersecurity resolutions 6:51 -- Microsoft’s new bug bounties 14:00 -- The LastPass breach’s long tail 26:07 -- Automated red teaming 33:22 -- ClickFix-as-a-service The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Subscribe for AI and security updates → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120
Why does it cost so much more to get hacked in the United States than anywhere else in the world? In this special bonus episode of Security Intelligence, we sit down with Michelle Alvarez, Manager of Strategic Threat Analysis at IBM X-Force, for a deep dive into IBM’s 2025 Cost of a Data Breach report—and one of its most surprising findings: global breach costs are falling, but US breach costs just hit a record high. What’s driving the gap? In this episode, we unpack: Why faster detection and containment are lowering breach costs globally Why shadow AI is quietly increasing breach risk and driving up response costs Why regulatory fines, global operations and organizational scale hit US companies especially hard And how supply chain breaches, cloud complexity and shadow IT amplify the damage We also explore a critical inflection point ahead: AI isn’t a major attack target yet—but once adoption crosses key market concentration thresholds, attackers will follow the ROI. All that and more on Security Intelligence The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.  Follow the Security Intelligence podcast on your preferred platform: https://www.ibm.com/think/podcasts/security-intelligence Read the Cost of a Data Breach report: https://ibm.biz/BdbkLt
Explore the podcast → https://www.ibm.com/think/podcasts/security-intelligence In this special year-end episode of Security Intelligence, we reflect on 2025, a year of new attack methods (ClickFix), new vulnerabilities (vibecoding) and new worries on the horizon (shadow agents). From hijacked AI agents to massive supply chain breaches, 2025 forced security leaders to confront a sobering reality: trust might just be our biggest attack surface.  Join hosts Matt Kosinski and Patrick Austin for a jam-packed look back at the biggest cybersecurity trends and cyberattacks of 2025, the lessons we can learn from them and what the road ahead looks like. Featuring: 00:00 – Introduction 4:10 – AI and data security with Michelle Alvarez and Jeff Crume 22:42 – Biggest cyberattacks of 2025 with Dave Bales and Nick Bradley 38:18 – Major lessons, innovations and failures of cybersecurity in 2025 with Suja Viswesan and Sridhar Muppidi All that and more on Security Intelligence. The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.  Learn more about cybersecurity → https://www.ibm.com/think/security
AI browsers are neat—but are they more trouble than they’re worth?  In this episode of Security Intelligence, Austin Zeizel, Evelyn Anderson and Ryan Anschutz discuss Gartner’s recent advisory warning organizations to ban AI browsers from the workplace for the time being. Is there anything we can do to make them safe enough to use? And that leads to a broader conversation about the relationship between AI model providers and the cybersecurity community. In the wake of some high-profile attacks using AI models—like the spy ring Anthropic busted—cybersecurity pros are split on whether AI vendors are pulling their weight in threat intel circles. This one has it all: spam bombing, social engineering and malicious virtual machines. All that and more on Security Intelligence.  00:00 -- Introduction 01:14 -- Gartner: No AI browsers at work 13:38 -- Should AI vendors share threat intel? 23:11 -- MITRE’s top 25 most dangerous software flaws 33:15 -- Are social logins safe? 41:54 -- Bring-your-own-VM attacks The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Learn more about cybersecurity → https://www.ibm.com/think/security
Just how big a deal is React2Shell? Depending on who you ask, it’s either a Log4Shell-level event or just another average, everyday application security vulnerability. Patch and move on. This week, on Security Intelligence, panelists Sridhar Muppidi, Claire Nuñez and Ian Molloy weigh in on the contentious debate React2Shell has sparked. However it shakes out, one thing is for sure: The response to this vulnerability has been anything but typical. We also dive into: 13:01 -- Whether malicious LLMs like WormGPT live up to the hype 23:40 -- How hackers can lock you out of your Gmail account by changing your age 34:09 -- What happens when two different threat actors attack you at the same time 42:37 -- Why cybersecurity pros should care about solar radiation grounding 6,000 flights All that and more on Security Intelligence.  The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.  Explore the podcast → https://www.ibm.com/think/podcasts/security-intelligence Subscribe for AI and security updates → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120 
Being a malware reverse engineer isn’t always glamorous work. You spend a lot of time digging through junk emails.   But when you find something in there—well, that’s a whole different story.   On this episode of Security Intelligence, X-Force Malware Reverse Engineer Raymond Joseph Alfonso tells us about the time he discovered a curious new malware loader in the honeypot. And that leads to a bigger conversation about how hackers hide malicious code from view—and some of the new techniques they’re cooking up to stay hidden.  The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.  Learn more about QuirkyLoader → https://www.ibm.com/think/x-force/ibm-x-force-threat-analysis-quirkyloader   Follow the Security Intelligence podcast on your preferred platform → https://www.ibm.com/think/podcasts/security-intelligence 
Do you think you’re too smart to fall for a Black Friday scam? Generative AI might knock you down a few pegs. On this episode of Security Intelligence, host Matt Kosinski and panelists Suja Viswesan, Dave McGinnis and Nick Bradley discuss how threat actors are using AI to turbocharge holiday scam season. Plus: - IBM X-Force makes malware research tools public - The dark web job market is thriving - AI fraud schemes are getting quite elaborate And the story of an enterprising insider threat who tried to turn his employer’s wind turbines into cryptojacking machines. Spoiler: He got caught. 00:00 – Introduction 02:45 – Holiday scam season 13:37 – X-Force malware research tools 19:47 – Dark web jobs report 24:41 – Factory finds an AI fraud ring 31:48 – Cryptojacking wind turbines The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Learn more about cybersecurity → https://www.ibm.com/think/security Explore the podcast → https://www.ibm.com/think/podcasts/security-intelligence Learn more about the X-Force Malware Threat Research GitHub → https://www.ibm.com/think/x-force/introducing-x-force-malware-threat-research-public-github-repository