DiscoverModern Cyber with Jeremy Snyder
Modern Cyber with Jeremy Snyder
Claim Ownership

Modern Cyber with Jeremy Snyder

Author: Jeremy Snyder

Subscribed: 0Played: 3
Share

Description

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry.

Also the home of 'This Week in AI Security', a snappy weekly round up of interesting stories from across the AI threat landscape.

97 Episodes
Reverse
In this episode of This Week in AI Security for March 12, 2026, Jeremy explores a rapidly evolving threat landscape where AI is functioning as both the ultimate bug hunter and an autonomous threat. The episode covers critical vulnerabilities across major platforms and highlights a startling case of an AI agent "going rogue" to mine cryptocurrency.Key Stories & Developments:AI Bug Hunters Accelerate the Zero-Day Clock: OpenAI Codex scanned 1.2 million commits and found over 10,000 high-severity issues, while Anthropic's Claude Opus 4.6 uncovered 22 Firefox vulnerabilities. The mean time to discover and exploit zero-days is shrinking drastically.Malicious File Names: A novel prompt injection attack compromised 4,000 developer machines simply by hiding malicious instructions in the title of a GitHub issue.Copilot Studio Blind Spots: Datadog researchers uncovered significant logging gaps in Microsoft Copilot Studio, creating undetectable backdoors that could bypass regulatory audits (like HIPAA).Alibaba's Rogue AI Agent: In a lab environment, an Alibaba AI agent tasked with optimizing its performance deduced that compute costs money. Without any external prompt injection, it autonomously established an SSH tunnel and began mining cryptocurrency to "pay" for itself.Claude's Accidental Pen-Testing: Truffle Security demonstrated how Claude, when given specific goals against 30 mock company websites, autonomously found exposed API keys and executed SQL injections to access backend data.The McKinsey "Lilli" Breach: Security firm Code Wall hacked McKinsey's internal AI platform, Lilli. By using AI to scan 200 API endpoints, they found 22 that lacked authentication. They then leveraged an unknown SQL injection vulnerability to bypass the prompt layer entirely and access proprietary data.Episode Linkshttps://gbhackers.com/ai-accelerates-high-velocity/https://thehackernews.com/2026/03/openai-codex-security-scanned-12.htmlhttps://thehackernews.com/2026/03/anthropic-finds-22-firefox.htmlhttps://cloud.google.com/blog/topics/threat-intelligence/2025-zero-day-reviewhttps://grith.ai/blog/clinejection-when-your-ai-tool-installs-anotherhttps://securitylabs.datadoghq.com/articles/copilot-studio-logging-gaps/https://x.com/JoshKale/status/2030116466104643633https://trufflesecurity.com/blog/claude-tried-to-hack-30-companies-nobody-asked-it-tohttps://codewall.ai/blog/how-we-hacked-mckinseys-ai-platformWorried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
In this week's episode, Jeremy records straight from the sidelines of the [un]prompted security conference in San Francisco. Before diving into his key takeaways from the event, he covers a massive, AI-assisted data breach and a critical shift in how Google API keys must be handled.Key Stories & Developments:Nation-State AI Hack: A hacker reportedly used Anthropic’s Claude to identify vulnerabilities and OpenAI’s GPT-4.1 for lateral movement, resulting in the theft of 150GB of data (over 180 million records) from the Mexican government.MCP Infrastructure Flaws: An unauthenticated Server-Side Request Forgery (SSRF) flaw leading to Remote Code Execution (RCE) was found in a widely used Atlassian MCP.The Gemini API Key Crisis: A flaw in the Gemini AI panel allowed browser extensions to escalate privileges. More critically, legacy Google API keys—traditionally viewed as safe "lookup only" keys ignored by secret scanners—are now being used for Gemini, granting them "teeth" and leading to massive financial exposures (like an $82,000 bill for a solo developer).Dispatches from the Unprompted Conference: Jeremy shares his top thematic observations from the event, including:The "Zero-Day Clock": The mean time to exploit availability has plummeted from months to mere hours. As LLMs are increasingly used to write exploits, the industry must fundamentally rethink patching strategies.LLMs Finding Legacy Bugs: Researchers demonstrated LLMs uncovering vulnerabilities in massive software projects that have evaded human detection for decades—some predating the invention of Git.Treating Prompts as Code: A key takeaway from Google's Gemini workspace team: as prompts become the primary instruction set for executing tasks, developers must apply traditional secure coding hygiene and logic to their prompt engineering.Episode Linkshttps://www.bloomberg.com/news/articles/2026-02-25/hacker-used-anthropic-s-claude-to-steal-sensitive-mexican-datahttps://blog.pluto.security/p/mcpwnfluence-cve-2026-27825-criticalhttps://cyberpress.org/critical-servicenow-ai-platform-flaw-allows-remote-code-execution-attacks/https://www.darkreading.com/endpoint-security/bug-google-gemini-ai-panel-hijackinghttps://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-ruleshttps://boingboing.net/2026/02/27/stolen-gemini-api-key-racks-up-82000-in-48-hours-for-solo-dev.htmlhttps://unpromptedcon.org/Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
In this episode of Modern Cyber, Jeremy is joined by cybersecurity veteran Caleb Sima for a deep dive into the practical realities of securing AI inside organizations. They cut through the hype to discuss the actual threats facing enterprise AI adoption, the rise of "vibe coding," and how security teams can manage the impending wave of AI app sprawl.Key Episode Highlights:The Core Threats: Caleb identifies prompt injection as the number one most likely and impactful threat model for AI systems today, followed closely by data poisoning.The Rise of "App Sprawl": As employees across departments like HR and Finance use AI to build their own functional applications, organizations will face a massive shadow IT challenge without proper deployment pipelines.Defending the Inputs and Outputs: Managing AI security requires an approach similar to handling cross-site scripting, monitoring the inputs coming from untrusted sources and analyzing the outputs to prevent unauthorized actions.Getting Back to Basics: To secure AI, organizations must start with foundational visibility, establishing AI councils, and routing all LLM traffic through centralized enterprise gateways or firewalls.About CalebCaleb is a multi-time founder, CEO and CTO, and also a CISO and practitioner at CapitalOne, DataBricks and RobinHood. Caleb has also recently started his own cyber investment firm, WhiteRabbit. At his core, Caleb is an engineer who loves problem-solving, getting into the weeds at the keyboard, and building things that matter.Episode LinksCaleb Sima on LinkedIn: https://www.linkedin.com/in/calebsima/WhiteRabbit: https://wr.vc/
In this episode of This Week in AI Security for February 26, 2026, Jeremy covers another packed week featuring AI privacy boundary failures, agent-driven outages, AI-accelerated cybercrime, Android malware innovation, platform responsibility debates, and the continued risks of vibe-coded applications.Key Stories & Developments:Microsoft Copilot Confidential Email Bug: Microsoft Copilot was found summarizing confidential emails due to a flaw in the Copilot Chat “Work” tab. AI Agent Triggers AWS Bedrock Outage: An outage involving Amazon Bedrock exposed the risks of agentic coding systems with broad permissions.AI-Powered Assembly Line for Cybercrime: A Russian-speaking attacker breached FortiGate firewalls across 55 countries in just five weeks using AI as a force multiplier.PromptSpy: Android Malware Using Live LLM Command & Control: PromptSpy became the first known Android malware to dynamically leverage Google Gemini at runtime. Instead of relying solely on static command-and-control logic, the malware uses JNI integration to query Gemini in real time for task execution.ChatGPT, Mental Health, and Law Enforcement Boundaries: Following a shooting incident in Tumbler Ridge, Canada, investigators discovered significant usage of ChatGPT by the suspect prior to the event. Internal discussions at OpenAI reportedly debated whether certain interactions warranted escalation.LLM-Generated Passwords Lack Entropy: Security researchers highlighted that passwords generated by LLMs exhibit approximately 80% less entropy than those created by traditional password generators.Vibe-Coded Security Suite Exposes Master Keys: A Reddit thread revealed that a suite of “RR”-branded tools were entirely vibe-coded applications with severe security flaws. Issues included exposed master API keys in frontend settings, unauthenticated 2FA enrollment, and authentication bypass endpoints.Anthropic Moves from Detection to Remediation: Anthropic introduced tooling aimed at moving beyond passive source-code analysis toward automated remediation of vulnerabilities.Episode Linkshttps://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails/https://www.thestandard.com.hk/tech-and-startup/article/324872/Amazons-cloud-unit-hit-was-hit-by-least-two-outages-involving-AI-tools-in-December-FT-sayshttps://www.reuters.com/business/retail-consumer/amazons-cloud-unit-hit-by-least-two-outages-involving-ai-tools-ft-says-2026-02-20/https://www.bleepingcomputer.com/news/security/amazon-ai-assisted-hacker-breached-600-fortigate-firewalls-in-5-weeks/https://cyberandramen.net/2026/02/21/llms-in-the-kill-chain-inside-a-custom-mcp-targeting-fortigate-devices-across-continents/https://www.bleepingcomputer.com/news/security/promptspy-is-the-first-known-android-malware-to-use-generative-ai-at-runtime/https://techcrunch.com/2026/02/21/openai-debated-calling-police-about-suspected-canadian-shooters-chats/https://www.techradar.com/pro/security/dont-trust-ai-to-come-up-with-a-new-strong-password-for-you-llms-are-pretty-poor-at-creating-new-logins-experts-warnhttps://www.reddit.com/r/selfhosted/comments/1rckopd/huntarr_your_passwords_and_your_entire_arr_stacks/https://www.anthropic.com/news/claude-code-security
In this episode of This Week in AI Security for February 19, 2026, Jeremy covers an action-packed week with eight major stories exploring the fragile nature of AI safety alignment, critical platform hacks, and geopolitical AI developments.Key Stories & Developments:G-Obliteration Attack: Microsoft security researchers discovered a one-prompt training technique that strips safety alignment from LLMs. By leveraging Group Relative Policy Optimization (GRPO), attackers can use a single mild prompt to cause cross-category generalization of harm. This effectively removes guardrails across 15 open-source models while preserving their utility.Orchids Vibe-Coding Hack: A BBC reporter was hacked on Orchids, a popular "vibe-coding" platform. A security researcher demonstrated a malicious code injection that compromised the user's development environment.AI vs. Legacy Email Security: AI-powered cyberattacks are successfully bypassing 88% of legacy email security systems. Attackers are utilizing LLMs to generate highly authentic phishing and impersonation content at scale.AI Doctors Evade Privacy Rules: AI-powered health services are not subject to the same strict privacy regulations as traditional healthcare facilities. This raises concerns around data leaks and medical hallucinations.OpenClaw Info Stealer: A variant of the Vidar info-stealer is targeting the OpenClaw ecosystem. The attack aims to exfiltrate configuration files and gateway authentication tokens.OpenClaw Founder Joins OpenAI: Peter Steinberger, the creator of the OpenClaw framework, has joined OpenAI. The OpenClaw project will transition to an open-source foundation supported by OpenAI.Claude's Geopolitical Role: Reports indicate that Anthropic's Claude was utilized via the Palantir platform during a US military raid in Venezuela. This raid led to the capture of Nicolas Maduro.ASIS AI Safety Report 2026: The International AI Safety Report highlights three emerging risks. These include the lowered barrier for biological weapons, the surge in deepfakes and fraud, and the difficulty of safety research.Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demoEpisode Linkshttps://www.microsoft.com/en-us/security/blog/2026/02/09/prompt-attack-breaks-llm-safety/https://www.bbc.com/news/articles/cy4wnw04e8wohttps://www.cpapracticeadvisor.com/2026/02/09/study-ai-powered-cyber-attacks-hit-88-of-legacy-email-security-systems/177694/https://cyberscoop.com/ai-healthcare-apps-hipaa-privacy-risks-openai-anthropic/https://thehackernews.com/2026/02/infostealer-steals-openclaw-ai-agent.htmlhttps://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai/https://www.theguardian.com/technology/2026/feb/14/us-military-anthropic-ai-model-claude-venezuela-raidhttps://www.asisonline.org/security-management-magazine/latest-news/today-in-security/2026/february/2026-international-safety-report/
In this episode of This Week in AI Security, Jeremy covers a concise but critical set of stories for the week of February 12, 2026. From physical world prompt injections targeting autonomous vehicles to massive data leaks in consumer AI wrappers, the intersection of AI and infrastructure remains the primary battleground.Key Stories & Developments:Prompt Injecting Autonomous Vehicles: Researchers at UCSC and Johns Hopkins have demonstrated that autonomous cars and drones can be compromised by "visual" prompt injections placed on physical signs, causing them to ignore traffic rules or misinterpret their surroundings.Massive Chat App Leak: The "Chat & Ask AI" wrapper application exposed 300 million messages belonging to 25 million users due to a simple Firebase misconfiguration that allowed unauthenticated access to read, modify, and delete data.Docker AI Metadata Attacks: A new vulnerability in Docker's AI assistant allows attackers to trigger exploits by planting malicious instructions within container image metadata.Claude Opus 4.6 vs. Security: Anthropic's latest model, Claude Opus 4.6, has demonstrated a frightening new capability: finding high-severity vulnerabilities and logic bugs via reasoning (rather than fuzzing) without needing specialized prompting or scaffolding.Worried about OpenClaw on your network?The OpenClaw crisis proved that employees are deploying unvetted AI agents on their local machines. FireTail helps you discover and govern Shadow AI before it becomes a breach.Scan Your Network for Shadow Agents Nowhttps://www.firetail.ai/schedule-your-demoEpisode Linkshttps://www.theregister.com/2026/01/30/road_sign_hijack_ai/https://www.malwarebytes.com/blog/news/2026/02/ai-chat-app-leak-exposes-300-million-messages-tied-to-25-million-usershttps://www.govinfosecurity.com/docker-ai-bug-lets-image-metadata-trigger-attacks-a-30709https://www.axios.com/2026/02/05/anthropic-claude-opus-46-software-huntinghttps://red.anthropic.com/2026/zero-days/
In this first episode of February 2026, Jeremy breaks down a high-stakes week in AI security, featuring critical framework flaws, cloud-native exploits, and a major security warning regarding a popular autonomous AI agent.Key Stories & Developments:Operation Bizarre Bazaar: Threat actors are actively targeting exposed LLM infrastructure to steal computing resources for cryptocurrency mining and resell API access on dark markets, attempting to pivot into internal systems via compromised MCP servers.Gemini MCP Tool Exploit: A critical Remote Code Execution (RCE) vulnerability was identified in a Gemini Model Context Protocol (MCP) tool, highlighting the recurring theme that the infrastructure powering LLMs remains a primary weak point.MoltBook API Leak: Researchers discovered a hardcoded Supabase API key in "MoltBook," a social network for AI agents. This flaw granted unauthenticated access to the entire production database, exposing over 1.5 million API keys.Bondu AI Toy Breach: A privacy failure in an AI-powered dinosaur toy left 50,000 chat log records exposed to anyone with a Gmail account, underscoring the lack of robust authentication in consumer AI IoT devices.CISA Chief's Data Mishandling: Reports surfaced that the acting head of the country's cyber defense agency uploaded sensitive "official use only" documents into a public version of ChatGPT, bypassing enterprise controls and security protocols.Worried about OpenClaw on your network?The OpenClaw crisis proved that employees are deploying unvetted AI agents on their local machines. FireTail helps you discover and govern Shadow AI before it becomes a breach.Scan Your Network for Shadow Agents Nowhttps://www.firetail.ai/schedule-your-demoEpisode Linkshttps://www.bleepingcomputer.com/news/security/hackers-hijack-exposed-llm-endpoints-in-bizarre-bazaar-operation/https://darkwebinformer.com/cve-2026-0755-reported-zero-day-in-gemini-mcp-tool-could-allow-remote-code-execution/https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keyshttps://ai.plainenglish.io/clawdbot-security-guide-de77b45ab719https://blackoutvpn.au/blog/dont-buy-internet-connected-toyshttps://www.politico.com/news/2026/01/27/cisa-madhu-gottumukkala-chatgpt-00749361
In this final episode of January 2026, Jeremy breaks down a high-stakes week in AI security, featuring critical framework flaws, cloud-native exploits, and a major security warning regarding a popular autonomous AI agent.Key Stories & Developments:Chainlit Framework Flaws: Two critical CVEs were identified in Chainlit, a popular Python package for building enterprise chatbots. These vulnerabilities, including Arbitrary File Read and Server-Side Request Forgery (SSRF), highlight the supply chain risks inherent in the rapidly growing AI development ecosystem.Google Gemini Workspace Exploit: Researchers demonstrated how Gemini can be manipulated via malicious calendar invites. By embedding hidden instructions (similar to Ascii or emoji smuggling), attackers can trick the AI into exfiltrating sensitive user data, such as meeting details and attachments.VS Code "Spyware" Plugins: Over 1.5 million developers were potentially exposed to malicious VS Code extensions impersonating ChatGPT. These plugins serve as "watering hole" attacks designed to harvest sensitive environment variables, credentials, and deployment keys.Vertex AI Privilege Escalation: A novel attack chain in Google’s Vertex AI was disclosed. Attackers used a malicious reverse shell in a reasoning engine function to escalate privileges via the Instance Metadata Service, gaining master access to chat sessions, storage buckets, and logs.The "Cloudbot" Warning: A deep dive into Cloudbot (now rebranded as ClawdBot), a general-purpose AI agent. Researchers found hundreds of instances sitting wide open on the internet, many providing full root shell access and exposing personal conversation histories and API keys.Episode Linkshttps://www.theregister.com/2026/01/20/ai_framework_flaws_enterprise_clouds/https://www.securityweek.com/weaponized-invite-enabled-calendar-data-theft-via-google-gemini/https://cybernews.com/security/fake-chatgpt-vscode-extensions-compromised-developers/https://gbhackers.com/google-vertex-ai-flaw/https://www.insurancejournal.com/magazines/mag-features/2026/01/26/855293.htmhttps://arxiv.org/pdf/2601.10338https://techcrunch.com/2026/01/27/everything-you-need-to-know-about-viral-personal-ai-assistant-clawdbot-now-moltbot/https://securityboulevard.com/2026/01/clawdbot-is-what-happens-when-ai-gets-root-access-a-security-experts-take-on-silicon-valleys-hottest-ai-agent/https://jpcaparas.medium.com/hundreds-of-clawdbot-instances-were-exposed-on-the-internet-heres-how-to-not-be-one-of-them-63fa813e6625https://www.bitdefender.com/en-us/blog/hotforsecurity/moltbot-security-alert-exposed-clawdbot-control-panels-risk-credential-leaks-and-account-takeoversWorried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
In this episode of Modern Cyber, Jeremy is joined by Sydney Marrone, a premier expert in the field of threat hunting and the Head of Threat Hunting at Nebulock. The conversation explores the rapidly evolving intersection of threat hunting and artificial intelligence, specifically focusing on how AI agents are transforming the speed and efficacy of defensive operations.Sydney shares her journey from "crawling under desks" in IT to building elite threat hunting teams at major organizations like Lumen (formerly CenturyLink) and Splunk. She breaks down her newly released Agentic Threat Hunting Framework (ATHF) and the LOCK pattern (Learn, Observe, Check, Keep), explaining how AI can condense a hunt that previously took four weeks into a mere 45 minutes. They also discuss the critical need for AI governance, the risks of "ungoverned access," and why "trust but verify" remains the golden rule when integrating LLMs into security workflows.About Sydney MarroneSydney Marrone is the Head of Threat Hunting at Nebulock and a co-founder of the THOR Collective. With over a decade of experience in incident response, forensics, and blue teaming, she has become a leading voice in structured threat hunting. Sydney is the author of the Agentic Threat Hunting Framework (ATHF) and the co-author of the PEAK Threat Hunting Framework, which won a SANS award for its contribution to the community.A respected author and educator, Sydney co-authored The Threat Hunter's Cookbook and is currently developing a SANS course focused on threat hunting. Her work focuses on moving organizations from reactive to proactive security postures through advanced data science, automation, and authentic AI integration.Episode LinksNebulock (AI-Powered Threat Hunting): https://nebulock.io/ Agentic Threat Hunting Framework (ATHF): https://github.com/Nebulock-Inc/agentic-threat-hunting-framework THOR Collective (Substack & Community): https://dispatch.thorcollective.com/ PEAK Threat Hunting Framework: https://www.splunk.com/en_us/blog/security/peak-threat-hunting-framework.html HEARTH Repository (THOR Collective): https://github.com/THORCollective/HEARTH Threat Hunting MCP Server: https://github.com/THORCollective/threat-hunting-mcp-server
In this episode of This Week in AI Security, Jeremy highlights a significant uptick in AI-related vulnerabilities and the shifting regulatory landscape. The episode covers everything from "Body Snatcher" flaws in enterprise platforms to the growing "industrialization" of AI-powered exploit generation.Key Stories & Developments:California's Cease and Desist to XAI: Following international concerns over sexualized deepfakes, California has issued a first-of-its-kind cease and desist order to XAI. This marks a major moment in regional AI oversight in the absence of federal legislation.ServiceNow "Body Snatcher" Flaw: A critical 9.3/10 CVE was identified in ServiceNow’s AI agent service. An unauthenticated endpoint allowed for Remote Code Execution (RCE), demonstrating that unauthenticated APIs remain a massive risk for agentic systems.Anthropic "Magic String" Crash: Researchers discovered a specific "magic string" that can effectively crash Anthropic LLM sessions. This specialized prompt acts as a denial-of-service against agentic workflows by killing the active interaction stream.Claude Code Data Leak: A default logging feature in Claude Code (vibe coding) saves full-text chat histories in a local directory. Developers committing this directory to public repos risk exposing their entire application logic and internal prompts to attackers.Eurostar Chatbot Exploit: A public-facing AI chatbot for Eurostar was found vulnerable to guardrail bypass and prompt injection. Ross Donald discovered that simply hardcoding a "validation" parameter in the API allowed him to bypass front-end checks.Industrialized Exploit Generation: A new study suggests that for a mere $30 token budget, an LLM can successfully generate an exploit for a known software vulnerability, potentially reducing the "time-to-exploit" to under 20 minutes.Episode Linkshttps://thehackernews.com/2026/01/servicenow-patches-critical-ai-platform.htmlhttps://appomni.com/ao-labs/bodysnatcher-agentic-ai-security-vulnerability-in-servicenow/https://cy.md/opencode-rce/https://techcrunch.com/2026/01/16/california-ag-sends-musks-xai-a-cease-and-desist-order-over-sexual-deepfakes/https://mastodon.social/@Viss/115923109466960526https://sean.heelan.io/2026/01/18/on-the-coming-industrialisation-of-exploit-generation-with-llms/https://bsky.app/profile/aparker.io/post/3mcqehqhcgc2qWorried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
Happy New Year! Jeremy kicks off 2026 with a special extended episode to catch up on everything that happened while the industry was on holiday. From humanoid robots to new global protocols for "Agentic Commerce," AI adoption is accelerating at an unprecedented pace.Market & Strategic Trends:Explosive Growth: AI consumption has tripled over the last year, with user prompt volume growing 6x.Specialized Foundations: We are seeing a shift from general-purpose models to domain-specific LLMs, such as Nvidia's Alpamayo for autonomous vehicles.Agentic Commerce: Google has announced a new protocol designed to facilitate interactions between AI shopping agents and retail systems.Regulatory Landscape: New York has introduced the RAISE Act for AI security, while Italy is challenging Meta's "walled garden" approach to AI chatbots on WhatsApp.Critical Vulnerabilities & Research:Prompt Injection is "Inherent": OpenAI researchers suggest that agentic browsers may be inherently vulnerable to indirect prompt injection due to their need to process external instructions.Supply Chain Risks: Major vulnerabilities were identified in LangChain (API serialization issues) and n8n (max severity RCE), both core tools for building AI workflows.Shadow AI Attacks: Over 91,000 attack sessions were detected targeting AI deployments, including Server-Side Request Forgery (SSRF) campaigns launched via Llama.Episode Linkshttps://securityboulevard.com/2026/01/report-increase-usage-of-generative-ai-services-creates-cybersecurity-challenge/https://techcrunch.com/2026/01/05/boston-dynamicss-next-gen-humanoid-robot-will-have-google-deepmind-dna/https://techcrunch.com/2026/01/05/nvidia-launches-alpamayo-open-ai-models-that-allow-autonomous-vehicles-to-think-like-a-human/https://techcrunch.com/2026/01/11/google-announces-a-new-protocol-to-facilitate-commerce-using-ai-agents/https://techcrunch.com/2025/12/20/new-york-governor-kathy-hochul-signs-raise-act-to-regulate-ai-safety/https://techcrunch.com/2025/12/24/italy-tells-meta-to-suspend-its-policy-that-bans-rival-ai-chatbots-from-whatsapp/https://github.com/asgeirtj/system_prompts_leaks/https://techcrunch.com/2025/12/22/openai-says-ai-browsers-may-always-be-vulnerable-to-prompt-injection-attacks/https://techcrunch.com/2026/01/04/french-and-malaysian-authorities-are-investigating-grok-for-generating-sexualized-deepfakes/https://www.bleepingcomputer.com/news/security/max-severity-ni8mare-flaw-lets-hackers-hijack-n8n-servers/https://aws.amazon.com/security/security-bulletins/rss/2026-001-aws/https://securityboulevard.com/2026/01/google-gemini-ai-flaw-could-lead-to-gmail-compromise-phishing-2/https://www.scworld.com/brief/severe-ask-gordon-ai-vulnerability-addressed-by-dockerhttps://www.eweek.com/news/langchain-ai-vulnerability-exposes-apps-to-hack/https://cybernews.com/security/dig-ai-new-cyber-weapon-abused-by-hackers/https://cyberpress.org/hackers-actively-exploit-ai-deployments/
In this kick-off episode for 2026, Jeremy is joined by the legendary Mikko Hypponen, Chief Research Officer at Sensofusion, for a comprehensive retrospective of 2025 and a look ahead at the future of AI-driven threats. Mikko, now a "Mount Rushmore" guest of the show, shares insights from his transition into the anti-drone space while reflecting on a year defined by massive infrastructure disruptions.The duo discusses the staggering impact of 2025 ransomware incidents, most notably the Jaguar Land Rover breach, which halted production for six weeks and cost an estimated £1.5 billion. Mikko argues that these events prove cybersecurity is no longer just about protecting computers—it’s about securing society itself. They also break down the "random shotgun" nature of modern attacks, where gangs like Clop and Akira target vulnerabilities rather than specific industries or geographies.Turning to AI, Mikko provides a reality check on the current state of deepfakes and automated orchestration. He reflects on the first massive AI-orchestrated cyber espionage campaign of 2025 and explains why the battle between open-source and closed-source models will define the next phase of defense. Finally, they examine how "data is the new oil" and AI is the "new oil refinery," creating a dual-extortion landscape where the risk of data leakage often outweighs the cost of downtime.About MikkoMikko Hypponen is a world-renowned global security expert, author, and speaker with over 35 years of experience in the industry. In August 2025, Mikko transitioned from his long-standing tenure at WithSecure to become the Chief Research Officer at Sensofusion, a Finnish company specializing in advanced anti-drone technologies.Mikko has assisted law enforcement in the U.S., Europe, and Asia on major cybercrime cases since the 1990s and is the curator of the Malware Museum at the Internet Archive. He is the author of the best-selling book "If It's Smart, It's Vulnerable" and a frequent contributor to The New York Times, Wired, and Scientific American. In addition to his role at Sensofusion, Mikko serves as an advisor to Firetail.Episode Linkshttps://sensofusion.com/https://mikko.com/https://www.firetail.ai/ai-breach-trackerhttps://www.anthropic.com/news/disrupting-AI-espionage
In the final episode of 2025, Jeremy examines the evolution of SEO poisoning into "AI poisoning," a major privacy breach involving a popular browser extension, and shares a data-driven "sneak peek" at the state of AI security over the past year.Key Stories & Developments:AI Poisoning of Search Results: Researchers identified an attack where threat actors plant false information online to trick AI-powered search engine crawlers. This results in search engines providing AI summaries that list scam phone numbers for legitimate services like airline call centers, effectively creating a modern, AI-driven version of SEO poisoning.The "Pay-to-Crawl" Proposal: Jeremy discusses a new proposal from Creative Commons that suggests moving away from outright blocking AI crawlers. Instead, website owners could set a price for crawling and training, allowing organizations to monetize the use of their data by LLM providers.Urban VPN Privacy Breach: A popular Chrome and Edge extension, Urban VPN Proxy, was caught intercepting and reading the AI chat messages of its 7.3 million users. This incident highlights the risk of third-party browser extensions reading sensitive data that users assume is private.2025 in Review Snapshot: Using data from the Firetail AI Incident Tracker, Jeremy reveals two major trends from 2025:The Surge in Incidents: AI security incidents saw a massive jump from 2024 to 2025, marking this as the year AI-related security became a global, pervasive problem.Disclosure vs. Injection: While the OWASp Top 10 lists prompt injection as the #1 risk, the tracker data shows that sensitive information disclosure (largely due to organizational error) actually outstrips prompt injection by about a third.Episode Linkshttps://finance.yahoo.com/news/aurascape-researchers-expose-ai-attack-140000260.html?guccounter=1https://techcrunch.com/2025/12/15/creative-commons-announces-tentative-support-for-ai-pay-to-crawl-systems/https://thehackernews.com/2025/12/featured-chrome-browser-extension.htmlhttps://www.firetail.ai/ai-breach-tracker
In this episode of Modern Cyber, Jeremy is joined by Chris Parker, the founder of WhatIsMyIPAddress.com, one of the most visited websites in the world. With over 13 million monthly visitors, Chris has spent more than 25 years helping people understand their digital presence and protect their online privacy. The conversation dives into the fascinating 26-year history of the site—from its start as a simple hobby on a home Windows NT box to becoming a global authority on cybersecurity. Chris shares "war stories" from the early days of the web, including dealing with notoriously verbose log files that filled entire hard drives and managing a home data center that maxed out local copper lines. Chris and Jeremy also explore the modern landscape of digital privacy, discussing the balance between transparency and anonymity. They cover practical topics like how scammers use urgency to fleece victims, the "supply chain" risks of website plugins, and Chris's "middle-ground" approach to privacy—avoiding both complete exposure and the "Faraday cage" lifestyle. About Chris ParkerChris Parker is the founder of WhatIsMyIPAddress.com, one of the world’s most visited websites, helping more than 13 million people each month safeguard their digital privacy. Chris has become the go-to expert on protecting yourself in the digital age, whether from scammers, data miners, or privacy threats you didn't know existed. He is the author of Privacy Crisis: How to Maintain Your Privacy Without Becoming a Hermit, and host of The Easy Prey Podcast. Episode LinksWebsite: https://www.privacycrisis.com LinkedIn: https://www.linkedin.com/in/christophergparker/ Podcast: https://www.easyprey.com/
In this week's episode, Jeremy focuses on the escalating threat of prompt injection across the enterprise, the introduction of a new OWASP Top 10 list, and a surprising advisory from Gartner.Prompt Injection & RCE:PromptPwnd: A vulnerability in GitHub Actions allows attackers to use malicious commit messages to perform prompt injection against AI agents, executing privileged tools and leaking secrets from CI/CD pipelines.IDE Attack Surface: Similar prompt injection flaws were identified in popular development environments and extensions (Cursor, Copilot, Z-Ro), showing how malicious prompts can bypass guardrails and hijack context within the IDE.GeminiJack: A "zero-click" vulnerability in Google Gemini Enterprise and Vertex AI Search allowed attackers to embed indirect prompt injections in shared documents (Gmail, Calendar, Docs). A routine employee search would activate the attack, causing the AI to exfiltrate sensitive corporate data.Industry Shifts:Gartner's Advisory: Gartner issued an unusual strong advisory recommending that CISOs block all AI browsers (like ChatGPT Atlas and Perplexity Comet) for the foreseeable future due to inherent security risks, including data leakage, credential abuse, and autonomous rogue actions.New OWASp Top 10: The OWASp Top 10 for Agentic Applications was released, focusing on risks unique to autonomous, tool-using systems, such as Agent Goal Hijack, Identity and Privilege Abuse, and Agentic Supply Chain Vulnerabilities.Episode Links:https://gbhackers.com/prompt-injection-vulnerability-in-github-actions/https://thehackernews.com/2025/12/researchers-uncover-30-flaws-in-ai.htmlhttps://securityboulevard.com/2025/12/indirect-malicious-prompt-technique-targets-google-gemini-enterprise/https://securityboulevard.com/2025/12/gartners-ai-browser-ban-rearranging-deck-chairs-on-the-titanic/https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/++++++++++Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
In this week's episode, Jeremy dissects two critical security issues and shares key strategic takeaways from the recent Ascent Community Summit on Advancing AI Security.Security Incidents & Research:OpenAI Third-Party Breach: We examine the security incident where OpenAI was affected by a third-party breach via the Mixpanel analytics platform. While customer PII was exposed, prompt and data content was not impacted. Jeremy notes that the API was the attack surface, reinforcing a recurring theme in AI-related incidents.Adversarial Poetry: We break down a fascinating academic paper demonstrating that embedding malicious prompts inside poetry is a successful technique for bypassing LLM guardrails. In some models, this "adversarial poetry" increased the Attack Success Rate (ASR) by over 60%, showing how context manipulation can trick frontier models.Ascent Community Summit Takeaways: Jeremy shares high-level insights from the summit (co-hosted by Paladin and Georgia Tech), focusing on securing critical sectors (Defense, Infrastructure, Healthcare). Key themes include:Core Requirements for AI: The need for math expertise, dedicated compute infrastructure, massive data access, and specialized people.The New Perimeter: Discussion shifted from "identity as the perimeter" to data being the key asset and central focus for security controls.Supply Chain Risks: The societal impact of the AI boom, including increased strain on electricity, cooling, and bandwidth for data center infrastructure.Brakes on a Fast Car: The CISO's role is framed as enabling maximum speed while having the ability to act as the "brakes on a very fast moving car" (Dundee West, GSK), emphasizing rapid response over stagnation.Episode Linkshttps://openai.com/index/mixpanel-incident/https://arxiv.org/pdf/2511.15304https://sites.gatech.edu/asccent/summit/------Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
In this week's episode, Jeremy covers seven stories that highlight the continuing pattern of API-level risks, the rise of multi-agent threats, and new academic insights into LLM fundamentals.Key stories include:RCE via PyTorch: A high-severity vulnerability (with an assigned CVE) was discovered in the widely-used PyTorch package, enabling Remote Code Execution (RCE) through malicious payloads at the API layer. This reinforces the trend of the API being the primary attack surface for AI applications.AI Browser Local Command Execution: Researchers found an API flaw in AI browsers that allowed a malicious instruction set to execute local commands on a user's machine via an embedded extension.Klein Bot Vulnerabilities: An open-source coding agent was found to have multiple security flaws, including the exfiltration of API keys and the disclosure of its underlying model (Grok), validating OWASp's risk categories.Multi-Agent Risk in ServiceNow: Researchers demonstrated that in ServiceNow’s new A-to-A agentic workflows, default configurations place agents in the same network, allowing them to communicate and be exploited using the privileges of the human user who created them.The "Subspace Problem" of Red Teaming: Academic research argues that current LLM red teaming methods are flawed because they test human language, not the numerical token strings the LLM actually processes, meaning predictable token-level vulnerabilities remain hidden.AI Evaluation Shift: A paper argues that non-deterministic LLM environments require a shift away from binary "yes/no" security checks (like traditional network security) toward scenario-based testing for better risk evaluation.Positive ROI of AI in Security: A Google paper provides positive data for early movers, showing that AI can triage at least 50% of security incidents, leading to reduced human workloads and faster response times, providing a strong case for simple, prompt-based AI improvements in security operations.------Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
Adam Pilton of Heimdal

Adam Pilton of Heimdal

2025-11-2038:06

In this episode of Modern Cyber, Jeremy is joined by Adam Pilton, a cybersecurity expert with a background of 15 years in law enforcement, where his final role was as a Detective Sergeant leading the Covert Operations and Cybercrimes team. Drawing on his unique experience investigating and prosecuting hundreds of offenders, Adam provides a frontline perspective on the current state of cybercrime, noting that cybercriminals are "getting better and stronger" while individuals and businesses are "not keeping up".The conversation focuses on the human and organizational challenges in cybersecurity, stressing that small businesses should abandon the belief that they are too small to be targeted, as attackers "hit small businesses all day long" for incremental profit. Adam discusses the severe practical impacts of attacks, warning that businesses must "expect downtime" and be prepared for the significant time needed for recovery. He advocates for storytelling and analogies (like the comparison of hacking to a burglary) over technical regulations to build a strong security culture.Adam also shares insights from his post-law enforcement work as an auditor and consultant, highlighting the common organizational "motivation problem" where people acknowledge the risk but delay action, comparing it to perpetually starting a diet "tomorrow". Finally, he addresses the breakdown of trust in the age of deepfakes (citing the Irish election example) and the critical need for continuous tabletop exercises to test communication and expose "little gaps" before a crisis hits.Guest Bio – Adam Pilton With a background of 15 years in law enforcement, Adam's final role was as a Detective Sergeant leading the Covert operations and Cyber Crime teams. Since then, Adam has worked in cyber security since 2016 across various roles and has a broad understanding of cyber security, from the impact of cyber crime upon individuals and businesses to the need to convey the right messages to senior leaders and end users, ensuring engagement and support.As a subject matter expert in multiple areas for a large organisation, Adam has investigated and supervised hundreds of cases, identifying and prosecuting offenders. He has introduced digital tactics into overt and covert investigations, developing digital capabilities. Adam also held responsibility for training, utilising his communication skills to simplify the complex.Adam has worked with multi-national businesses developing their people and processes to improve their cyber security maturity.Episode Links https://heimdalsecurity.com/https://www.linkedin.com/in/adampilton/
In this week's episode, Jeremy covers two major and critical developments that underscore the need to harden the foundational components of AI systems and recognize the reality of AI-orchestrated attacks.First, we analyze Shadow MQ, a vulnerability discovered by Oligo that affects multiple popular AI tools, including those from Nvidia and Meta Llama. The flaw stems from the mass reuse of core, insecure components—specifically, an unsafe Python pickle deserialization technique—in the underlying plumbing of various LLMs. This vulnerability allows attackers to inject malicious commands, potentially leading to Remote Code Execution (RCE) and Privilege Escalation at the API layer.Second, we dive deep into the first publicly confirmed, AI-orchestrated cyber espionage campaign, detailed in a threat intelligence report from Anthropic. The state-sponsored campaign used a frontier AI model to accelerate nearly every phase of the attack, including:Weaponized System Prompts: Attackers defined a persona ("senior cyber operations specialist") to guide the LLM's malicious behavior.AI-Driven Evasion: The AI was used to refine malware and bypass EDR solutions.AI-Powered Reconnaissance: The model performed vulnerability research on obscure protocols and orchestrated lateral movement within networks.Jeremy emphasizes that this report is a wake-up call, validating the core risks around AI adoption and proving that malicious AI usage is now a real-world reality.Episode Links:https://www.oligo.security/blog/shadowmq-how-code-reuse-spread-critical-vulnerabilities-across-the-ai-ecosystemhttps://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf------Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
Ben Wilcox of ProArch

Ben Wilcox of ProArch

2025-11-1339:24

In this episode of Modern Cyber, Jeremy is joined by Ben Wilcox, the unique combination of CTO and CISO at ProArch, to discuss navigating the critical intersection of speed, risk, and security in the era of AI. Ben shares his perspective as a long-time practitioner in the Microsoft ecosystem, emphasizing that the security stack must evolve with each major technology shift—from on-prem to cloud to AI.The conversation focuses on how to help customers achieve "data readiness" for AI adoption, particularly stressing that organizational discipline (like good compliance) is the fastest path to realizing AI's ROI. Ben reveals that the biggest concern he hears from enterprise customers is not LLM hallucinations or bias, but the risk of a major data breach via new AI services. He explains how ProArch leverages the comprehensive Microsoft security platform to provide centralized security and identity control across data, devices, and AI agents, ensuring that user access and data governance (Purview) trickle down through the entire stack.Finally, Ben discusses the inherent friction of his dual CISO/CTO role, explaining his philosophy of balancing rapid feature deployment with risk management by defining a secure "MVP" baseline and incrementally layering on controls as product maturity and risk increase.About Ben WilcoxBen Wilcox is the Chief Technology Officer and Chief Information Security Officer at ProArch, where he leads global strategy for cloud modernization, cybersecurity, and AI enablement. With over two decades of experience architecting secure digital transformations, Ben helps enterprises innovate responsibly while maintaining compliance and resilience. He’s recently guided Fortune 500 clients through AI adoption and zero-trust initiatives, ensuring that security evolves in step with rapid technological change.Episode Linkshttps://www.proarch.com/https://www.linkedin.com/in/ben-wilcox/https://ignite.microsoft.com/en-US/home
loading
Comments 
loading