Cybersecurity Today
Agentic AI Security Is Broken and How To Fix It: Ido Shlomo, Co-founder and CTO of Token Security

Update: 2026-02-21

Digest

The podcast discusses the inherent security vulnerabilities of rapidly adopted agentic AI, including issues with open-source solutions. It introduces Token Security and its co-founder Ido Shlomo, who explains their mission to protect assets by limiting AI agent input/output and implementing guardrails. The conversation emphasizes that preventing AI adoption is futile and that the focus should be on managing its integration and risks. Shlomo shares insights into the Israeli cybersecurity ecosystem and the fundamental insecurities of AI agents, likening them to a new operating system with amnesia. The unpredictability of AI inputs and outputs and the challenge of granting necessary permissions are highlighted. The discussion shifts to security as "blast radius management" and a strategic approach to AI agent access, focusing on defining operational areas rather than controlling every move. Challenges in insulating AI from external influences and the risks associated with products like Claude Code are examined. Solutions proposed include treating AI as an untrusted process, implementing a robust identity layer, and using AI to protect AI. A case study illustrates the security gaps created by rapid AI adoption through over-provisioned permissions. The episode stresses the need for secure-by-default AI agents and advises cybersecurity professionals to advocate for AI adoption while establishing inventories, boundaries, and governance. Future advancements like agent teams and the vision of AI as a personal assistant are discussed, concluding with Token Security's recognition at the RSA Innovation Sandbox.

Outlines

00:00:00
Introduction and Sponsor Message

The podcast begins by thanking sponsor Meter for their integrated networking solutions.

00:00:19
The Insecurity of Agentic AI and Open Source Concerns

The discussion highlights the rapid adoption and inherent security flaws of agentic AI, including vulnerabilities in Anthropic's MCP and in open-source agent tools that went viral, noting their complexity and the struggles major AI companies face in addressing these issues.

00:02:58
Introducing Token Security and Intent-Based Permissions

Ido Shlomo, co-founder and CTO of Token Security, joins to discuss practical solutions for AI agent security. Token Security's mission is to protect assets as AI agents gain access to environments by limiting input/output and implementing guardrails, emphasizing that AI adoption is inevitable and should be managed.

00:06:29
Understanding AI Agent Insecurities and Security Strategies

Ido Shlomo shares his background and discusses the fundamental insecurities of AI agents, likening them to a new operating system with amnesia. The unpredictability of AI inputs and outputs and the challenge of managing agent permissions are explored. The perspective on security shifts to "blast radius management," focusing on defining an agent's access and operational areas rather than controlling every move, and the difficulty of insulating AI from external influences is noted.

00:18:22
Claude Code, AI Security Solutions, and Identity Layers

The rapid adoption and extensive permissions of Claude Code are examined. Solutions for AI security are discussed, emphasizing the need to treat AI as an untrusted process with controlled access and monitoring. A critical need for a robust identity layer to map and audit every agent and its actions is proposed, leading to Token Security's "intent-based permission management."

00:25:30
Implementing Intent-Based Permissions and AI Protecting AI

Practical implementation of intent-based permissions involves creating an identity layer that identifies agents, correlates credentials, and monitors actions. The solution lies in using AI to protect AI, with smart agents correlating data and integrating with identity layers.

00:28:23
Real-World AI Adoption Challenges and Future Outlook

A case study illustrates security gaps from rapid AI agent adoption due to over-provisioned permissions and a lack of connection between AI strategy and IAM. The episode stresses the need for secure-by-default AI agents, advises cybersecurity professionals on managing AI adoption, and looks towards future advancements like agent teams and AI as a personal assistant, concluding with Token Security's RSA Innovation Sandbox recognition.

Keywords

Agentic AI


AI systems designed to act autonomously, making decisions and taking actions in the real world without direct human intervention. This involves complex decision-making processes and potential security risks due to their independent operation.

Open Source Agentic Solutions


AI agent frameworks and tools that are publicly available, allowing for modification and distribution. While fostering innovation, they often present significant security challenges due to rapid, less-controlled development.

Token Security


A company focused on securing AI agents by implementing intent-based permission management. They aim to bridge the gap between an agent's purpose and its actual access, ensuring safer AI integration.

Intent-Based Permission Management


A security approach that aligns an AI agent's permissions with its intended purpose and role. This contrasts with traditional identity management by focusing on the "why" behind an agent's actions, not just its identity.
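As a rough sketch of the idea (the class and function names here are hypothetical illustrations, not Token Security's actual API), an agent's declared purpose can be bound to an explicit allow-list of actions, so anything outside that intent is denied by default:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: AgentIdentity and check_action are illustrative
# names, not Token Security's product API.
@dataclass
class AgentIdentity:
    name: str
    intent: str                              # the agent's declared purpose
    allowed_actions: set[str] = field(default_factory=set)

def check_action(agent: AgentIdentity, action: str) -> bool:
    """Permit an action only if it falls inside the agent's declared intent."""
    return action in agent.allowed_actions

# An agent with a narrow intent gets a correspondingly narrow allow-list.
support_bot = AgentIdentity(
    name="ticket-summarizer",
    intent="summarize support tickets",
    allowed_actions={"tickets:read", "summaries:write"},
)

print(check_action(support_bot, "tickets:read"))      # True: matches intent
print(check_action(support_bot, "customers:delete"))  # False: outside intent
```

The point of the sketch is the default-deny posture: the question asked is not "does this identity have a credential?" but "does this action serve the purpose this agent was created for?"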

Blast Radius Management


A security strategy focused on limiting the impact of a security incident. Instead of solely preventing breaches, it aims to contain the damage when an incident occurs, particularly relevant for AI systems with broad access.
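One simple way to picture containment is path confinement: the agent may only touch files inside its own workspace, so even a bad action cannot reach beyond it. This is a minimal hypothetical sketch (the workspace path and function name are made up for illustration), not a full sandbox:

```python
from pathlib import Path

# Hypothetical confinement sketch: the agent is restricted to one
# workspace directory, bounding the damage any single action can do.
WORKSPACE = Path("/tmp/agent-workspace")

def safe_resolve(requested: str) -> Path:
    """Resolve a path the agent asked for, refusing anything that
    escapes the workspace (e.g. via '..' traversal)."""
    root = WORKSPACE.resolve()
    target = (WORKSPACE / requested).resolve()
    if target != root and root not in target.parents:
        raise PermissionError(f"outside blast radius: {requested}")
    return target

safe_resolve("notes/summary.txt")      # fine: stays inside the workspace
try:
    safe_resolve("../../etc/passwd")   # traversal attempt
except PermissionError as e:
    print("blocked:", e)
```

Real containment would also cover network access, credentials, and tool calls, but the principle is the same: draw the boundary first, then let the agent act freely within it.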

Identity Layer for AI Agents


A system for assigning and managing non-human identities for AI agents. This allows for tracking, auditing, and controlling the actions of AI entities, similar to how human identities are managed in cybersecurity.

Claude Code


Anthropic's agentic coding tool, generating significant revenue and used by many Fortune 100 companies. It runs with a developer's extensive permissions, posing risks related to access to environment files and pre-authenticated tools.

Zero Trust Approach to AI


Applying the principle of "never trust, always verify" to AI agents. This means continuously validating an agent's actions, even if it has authorized access, to prevent misuse.
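In code terms, "never trust, always verify" means re-checking every action at the moment it is attempted and recording the decision, even when the agent already holds a valid credential. A hypothetical sketch (the function and log structure are illustrative, not any product's API):

```python
import datetime

audit_log: list[dict] = []

def authorize(agent_id: str, action: str, allowed: set[str]) -> bool:
    """Re-verify each action at call time and record the decision.

    Hypothetical sketch: possessing a credential never skips the check;
    every attempt, granted or denied, lands in the audit trail.
    """
    granted = action in allowed
    audit_log.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "granted": granted,
    })
    return granted

authorize("report-bot", "db:read", {"db:read"})    # True, and logged
authorize("report-bot", "db:write", {"db:read"})   # False, and still logged
print(len(audit_log))  # 2 -- denials are audited too
```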

RSA Innovation Sandbox


A prestigious startup competition within the RSA Conference, showcasing innovative security technologies. Token Security's selection as a finalist highlights their advancements in AI security.

Q&A

  • What are the primary security concerns with agentic AI?

    Agentic AI systems are inherently insecure because they operate autonomously, making decisions with potentially vast permissions. Their non-deterministic nature and extensive input/output spaces make traditional security controls difficult to apply, and even major AI companies struggle to address these vulnerabilities.

  • How does Token Security propose to secure AI agents?

    Token Security introduces "intent-based permission management." This approach focuses on aligning an AI agent's permissions with its intended purpose, rather than just its identity. It involves creating a robust identity layer to track agent actions, correlate credentials with permissions, and ensure agents operate within defined boundaries.

  • Why are traditional security measures insufficient for AI agents?

    Traditional security relies on predictable inputs and outputs, which AI lacks. AI's input space is vast (e.g., the entire English language), and its outputs can be non-deterministic. This makes it impossible to build a machine smart enough to control all potential inputs and outputs, necessitating a different security paradigm.

  • What is the concept of "blast radius management" in AI security?

    Blast radius management views security not just as prevention but as controlling the impact of a security incident. For AI agents, this means understanding and limiting the potential damage they could cause by carefully managing their access and autonomy, acknowledging that failures are inevitable.

  • How can organizations effectively manage the risks associated with AI agent adoption?

    Organizations should advocate for AI adoption while establishing clear boundaries. This involves creating a central inventory of AI usage, defining acceptable access levels and actions for agents, and implementing governance processes for discovery, monitoring, and decommissioning of AI agents to prevent unmanageable technical debt.

  • What are the next significant advancements expected in agentic AI security?

    Future advancements include agent teams collaborating on complex tasks and multi-day autonomous operations. These developments aim to increase efficiency and potentially improve human work-life balance by offloading more demanding tasks to AI.
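The governance process recommended earlier (discovery, boundary setting, monitoring, decommissioning) can be pictured as a lifecycle tracked in a central inventory. This is a hypothetical sketch; the stage names and record fields are made up for illustration:

```python
from dataclasses import dataclass

# Hypothetical governance sketch: a central inventory moves each agent
# through explicit lifecycle stages so none is deployed or retired invisibly.
LIFECYCLE = ("discovered", "bounded", "monitored", "decommissioned")

@dataclass
class AgentRecord:
    name: str
    owner: str
    stage: str = "discovered"

    def advance(self) -> str:
        """Move the agent to the next governance stage, in order."""
        i = LIFECYCLE.index(self.stage)
        if i + 1 < len(LIFECYCLE):
            self.stage = LIFECYCLE[i + 1]
        return self.stage

inventory = [AgentRecord("ticket-summarizer", owner="support-team")]
inventory[0].advance()        # "bounded": access boundaries defined
inventory[0].advance()        # "monitored": under continuous watch
print(inventory[0].stage)     # monitored
```

The value of even this trivial structure is that every agent has an owner and a known stage, which is exactly what prevents the unmanageable technical debt the episode warns about.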

Show Notes

Jim Love discusses how rapid adoption of agentic AI is repeating the industry pattern of shipping technology without security, citing issues like vulnerabilities in Anthropic's MCP and insecure open-source agent tools. He interviews Ido Shlomo, co-founder and CTO of Token Security, who argues AI agents are fundamentally hard to secure because they are non-deterministic, have infinite input/output space, and often require broad permissions to be useful. 

Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless, and cellular, in one integrated solution that's built for performance and scale. You can find them at Meter.com/cst

Shlomo proposes focusing security on access, identity, attribution, least privilege, and auditability rather than trying to filter prompts and outputs, and describes Token's "intent-based permission management" approach that maps agents and sub-agents as non-human identities tied to their purpose and allowed actions. The conversation covers real-world risks such as developer tools like Claude Code running with extensive access, widespread over-provisioning of admin permissions and API keys, exposure of unencrypted local token files, and misconfigurations that leak data publicly. Shlomo recommends organizations build governance processes for agents—discovery/inventory, boundary setting, continuous monitoring, and secure decommissioning—and says AI is needed to help police AI. He also highlights emerging trends like agent teams and multi-day autonomous tasks, and notes Token Security is a top-10 finalist in the RSA Innovation Sandbox 2026, planning to present an intent-and-access-focused security model for AI agents.

00:00 Sponsor: Meter's integrated networking stack
00:19 Why agentic AI security is breaking (MCP & open-source chaos)
02:53 Meet Token Security: practical guardrails for AI agents
04:57 Why you can't just ban agents at work (shadow AI reality)
06:24 Tel Aviv's cybersecurity pipeline: gaming, military, and startups
08:57 Why AI/agents are fundamentally hard to secure (new OS + 'human spirit')
13:44 Trust, autonomy, and permissions: managing the blast radius
18:17 Real-world exposure: Claude Code and the developer identity attack surface
20:16 A workable approach: treat agents as untrusted processes with identity + least privilege
22:33 Zero Trust for Agents: Access ≠ Permission to Act
23:27 Token's "Intent-Based Permission Management" Explained
25:29 Building the Identity Map: Tracing What Agents Touch
26:52 The Secret Sauce: Using AI to Secure AI in Real Time
28:10 Real-World Case: 1,500 Agents and Wildly Over-Provisioned Access
30:57 CUA 'Computer-Use' Agents: Exciting, Personal… and Terrifying
34:44 Secure-by-Default & Sandboxing: Fixing 'Always Allow' Dark Patterns
35:36 What Security Teams Should Do Now: Inventory, Boundaries, Governance
37:59 What's Next: Agent Teams and Multi-Day Autonomous Work
40:10 Tony Stark Vision: Agents That Improve the Human Experience
41:02 RSA Innovation Sandbox: Token's Big Bet on Intent + Access
43:01 Wrap-Up, Audience Q&A, and Sponsor Message
