TechSpective Podcast

Author: Tony Bradley


Description

The TechSpective Podcast brings together top minds in cybersecurity, enterprise tech, AI, and beyond to share unique perspectives on technology—unpacking breakthrough trends like zero trust, threat intelligence, AI-enabled security, ransomware’s geopolitical ties, and more. Whether you’re an IT pro, security exec, or simply tech-curious, each episode blends expert insight with real-world context—from microsegmentation strategies to the human side of cyber ethics. But we also keep it fun, sometimes riffing on pop-culture debates like Star Wars vs. Star Trek or Xbox vs. PlayStation—so it’s not all dry and serious.
178 Episodes
It’s getting harder to have a “normal” conversation about content, social media, or visibility anymore—mostly because the rules keep changing while you're still mid-sentence. Just a few years ago, you could create a blog post, optimize it for SEO, promote it on Twitter (back when it was still Twitter and not a dumpster fire of right-wing conspiracy lunacy rebranded as X), and expect a decent number of eyeballs to land on it. That’s not the game anymore. Now we’re living in a world of algorithmic gatekeeping, AI-generated content slop, and platforms that are slowly morphing into echo chambers of their own making. And as someone who spends a lot of time thinking, writing, and talking about tech, marketing, and cybersecurity, I wanted to have an actual conversation about what this means—beyond the usual recycled talking points. So, I invited Evan Kirstel onto the TechSpective Podcast to dig in. If you’re not familiar with Evan, you should be. He’s one of the more influential voices in B2B tech media—part content creator, part live streamer, part analyst, part TV host, depending on the day. He’s also been doing this for a while, and more importantly, doing it well. That makes him a great sounding board for the increasingly murky topic of digital thought leadership. One of the first things we talked about was the rise of formulaic, AI-generated content. You know the kind—it reads like it was built from a checklist of “engagement best practices,” and while it may technically be “on brand,” it’s rarely interesting. The irony, of course, is that the platforms boosting this kind of content are simultaneously rewarding quantity over quality, while drowning users in sameness. From there, we explored how visibility really works in 2025. Hint: it’s no longer about who you know—it’s about which large language model knows you. If you’re not showing up in ChatGPT summaries or Google’s new generative answers, you’re basically invisible to a big chunk of your potential audience. 
Which raises the question: how do you actually earn mindshare in a world where traditional SEO has been replaced by AI synthesis? We didn’t land on a one-size-fits-all answer—but we did agree on a few things. First, content that sounds like content for content’s sake? It’s dead. Thought leadership that merely echoes what 20 other people are already saying? Also dead. What works now is originality, consistency, and credibility—backed by actual lived experience. Another key theme we unpacked: platforms. Everyone likes to say “meet your audience where they are,” but it’s harder than it sounds when the audience is splintered across LinkedIn, Reddit, YouTube, TikTok, and a dozen other niche platforms—each with its own expectations and formats. Evan shared how he tailors his content for each platform without diluting the message, and why companies that try to be “cool” without context usually fall flat. I’ll also say this—this episode reminded me that high-quality conversations are still one of the most underutilized forms of content out there. When it’s not scripted or polished within an inch of its life, a good conversation can cut through the noise and resonate on a level most polished op-eds or templated videos never will. So if you’re feeling stuck, wondering why your content isn’t landing like it used to, or trying to figure out how to show up where it matters—this episode is worth your time. Check out my conversation with Evan Kirstel on the TechSpective Podcast. And yes, we get into Gary Vaynerchuk, TikTok, zero-click search, and why it might be time to completely rethink your content strategy.
The cybersecurity landscape never sits still—and neither do the conversations I aim to have on the TechSpective Podcast. In the latest episode, I sit down with Etay Maor, Chief Security Strategist at Cato Networks and a founding member of Cato CTRL, the company’s cyber threats research lab. Etay brings a rare mix of technical depth and practical perspective—something increasingly necessary as we navigate the murky waters of modern cyber threats. This time, the conversation centers on the rise of Shadow AI—a topic gaining urgency but still underappreciated in many organizations. If Shadow IT was the quiet rule-breaker of the past decade, Shadow AI is its unpredictable, algorithmically supercharged cousin. It’s showing up in boardrooms, workflows, and marketing departments—often without security teams even knowing it’s there. Here’s the thing: banning AI tools or blocking access doesn’t work. People find a way around it. We’ve seen this play out with cloud storage, collaboration tools, and other “unsanctioned” technologies. The same logic applies here. Etay and I explore why organizations need to move beyond a binary yes/no mindset and instead think in terms of guardrails, visibility, and enablement. We also get into the tension between innovation and risk—how fear-based decision-making can put companies at a disadvantage, and why the bigger threat might be not using AI at all. That may sound counterintuitive coming from two people steeped in cybersecurity, but context matters. The risk of falling behind could be greater than the risk of exposure—if companies don’t take a strategic approach. Naturally, the conversation expands into how threat actors are adapting AI for offensive purposes—crafting more convincing phishing emails, automating reconnaissance, and even gaming defensive AI tools. Etay shares sharp insights into how attackers use our own tools against us and what that means for the future of cybersecurity. 
There’s also a philosophical thread woven throughout—questions about whether AI can truly be “original,” how human creativity intersects with machine learning, and what kind of ethical or regulatory frameworks might be needed (if any) to keep things from going off the rails. Etay brings both technical fluency and historical perspective to the discussion, making it a conversation that’s as grounded as it is thought-provoking. This episode doesn’t veer into fear-mongering or hype. It stays real—examining where we are, where we’re headed, and how to make better decisions as the ground keeps shifting. Whether you’re in security, tech leadership, policy, or just curious about how AI is reshaping the digital battleground, this one’s worth your time. Tune in to the latest TechSpective Podcast—now streaming on all major platforms. Share your thoughts in the comments below.
I’ve had a lot of conversations about AI over the past couple of years—some insightful, some overhyped, and a few that left me questioning whether we’re even talking about the same technology. But every now and then, I get the opportunity to sit down with someone who not only understands the technology but also sees its broader implications with clarity and honesty. This episode of the TechSpective Podcast is one of those moments. Jeetu Patel, President and Chief Product Officer at Cisco, joins me for an unscripted, unfiltered conversation that covers more ground than I could have outlined in a set of pre-written questions. Actually, I did draft a set of pre-written questions. We just never used them. Jeetu and I have known each other for a while, and this episode reflects the kind of conversation you only get with someone who’s deeply immersed in both the strategic and human sides of tech. It’s thoughtful. It’s philosophical. And it doesn’t pull punches. At the center of our discussion is the concept of “agentic AI”—a term that’s being used more frequently, sometimes without much clarity. We unpack what it actually means, what it can realistically do, and how it differs from the wave of chatbots and content generators that came before it. More importantly, we talk about how these AI agents might change not just the tasks we automate, but how we think about work itself. Of course, with any conversation about AI and the future of work comes the inevitable tension: what gets lost, what gets reimagined, and what still requires distinctly human judgment. Jeetu brings a nuanced take to this, rooted in his experience leading product innovation at one of the world’s largest tech companies. It’s not a conversation filled with predictions so much as it is a reframing of the questions we should be asking. What stood out to me is how quickly we normalize the extraordinary. A technology that felt magical two years ago is now embedded in our daily workflows.
That speed of adoption changes the stakes. It means we need to be more deliberate—not just about what AI can do, but what we want it to do, and what we risk offloading too quickly. We also touch on the philosophical implications. If AI agents really can handle more of the cognitive heavy lifting, what’s our role in the loop? Do we become editors? Overseers? Explorers of new frontiers? And how do we prepare for jobs that don’t exist yet, using tools that are evolving faster than we can document them? I think this episode will resonate with anyone trying to navigate this moment—whether you’re in product development, policy, marketing, or just someone who likes to think a few moves ahead. It’s about more than AI. It’s about how we adapt, how we define value, and what we choose to hold onto as the landscape shifts. Give it a listen. And as always, I’d love to hear your thoughts.
There’s a question I’ve been sitting with lately: Are we prepared for what AI is about to expose in our organizations—not just technically, but operationally? In this episode of the TechSpective Podcast, I sit down with Kavitha Mariappan, Rubrik’s Chief Transformation Officer, to unpack some of the less flashy but arguably more urgent questions about enterprise security, AI readiness, and business continuity. If your organization is still treating identity as a login issue or AI as a future-state conversation, you might be missing the bigger picture. Kavitha doesn’t speak in clichés. She’s been in the trenches—engineering, scaling go-to-market teams, and now helping steer one of the fastest-evolving players in the data security space. Her perspective is shaped by decades of experience, but her focus is very much on the now: how to operationalize resilience at a time when every system, process, and even person has become a potential attack vector. One of the threads we pull on is the idea that resilience isn’t a fallback plan anymore—it’s the front line. And identity? That’s not just a security issue. It’s a dependency. If you can’t log in, you can’t recover. You can’t operate. You can’t pivot. The conversation touches on what it really means to build for resilience in a landscape where downtime isn’t just costly—it’s existential. We also explore what I’ll loosely call “AI exposure therapy”—not in the sense of experimenting with new models or shiny tools, but in understanding how AI is forcing companies to confront their structural weaknesses. What used to be considered internal inefficiencies are now potential vectors of attack. Technical debt isn’t just a performance issue—it’s a risk multiplier. Kavitha brings data to the table too—sharing insight from Rubrik Zero Labs on the alarming surge in identity-based attacks and why the majority of companies are still playing catch-up when it comes to securing what they can’t always see. 
It’s a wake-up call, but not a hopeless one. What made this conversation stand out to me wasn’t just the subject matter, but the way Kavitha frames the questions we should be asking: How do we architect for a world that’s already in flux? How do we define AI transformation when most businesses are still digesting digital transformation? And perhaps most critically, what needs to change inside the organization before the tech can even do its job? I won’t give away the full arc of the discussion, but here’s my pitch: If you’re leading, advising, or building for a company that handles sensitive data (hint: that’s all of us), this episode will challenge you to think differently about where resilience really begins—and what it’s going to take to build it into the DNA of your org. Listen to or watch the full episode here:
There’s something happening in cybersecurity right now that’s both exciting and a little disorienting. As generative and agentic AI take over headlines, conference keynotes, and investor decks, it’s easy to assume we’re on the verge of some great leap forward. The reality is more complicated—and more interesting. In the latest episode of the TechSpective Podcast, I had the chance to sit down with Sachin Jade, Chief Product Officer at Cyware, for a conversation that cuts through the buzzwords. We cover a lot of ground—from AI’s place in the SOC to the underrated power of relevance in threat intelligence—but what stuck with me most was this: the most transformative work happening in security right now doesn’t look like a revolution. It looks like simplification. Not simplification in the marketing sense—fewer dashboards, “single pane of glass,” etc.—but simplification where it actually matters: filtering noise, streamlining analysis, helping human analysts do their jobs better and faster. There’s a growing recognition among smart security leaders that “flashy” features might demo well, but if they don’t reduce burnout, improve signal-to-noise, or give analysts time back in their day, they’re missing the point. We’re at a moment where AI can—and should—do more than just surface alerts. The goal isn’t to impress anyone with a cool interface or to simulate a brilliant security expert. The goal is to embed intelligence into the places that grind analysts down: filtering irrelevant threat intel, connecting disparate data points, recommending next steps based on context. Mundane, unsexy tasks—yes. But transformative when done well. Sachin offered a useful framework for thinking about agentic AI that goes beyond the surface definitions most people are using. We talk about where true decision-making autonomy begins, how it fits into layered workflows, and what it really looks like to “mimic” human reasoning in a SOC environment. Spoiler: it’s not about replacing people. 
It’s about enabling them. Another theme that emerged: relevancy. Not in a vague, feel-good way, but in the deeply practical sense of “does this matter to me, my company, my infrastructure, right now?” For all the AI talk, too many tools still struggle to answer that question clearly. Cyware’s approach, which Sachin outlines in the episode, puts a premium on reducing noise and increasing clarity. There’s no magic wand—but there is a very intentional shift toward making intelligence actionable, digestible, and contextual. That matters more than whatever buzzword is trending on social media this week. We also explore the idea of functional decomposition in AI—a concept that mirrors how most human security teams are structured. Instead of building a monolithic super-intelligent assistant, Cyware has developed a multi-agent model where each AI agent is focused on a specific task, like malware triage or incident correlation. It’s less hive-mind, more specialized team—just like the best human teams. That architectural choice has significant implications for accuracy, explainability, and trust. The full conversation dives deeper into how these ideas show up in real-world security operations, what CISOs are actually looking for in AI-driven tools, and why strategic use of “boring” automation may be the real game-changer for the next decade. If you’re someone who’s tired of the AI hype but still deeply curious about where it’s actually moving the needle, I think you’ll find this episode worth your time. We don’t spend 45 minutes tossing around acronyms—we get into how AI can help analysts cut through the clutter, why relevancy is the next frontier, and what it means to design intelligence that works the way humans actually think. Listen to or watch the full episode here:
Every once in a while, a conversation forces you to stop and rethink something you thought you already understood. Recording this latest TechSpective Podcast episode with Semperis CEO Mickey Bresman did exactly that—and it has everything to do with how AI is quietly rewriting the rules of identity security. If you’ve been following the industry for a while, you know the story: hybrid environments are the norm, identity is the new perimeter, and permissions hygiene is the decades-old chore nobody has enough time—or patience—to do well. None of that is breaking news. What is new is what happens when you drop modern AI into the middle of that reality. We’re not talking about sci-fi leaps or theoretical risk models. We’re talking about something much more immediate: AI tools that can surface old data, forgotten data, and misconfigured access paths you didn’t even know existed. Years of “we’ll fix that later” suddenly become a living, breathing attack surface the moment AI starts connecting dots faster than any human ever could. Mickey and I unpack why this shift is so significant and why organizations often misunderstand the real implications. We also get into the emerging gray zone of agentic AI—systems that operate like users, make decisions like users, and introduce a whole new category of identity no one had to account for before. It’s an area where the guardrails are still being built, even as the tools accelerate. I won’t spoil the conversation here, because part of the fun is hearing how Mickey frames the problem—and the opportunities—through the lens of someone working directly with organizations grappling with this right now. Let’s just say the old assumptions don’t hold, and the path forward involves more than bolting AI onto existing processes. If you care about identity, security, or the rapidly approaching future where AI plays a central role in both offense and defense, this is a conversation worth your time. 
Check out the full episode here. And as always, stay tuned. At the pace things are evolving, this probably won’t be the last time we revisit the topic—and the next wave may hit sooner than any of us expect.
Every once in a while, I end up in a conversation that hits at exactly the right moment—when the industry is shifting, the vocabulary is changing, and everyone is quietly circling the same questions. This new episode of the TechSpective Podcast is one of those. Art Poghosyan, CEO and co-founder of Britive, joined me on this episode of the TechSpective Podcast for a fluid and surprisingly energizing dive into where identity security meets agentic AI. If you’ve followed the podcast this year, you know the pattern: gen AI defines the early hype cycle, but 2025 belongs to agents. Not the fantasy version where they automate your whole life, but the real-world scenario where they reshape what “digital responsibility” even means. Art has more than two decades of identity and access management experience, which gives him a grounded way of thinking about the moment we’re in. As we start talking, the first big theme that emerges is how fast the definition of “identity” is expanding. Identity used to be about people—employees, contractors, admins—and the occasional service account someone documented at 4:59 p.m. on a Friday. Now? Agents complicate all of that. A non-human autonomous system with access to a SaaS platform or a data lake behaves a lot like a user, even if it isn’t one on paper. Treating it as “just software” is exactly how we recreate the same exposures that powered the breach headlines of the last decade. One of the threads we tug on is the question of trust—not the fuzzy philosophical kind, but trust as an operational decision. An agent making decisions on your behalf needs to be verified every time it touches something sensitive. You need visibility into what it’s doing, controls around how long it can do it, and a way to shut it down when it starts operating outside its lane. These aren’t hypotheticals anymore. They’re the next generation of identity security problems, and Art offers a sharp perspective on what modern tooling needs to look like to keep up. 
The conversation also wanders into the human side of this shift. Everyone loves to frame the future as “AI versus AI,” but the real tension right now sits in the messy handoff between human intent and autonomous execution. Most organizations are easing into agents the same way you learn to drive a car: one cautious tap of the brakes at a time. That slow acclimation matters as much as any new feature or model. And yes, without giving anything away, we do acknowledge the part people sometimes treat like an afterthought: attackers get the same toys. They’re using them already. Ignoring that reality doesn’t make it go away. What I appreciate about this episode is how it holds the middle ground. It’s not hand-wringing about a dystopian future, and it’s not an AI pep rally. It’s a pragmatic, curious look at a technology that’s maturing faster than the guardrails around it. Art brings a thoughtful, steady view of where identity security is heading and what happens when autonomous systems stop playing by human rules. If you’re trying to understand how agentic AI fits into your world—or how identity security has to evolve to keep pace—this is a conversation worth hearing. Watch the full episode on YouTube and see where the discussion takes your own thinking next.
One thing I’ve learned after years of covering cybersecurity is that the “state of the threat landscape” rarely sits still long enough to fit neatly into a headline. Every time you think you’ve understood the latest trend, something shifts under your feet. That’s part of the fun—and part of the challenge. That dynamic energy is exactly why I invited Brad LaPorte onto the TechSpective Podcast for this latest episode. Brad has lived just about every angle of cybersecurity you can think of: military intelligence, consulting, analyst work at Gartner, and now CMO of Morphisec. He’s been in the room for many of the big transitions—tooling changes, strategic changes, and the increasingly blurry line between human-driven attacks and AI-driven ones. Our conversation went much deeper than a simple “state of ransomware” update. Ransomware itself has grown so far beyond the old definition that it feels strange to keep calling it that. The classic “encrypt everything and demand crypto” playbook isn’t what defines the modern threat. The real story now is how fast attackers adapt, how quickly new tactics spread, and how criminal groups behave more like full-fledged businesses than hobbyist hackers. We dig into all of that, but in a conversational way rather than a technical lecture. The thread that kept coming up is how small pieces of data—details that seem harmless on their own—can snowball into serious compromises when attackers start connecting the dots. Brad shared experiences that underscore how those tiny cracks get leveraged in ways most people never consider. It’s a reminder that cybersecurity is not only about the tools in place, but about the environment those tools live in. Another theme we circled around is the growing presence of AI in both defense and offense. AI-driven attacks aren’t a distant theory anymore. They’re active, adaptive, and often unsettling in how quickly they shift tactics mid-stream. 
Brad and I talked about what that means for defenders, why “preemptive” approaches are gaining traction, and how companies are trying to outpace threats that no longer behave like traditional malware at all. We also talked about the human side—something that doesn’t always make it into technical coverage. Cyberattacks aren’t abstract events. They’re personal. They exploit habits, patterns, and moments of distraction. Anyone who has ever clicked something out of instinct rather than scrutiny will relate to some of the scenarios we discuss. One thing I love about hosting this podcast is the space it creates for unscripted, honest discussion. Brad and I covered a lot—ransomware economics, polymorphic attacks, data exposure, the “funhouse mirror” problem of deception technologies, and even the strange comfort of knowing that pizza orders can still give away national secrets. Yes, really. And no, I’m not explaining it here; you’ll have to listen. If you work in cybersecurity, follow cybersecurity, or simply exist in a world shaped by cybersecurity, this episode is worth your time. It’s lively, candid, and packed with insight without requiring a glossary on the side. And if past experience is any guide, the things we talk about today may feel very different six months from now. That’s part of why these conversations matter. Give it a listen, subscribe if you enjoy it, and let me know what topics you want to hear explored next.
The latest episode of the TechSpective Podcast dives straight into one of the most pressing questions in cybersecurity right now: what happens when the vast majority of identities in your environment aren’t human anymore? I sat down with Danny Brickman, co-founder and CEO of Oasis Security, for a wide-ranging conversation about the future of identity, the rise of agentic AI, and why enterprises may be sprinting into an AI-powered future without realizing just how much risk they’re accumulating along the way. Danny brings a background that blends offensive experience, deep identity expertise, and a pragmatic understanding of what security teams actually need—not just in theory, but in the messy reality of modern cloud environments. We covered a lot of ground. Some of it gets philosophical. Some of it gets unsettling. None of it is boring.

A few themes we talk about (without giving the episode away):

- Identity is no longer about people. If you’re still thinking of identity as usernames and passwords, you’re roughly a decade behind. The overwhelming majority of identities in an enterprise belong to machines, services, workloads, keys, tokens—digital “keycards” with no owner attached. And that was before agentic AI entered the picture.
- AI agents behave like employees… just much faster. This creates opportunity. It also creates chaos if you don’t know what your agents can access, what they can do, or how quickly they can do it. The idea of an AI system accidentally wiping out a database is no longer hypothetical.
- Access is becoming the currency of the AI era. The value an agent delivers directly correlates to the access it’s granted. That tension—between capability and control—is now central to modern security strategy.
- Governance frameworks for AI agents aren’t optional. Danny and his team have been working with industry leaders to build a framework that defines what’s acceptable, what’s risky, and how enterprises can put real guardrails around AI systems.
It may be the first time you’ve heard the term “agentic access management,” but it won’t be the last. We also dig into the AI bubble, the trust problem, and why “do your own research” is becoming less meaningful in an AI-shaped world. These tangents got lively, but they all tie back to a core idea: when machines act on our behalf, we need to understand the implications.

Why this episode matters

AI is reshaping cybersecurity faster than any shift we’ve seen in years. But it’s also blurring lines—between humans and machines, autonomy and oversight, innovation and risk. We don’t try to package neat answers. Instead, we raise the questions every security leader should be asking right now: What should agents be allowed to do? Who’s accountable when something goes wrong? How do we maintain trust in systems that move faster than we can supervise? And what does identity even mean in a world where humans are the minority? If you want a thoughtful, candid exploration of these issues—and a look at how one company is thinking about securing the future—give the episode a listen. The full episode is now live on the TechSpective Podcast. Let the conversation challenge your assumptions.
Cybersecurity has a long memory—and an even longer list of recurring frustrations. Chief among them: alert fatigue. For as long as security teams have existed, they’ve been drowning in notifications, dashboards, and blinking red lights. Each new platform promises to separate signal from noise, and yet, years later, analysts are still buried under an avalanche of “critical” alerts that turn out to be anything but.

In the latest episode of the TechSpective Podcast, I sat down with Raghu Nandakumara, VP of Industry Strategy at Illumio, to explore why this problem refuses to die—and whether the rise of agentic AI could finally change the equation.

Raghu describes Illumio as a “breach containment company,” focused on limiting the damage when (not if) attackers break through. Their philosophy is simple but powerful: you can’t prevent every intrusion, but you can limit the blast radius. That means reducing lateral movement risk—the ability for attackers to move freely once they’re inside a network—and building what he calls “true cyber resilience.”

But our conversation quickly veered into a broader question about the human side of the SOC (Security Operations Center). Analysts are expected to triage thousands of alerts per day—one every 40 seconds on average. Most are false alarms. A few are genuine threats. The real challenge isn’t visibility; it’s focus. How do you know which alerts matter when every tool is screaming for your attention?

That’s where AI comes in. And not just any AI—the kind that thinks and acts like a teammate. As we discussed, agentic AI represents a shift from passive pattern recognition to autonomous decision support. Instead of merely identifying potential threats, agentic systems can prioritize them, contextualize them, and even recommend (or execute) response actions. If that sounds like science fiction, it’s not.

As Raghu points out, many of the prescriptive tasks assigned to Level 1 SOC analysts—correlating events, escalating cases, and following playbooks—are ideal for automation. An agentic system doesn’t get tired, doesn’t lose focus, and doesn’t fear missing an alert that might end up on the evening news. It simply does the job, at scale, with consistency.

In the episode, we talked about how this approach might reshape the traditional SOC hierarchy. Rather than replacing humans, AI could specialize in specific “personas” that complement human expertise. You might have one agent trained as a first-tier analyst, another tuned to compliance monitoring, and another to executive-level risk analysis. Together, these agents form a collaborative mesh that filters, enriches, and interprets data before it ever hits a human’s desk.

That’s not just a technology upgrade—it’s an operational shift. It redefines how teams think about detection, response, and ultimately resilience. Because resilience isn’t just about blocking attacks or patching vulnerabilities; it’s about ensuring the business continues to function even when something breaks.

What struck me most about our discussion was how seamlessly this connects back to Illumio’s roots in segmentation. For years, the company has helped organizations visualize and contain movement within their environments. Now, by layering intelligent agents into that framework, they’re taking the next logical step: using automation not just to observe risk, but to act on it.

We also talked about how the traditional boundaries between security disciplines—vulnerability management, threat detection, breach simulation—are beginning to blur. In a future shaped by agentic systems, those silos start to dissolve. Tools, agents, and human operators all contribute to a shared understanding of exposure, risk, and response.

The result could be a more unified, adaptive form of cybersecurity—one built not on isolated alerts, but on intelligent, contextual awareness. That’s the promise of agentic AI. It’s not about replacing human judgment; it’s about amplifying it. And as Raghu notes, the sooner organizations embrace that shift, the closer we get to a world where “alert fatigue” is finally a thing of the past.
Cybersecurity has always been a race against time—but in the era of artificial intelligence, it’s become a race against the machine.

In this episode of the TechSpective Podcast, I sit down with Ankur Singla, founder and CEO of Exaforce, to explore what it really means to build an AI-powered SOC. We talk about the shift from manual detection and response to automation at machine speed, and what happens when AI agents begin to take on specialized roles in security operations—an idea that sounds futuristic, but is already unfolding across the industry.

Singla brings deep experience from years at companies like F5, Juniper, and Cisco, and he’s seen firsthand how much inefficiency still lingers inside security operations. His view is that AI isn’t just an enhancement—it’s a necessity. Attackers are already using automation to scale their efforts, and defending against them requires the same level of speed and precision.

But as we discuss, the rise of AI in cybersecurity isn’t just about capability—it’s about control. What happens when your defensive AI gets hijacked? How do we maintain human oversight in an environment increasingly dominated by machine logic? And at what point does the pursuit of efficiency start to blur the line between autonomy and accountability?

Our conversation stretches from the practical realities of AI-driven threat detection to the philosophical questions of trust, identity, and human relevance in the next generation of cybersecurity. It’s a candid look at both the promise and peril of a world where digital defenders never sleep—and where the same tools that protect us can also be turned against us.

If you’re curious about how security operations will evolve over the next year—and what it really takes to fight machines with machines—this is one you won’t want to miss.
For years, phishing has been the king of cyberattacks. It’s simple, cheap, and it works. Most of us have learned to spot the obvious red flags in email—strange senders, misspelled domains, suspicious links. But the threat has started to evolve. And it’s moving to places where we’re far less prepared.

Think about how you handle email versus text messages. With email, you might let a dozen questionable messages pile up before sorting through them. You scan headers, hover over links, and delete anything that feels off. With text messages, though, the reaction is different. You hear the notification, glance down, and reply almost instantly. That’s human nature. Attackers know it. And they’re exploiting it.

In the latest episode of the TechSpective Podcast, I sat down with Jim Dolce, CEO of Lookout, to talk about what this shift means for cybersecurity. Lookout has spent years protecting mobile devices, but its newest focus takes aim at a very different attack surface: us. Instead of guarding the machine, the challenge now is guarding the human behind it.

We explore why the human layer is such an irresistible target for attackers. Email filters and security gateways have raised the bar, but SMS, messaging apps, voice calls, and even QR codes remain wide open. And unlike email, where skepticism has become second nature, people are far more trusting when a text or call comes through on their phone. That trust—combined with distraction and urgency—makes mobile messaging a perfect delivery channel for scams.

Jim explains how these “omnichannel” attacks are multiplying. Smishing (SMS phishing), vishing (voice phishing), and quishing (QR code phishing) may sound like buzzwords, but they’re real and growing fast. Each relies on the same core weakness: our willingness to believe and respond without hesitation.

Of course, the obvious question is what to do about it. Traditional defenses aren’t built for this world. There’s no email gateway to filter your texts. Caller ID can be spoofed. QR codes can be swapped. It requires a different way of thinking about security—one that accounts for the psychology and behavior of people, not just the vulnerabilities of machines.

That’s where AI enters the picture. Jim and I discuss how large language models can analyze the context and intent of a message, spotting subtle cues that humans might miss. It’s not just about catching malicious links anymore. It’s about recognizing when a message is crafted to spark an emotional response—whether that’s urgency, fear, or curiosity. The idea is to give people an early warning before they engage.

We also touch on the balance between privacy and protection. For any AI system to work, it needs data to learn from. But nobody wants their personal messages sitting in some company’s training set. How that tension gets resolved could make or break adoption of these kinds of solutions.

The bigger takeaway from the conversation is that we’re at an inflection point. Cybersecurity has always evolved alongside attackers, but the ground is shifting. As threats move beyond the inbox and onto the devices we rely on most, defenses have to follow. That means new technologies, yes, but it also means rethinking the role of people in their own security.

I won’t spoil the details of how Lookout is approaching this challenge—you’ll have to listen to the episode for that. But I will say this: the days of thinking of phishing as an “email problem” are over. The frontlines have moved. And if you haven’t thought about what that means for you, your employees, or your business, now is the time.

Listen to the full conversation on the TechSpective Podcast to hear where phishing is headed next—and how security needs to catch up.
Security teams know the pressure all too well: attackers move faster, the attack surface expands every year, and the tools meant to protect enterprises often create more friction than clarity. Traditional SOAR platforms promised efficiency but often delivered complexity, inflexibility, and frustration. Now, a new wave of AI-driven automation is reshaping the conversation—and the stakes couldn’t be higher.

In the latest episode of the TechSpective Podcast, I sat down once again with Ajit Sancheti of CrowdStrike to dig into what this next chapter of automation really looks like. If you’ve listened to Ajit before, you know he has a talent for breaking down complex cybersecurity challenges into practical, human-focused insights. This time, our discussion centered on the intersection of agentic AI and the modern SOC—a space where innovation and risk run side by side.

Why Old SOAR Models Fell Short

We start off with a reality check on traditional SOAR solutions. Many organizations invested heavily, only to find themselves burdened by rigid workflows, brittle integrations, and tools that couldn’t keep up with evolving threats. The issue often revolves around whether security teams can adapt responses in real time without breaking the system. Ajit offers a perspective on why legacy approaches struggled to gain traction and how attackers’ increasing use of AI has made flexibility and speed non-negotiable. That tension—between what defenders need and what their tools can actually deliver—sets the stage for where agentic AI enters the picture.

Agentic AI: Promise and Caution

If generative AI brought us new ways of working with text and language, agentic AI goes a step further: it doesn’t just generate, it acts. That opens doors for SOCs to automate targeted, granular responses at machine speed. But it also introduces a new kind of trust problem. How much autonomy are you comfortable handing over to an AI agent? What happens when it makes the wrong call?

Ajit and I explore the idea of “earned trust”—why human oversight will remain essential and why AI “performance reviews” might become as routine as employee evaluations. It’s a fascinating parallel: treating these agents not just as tools, but as teammates that require accountability.

The Human Factor in Automation

One theme we return to often in our discussion is simplicity. For too long, security technology has required deep expertise just to ask the right question or interpret the right output. That has to change. Future SOC tools need to feel less like command-line puzzles and more like natural conversations—where context, clarifying questions, and intuitive design make security accessible to more people across the organization.

The democratization of security is one of the most exciting trends on the horizon. Smaller companies that never imagined deploying advanced detection or response tools are suddenly finding themselves able to do so—without a staff of experts on hand. Ajit points out how this shift could level the playing field for businesses of all sizes.

Looking Ahead

We don't go so far as to try to predict a perfect AI-secured future. Instead, we talk about what’s realistic over the next 12 to 24 months. Expect more narrowly focused AI agents, more orchestration challenges, and an evolving role for humans in the loop. There will be setbacks, and likely some very public failures, but also tremendous opportunities for organizations willing to adapt.

As always, Ajit brings an optimistic yet grounded perspective. Security is a constant cat-and-mouse game, but this new generation of automation might just give defenders the flexibility and speed they’ve been missing.

Why You Should Listen

This episode is a candid exploration of where automation stands today, where it needs to go, and how organizations can prepare themselves for an AI-driven future without losing sight of human judgment.
If you want a glimpse into the future of SOC operations, and if you’ve ever wondered whether AI can truly lighten the load for overworked security teams, this is a conversation you’ll want to hear.
Artificial intelligence is transforming nearly every industry, and cybersecurity is no exception. On the latest episode of the TechSpective Podcast, I spoke with Kevin Simzer, COO of Trend Micro, about how generative and agentic AI are reshaping development and defense strategies.

Kevin shared why AI should be seen as neither magic nor snake oil, but as a powerful tool that can accelerate innovation while still requiring human expertise. From code generation to enterprise-scale deployment, the opportunities are immense—but so are the risks. That’s why security must be built in from the start, not bolted on after the fact.

One of the most fascinating parts of our discussion centered on digital twin technology. Traditionally used in fields like manufacturing or engineering, digital twins are now emerging as a game-changer for cybersecurity. By creating a virtual replica of an organization’s environment, enterprises can continuously run simulations, red-team scenarios, and experiment with different defenses—without putting live systems at risk. Instead of waiting for quarterly tests, organizations can stress-test their infrastructure constantly, learning and adapting in real time.

As Kevin explained, this shift could fundamentally change how enterprises think about resilience. Combined with the rapid rise of AI-driven agents, digital twins offer a way to stay ahead of evolving threats while navigating the complexity of modern IT environments.

Cybersecurity has always been about anticipating the next move. With AI and digital twins in play, the game board itself is changing—and those who embrace these tools early will be far better prepared for what comes next.
Ransomware has been part of the cybersecurity conversation for years, but if you think it’s yesterday’s problem, think again. The headlines might be dominated by AI these days, yet behind the scenes, ransomware continues to disrupt organizations of every size — from small businesses to multinational enterprises.

In this episode of the TechSpective Podcast, I sat down with Rob Harrison, Senior Vice President of Product Management at Sophos, for a wide-ranging conversation about findings from the recent Sophos State of Ransomware Report, ransomware’s persistent threat, the critical role of Managed Detection and Response (MDR), and how AI is reshaping the security landscape.

Fortunately, it was not a typical “cyber doom” discussion. Rob brings a unique perspective, blending his experience leading Sophos’ MDR business with a career that’s spanned everything from defending national security to protecting critical cloud workloads. Our talk dives into the trends shaping both the technical and human sides of ransomware response — and why some organizations emerge stronger while others don’t survive at all.

Why This Conversation Matters

While ransomware hasn’t disappeared, the tactics have evolved. The game is no longer just about encrypting data and demanding payment. The threat landscape is shifting toward double extortion, data exfiltration, and in some cases, skipping encryption altogether. Rob and I explore how this evolution is forcing organizations to rethink their approach to prevention, detection, and response.

We also discuss how MDR can be a game-changer, particularly for organizations without the resources or expertise to run a 24/7 security operation in-house. It’s not just a question of technology — it’s about having the right people, processes, and visibility to act decisively when every second counts.

But what about AI? It’s easy to assume that “AI in security” is just another buzzword. We unpack how AI — especially in its more agentic and automation-focused forms — is already making a real impact in the SOC. From handling tedious, repetitive tasks to providing richer context for human analysts, AI is becoming a force multiplier for security teams.

The Human Factor

One of the most compelling parts of our conversation focuses on the human cost of ransomware — the stress, burnout, and organizational disruption it leaves behind. Rob offers insights on how to prepare for worst-case scenarios, not just from a systems and data standpoint, but from a leadership and team perspective.

We also touch on the importance of preparation and practice. Just as pilots run flight simulations and first responders drill for emergencies, organizations need to rehearse their incident response. That way, when the heat is on, muscle memory kicks in, roles are clear, and decisions are made with confidence.

Why You Should Listen

If you’re a security leader, business owner, IT professional, or simply someone interested in how technology, strategy, and human decision-making intersect in the fight against ransomware, this episode is for you. We cover:

- The changing tactics of ransomware operators
- How MDR can extend or even replace in-house capabilities
- The role of AI in modern security operations
- Strategies for reducing the human toll of cyber incidents
- The importance of preparation, communication, and trust in response efforts

This is not a doom-and-gloom story. It’s a conversation about resilience, about making smarter security decisions, and about ensuring that when — not if — an incident occurs, your organization is ready.

Listen to the full episode now to hear the full discussion and take away actionable insights you can apply today.
Cybersecurity strategy has evolved over the years—first focusing on keeping the bad guys out, then on detecting and responding to threats faster, and now on cyber resilience and the notion of ensuring business continuity no matter what happens.

In the latest episode of the TechSpective Podcast, Druva Chief Security Officer Yogesh Badwe joined me to talk about why the next phase of security maturity must be built around a single, non-negotiable truth: data is the real crown jewel.

The Shift to Data-Centric Security

Historically, organizations poured resources into protecting networks and identities, often treating data as a secondary concern. “Breaches are inevitable,” Badwe explained. “Detection is a lagging indicator. Organizations need to be ready to respond and recover from bad scenarios—and that starts with the data itself.”

With sprawling hybrid environments, complex supply chains, and AI agents introducing new attack vectors, prevention alone isn’t enough. Security teams need full visibility into what data exists, where it resides, and who can access it.

Backups: From IT Tool to Security Backbone

Most companies think of backups as an IT disaster recovery resource. Badwe argues they must be elevated to a frontline security capability. Recovering from ransomware isn’t as simple as restoring a snapshot—you need to identify clean copies, remove malicious artifacts, and, in some cases, blend files from different points in time to minimize business disruption. “Security recovery is completely different than IT recovery,” he noted.

Attackers know this, too. Modern ransomware campaigns often target backup systems directly to remove a company’s safety net.

Preparing for Emerging Risks

The conversation also touched on two looming challenges:

- Double-extortion ransomware, where attackers both encrypt and exfiltrate data to increase leverage.
- Post-quantum cryptography, and the “harvest now, decrypt later” risk that stolen encrypted data could be cracked in the future.

Organizations should begin mapping their encryption landscape now to prepare for a PQC transition within the next few years.

The Visibility and Classification Challenge

Centralizing all corporate data is unrealistic. Instead, companies need tools that can provide visibility where the data lives—whether that’s in SaaS apps, multi-cloud environments, or third-party systems. Badwe sees automated classification as essential, not just for prevention but for rapid incident response. Knowing which 20% of your data is truly sensitive allows you to focus security controls where they matter most.

AI’s Real Role

AI in security is often overhyped, but Badwe sees practical value in tier-one SOC triage, automating runbooks, and enhancing secure software development processes. AI can’t replace sound security architecture, but it can accelerate analysis and decision-making.

Looking Ahead

As AI agents and integrated corporate search platforms become more common, traditional authentication and authorization models will be tested. Security leaders will need to rethink access controls for human-to-agent and agent-to-agent interactions.

For Badwe, resilience isn’t just about bouncing back—it’s about making data the centerpiece of prevention, detection, response, and recovery. Because in the end, it’s not the network or the identity we’re protecting—it’s the information that keeps the business running.

Check out the full podcast for more.
When it comes to cybersecurity, it’s easy to fall into the trap of thinking in binaries—good guys and bad guys, black hats and white hats, defenders and attackers. But the reality is far more complex, especially in an age where artificial intelligence is changing the rules for everyone, whether they like it or not.

In the latest episode of the TechSpective Podcast, I sat down with Myke Lyons, CISO of Cribl, for a conversation that spans a lot of ground. And I mean a lot of ground. From retail fraud and social engineering to ransomware economics and the future of AI-powered search, we explore how cybercriminals are using the same tools defenders have access to—but with very different goals in mind.

We kick things off by unpacking Cribl’s unique role in the world of IT and security telemetry. At one point, I draw the comparison of Cribl as a sort of Rosetta Stone for log data—helping organizations normalize, route, and optimize data flows to the right places for the right reasons. Myke shares how this kind of architectural flexibility isn't just convenient—it’s becoming essential in a world where data is growing at breakneck speed and attackers are using AI to move just as fast.

Then we shift into a broader discussion about why retail—especially during high-stakes periods like Prime Week or Black Friday—is such a tempting target for attackers. The emotional nature of shopping, the scale of operations, and the deeply trusted brand names all make retail a ripe hunting ground for bad actors. But it’s not just old-school fraud or phishing anymore. We get into how AI is helping attackers spoof websites, impersonate brands, and even fake their way through job interviews to infiltrate organizations from the inside.

One particularly eye-opening thread: the evolving ransomware playbook. Threat actors are now using AI to research their victims more thoroughly—tailoring ransom demands based on insurance coverage, revenue cycles, and organizational pain points. It’s strategic, it’s efficient, and yes, it’s unsettling.

But this conversation isn’t just doom and gloom. We also talk about how security teams can flip the script by using AI themselves—developing muscle memory with new tools, leveraging prompt engineering, and building infrastructure that adapts in real time. Myke makes the case for experimentation, curiosity, and staying a step ahead—not just with tech, but with mindset.

If you’re a security leader, a practitioner, or even just a curious listener trying to make sense of this rapidly evolving landscape, you’ll find a lot to chew on here. And if you think the line between helpful AI assistant and risky attack vector is starting to blur… you’re not alone.

Listen to the full episode now and hear why your AI should be more like JARVIS—and what happens when the bad guys figure that out first.
The ever-expanding world of cybersecurity is full of big promises, bold claims, and—if we’re being honest—a lot of noise. As security leaders face mounting pressure to do more with less, it’s no longer enough to simply buy the newest tool or chase the latest trend. What organizations really need is a trusted advisor—someone who knows the landscape, understands the stakes, and can help make sense of it all.

That’s exactly the theme of the latest episode of the TechSpective Podcast. John Hurley, Chief Revenue Officer at Optiv, joins me for a wide-ranging, candid discussion of the real challenges facing CISOs today: managing tool sprawl, justifying investments, cutting through cybersecurity jargon, and understanding where artificial intelligence fits into the modern security stack.

At the heart of our conversation is Optiv’s unique approach to helping organizations rationalize their security environments. John shares how Optiv leverages a decade’s worth of data and experience to guide clients through the decision-making process—moving from a transactional vendor model to a genuinely consultative partnership. The analogy I landed on is that Optiv positions itself as a “pharmacist” in the cybersecurity ecosystem—helping organizations make sense of countless overlapping solutions and potential “side effects.”

The episode also addresses some timely questions: What does it mean to be a true advisor in an industry obsessed with buzzwords? How can AI be leveraged to bring real value, rather than just more noise? And what steps should organizations take when rethinking their security architecture in the face of continuous change?

Whether you’re a security leader looking for fresh perspective, a vendor navigating a crowded marketplace, or just a tech enthusiast fascinated by the challenges of enterprise security, this episode promises plenty of food for thought. Curious?
Give it a listen (or watch it on YouTube)—and hear firsthand how the conversation is evolving from selling tools to solving real business problems.
Cloud security is one of the most talked-about issues in cybersecurity today—but are we talking about the right things?

In the latest episode of the TechSpective Podcast, I sat down with Cristian Rodriguez, Field CTO for the Americas at CrowdStrike, to explore the evolving landscape of cloud threats and how defenders need to adapt. With over a decade at CrowdStrike and more than 20 years in the cybersecurity space, Cristian brings a seasoned perspective on how adversaries have shifted their tactics—and how security teams can respond effectively.

The Comfort Trap of Posture Management

A major theme of our conversation is the current overreliance on cloud security posture management (CSPM). While CSPM tools play a critical role in identifying misconfigurations, compliance gaps, and other baseline security issues, Cristian points out that they are inherently limited by their snapshot-in-time nature. They’re valuable for hygiene, but they don’t give you a dynamic view of what’s happening in your environment right now.

And that’s a problem—because attackers aren’t waiting for your next scan. They’re actively probing, logging in with stolen credentials, and moving laterally through cloud environments in ways that traditional security tooling often fails to detect.

Living Off the Land, Evolved for the Cloud

We also touch on a concept many security professionals know well: “living off the land.” This is when attackers use legitimate tools and processes already present in an environment to evade detection. What’s changing, Cristian explains, is how these techniques are now being used within cloud-native services—hiding in plain sight within container workloads, serverless functions, and IAM policies.

This shift demands a new level of runtime visibility. You can’t just know what resources exist and how they’re configured—you need to understand who is accessing them, when, from where, and why. Behavioral analysis, real-time anomaly detection, and identity-based insights are becoming table stakes in defending modern cloud architectures.

AI as a Force Multiplier for the SOC

Naturally, no conversation about modern cybersecurity would be complete without discussing AI. Cristian shares how CrowdStrike’s AI assistant, Charlotte, is changing the game for SOC analysts by helping them triage incidents faster, guide investigations, and even orchestrate responses across multiple systems using natural language commands.

But AI isn’t just about automation—it’s about augmentation. AI doesn’t replace the analyst; it frees them up to focus on what really matters. In a world where adversaries can break out and cause damage in under an hour, that time savings is crucial.

Preparing for What’s Next

We also touch on something that has become a focus of mine, and one of the biggest questions for the future of AI: What happens when the next generation of cybersecurity professionals enters the field having never worked without AI? If level-one SOC roles are increasingly automated, how do tomorrow’s defenders gain the experience needed to make critical decisions in high-stakes situations?

It’s a thought-provoking discussion that blends current challenges with a forward-looking lens on where the industry is headed—and what that means for the people defending it.

Tune In to Learn More

If you're a security leader, cloud architect, SOC analyst, or anyone trying to keep pace with the changing threat landscape, this is a must-listen episode. We explore not just the threats themselves, but the mindset shift required to defend against them—and the technologies that can help tip the scales in our favor.

Listen now on your favorite podcast platform or watch the full conversation on YouTube. Have thoughts on this episode or topics you'd like to see covered in future discussions? Let me know on LinkedIn—I’d love to hear what’s on your mind.
Artificial intelligence may be the headline, but data is the story.

In this episode of the TechSpective Podcast, I sat down with Todd Moore, VP of Data Security at Thales, to unpack the newly released 2025 Thales Data Threat Report. Our conversation explored the increasingly complicated intersection of data, AI, and cybersecurity—and why enterprises may be sprinting into transformation before securing their foundation. Spoiler: It’s all about the data.

GenAI Is Booming—And So Are the Risks

According to the report, one-third of organizations are already in the integration or transformation phase of GenAI adoption. And while that sounds like progress, Todd and I both agreed it mirrors past tech hype cycles—cloud, Wi-Fi, mobile—where enthusiasm far outpaced security planning.

“The horse has left the barn,” Todd said. And that urgency to keep up with AI adoption is creating a familiar blind spot: data security.

In fact, the fast-evolving GenAI ecosystem ranked as the top concern among respondents (69%), followed closely by risks to data integrity (64%) and trustworthiness (57%). Enterprises are waking up to the reality that AI isn’t just a new technology—it’s a new attack surface.

Shadow AI, Prompt Injection, and Data Leakage

One recurring theme from our conversation was the rise of "shadow AI"—where employees use public tools like ChatGPT without guardrails. While it might boost productivity, it also introduces serious risk if sensitive internal data gets fed into public models.

We talk about how many organizations are adopting internal LLMs to mitigate this, but we acknowledge that enforcement is tough. The reality is that just like with shadow IT, if you don’t give people an approved tool that meets their needs, they’ll find workarounds.

That’s where security posture management becomes crucial. Visibility into who’s using what data—and where it’s going—is no longer optional.

Data Classification: Still a Work in Progress

You can’t protect what you don’t know you have. Yet the report found that only one-third of organizations can fully classify their data, while 61% are juggling five or more data discovery tools. The inconsistency leads to fragmented policies, conflicting controls, and ultimately, more exposure.

Todd and I agreed: classification has to be automated and context-aware. AI can help here—ironically—by understanding not just what a file says, but what it means based on surrounding data. Still, as Todd pointed out, AI is also the biggest creator of new data. “It’s a feedback loop,” he said. “AI is creating more unstructured data than ever before, which just makes the classification challenge even bigger.”

Quantum Computing Is Closer Than You Think

Another headline from the report—and our conversation—was the growing urgency around post-quantum cryptography (PQC). The threat of “harvest now, decrypt later” is very real, especially for regulated industries that store data long-term. Thales found that 63% of organizations are already concerned about future decryption of today’s data, and many are beginning to prototype PQC solutions.

Todd emphasized that we now have a deadline: NIST and other global bodies are calling for a deprecation of classical algorithms by 2030. “This isn’t Y2K,” Todd warned. “We don’t know when Q-day will arrive. But when it does, if you haven’t prepared, it’s already too late.”

Check It Out

This episode dives deep into AI, PQC, classification, and the cultural challenges of balancing innovation with risk. If you're a CISO, security leader, or just trying to make sense of the data security landscape in 2025, you won’t want to miss it.