The Security Strategist

Author: EM360Tech
Description

Stay ahead of cyberthreats with expert insights and practical security guidance.

The podcast is led by an ensemble cast of industry thought leaders offering in-depth analysis and practical advice to fortify your organization's defenses.
187 Episodes
Scaling technology globally is one of the most complex challenges for Chief Technology Officers and enterprise leaders. It requires balancing infrastructure, operations, regulatory compliance, and user trust, all while delivering systems that are reliable, secure, and effective across diverse regions.

In this episode of Security Strategist, host Trisha Pillay explores these challenges with Grant McWilliam, Chief Technology Officer at Aura. They discuss how enterprises can overcome regulatory, infrastructure, and operational challenges while delivering trusted, reliable systems globally.

Understanding Regulatory Compliance in Global Scaling

Global expansion introduces different regulatory landscapes, from data privacy laws to communications standards. While some see these as hurdles, they can become strategic advantages. As Grant says, “Regulatory challenges can be opportunities.” He explains that building a global framework with room for local adaptation, “design globally, implement locally,” ensures compliance while maintaining operational flexibility.

Building Resilient Technology Infrastructure

Reliable technology infrastructure is just as important for platforms operating across regions with varying telecom networks, mapping systems, and technical capabilities. In mission-critical contexts such as emergency response, reliability is non-negotiable, and technology should never limit service. Redundancy, failovers, and multi-region deployments keep platforms responsive under pressure.

Operational Excellence and Trust

Grant notes that operational pressures grow as organisations scale. Teams need to act efficiently while respecting local regulations and cultural contexts. He emphasises: “Trust is essential in emergency response, and emergency response must prioritise user needs.” By embedding processes as backups to the backups and adapting technology to local conditions, organisations build resilience and maintain user confidence.
He adds, “Collaboration enhances operational efficiency.”

Key Principles for Scaling Cybersecurity Globally

Global standards and local adaptation: Establish frameworks that scale but allow local execution.
Reliability and trust: Ensure mission-critical systems function under any circumstances.
Cultural and operational alignment: Integrate local knowledge and collaboration to make technology sustainable and effective.

Scaling technology globally requires balancing cybersecurity, infrastructure, regulatory compliance, and operational agility. In this episode of Security Strategist, the discussion highlights that success comes from combining technical excellence with strategic empathy, ensuring platforms are trusted, resilient, and effective for every user, in every region.

Takeaways

Scaling technology globally requires navigating regulatory complexity and...
In this final episode with N-able, the guests tackle a pressing challenge for today’s MSPs: how to transform security operations into genuine cyber resilience.

In this episode of The Security Strategist podcast, Jim Waggoner, VP of Product Management at N-able, and Lewis Pope, CISSP and N-able Head Nerd, sit down with host Jonathan Care, Lead Analyst at KuppingerCole. MSPs have typically focused on technology layers, like backups, EDR, and MDR. However, as both Waggoner and Pope point out during the conversation, achieving resilience requires a bigger change, one of operations, culture, and strategy.

Cyber Resilience is Being Prepared for Any Attack

When asked about redefining resilience, Pope underscores the need to move away from the classic technician mindset. He explains that MSPs should adopt a business-focused approach: “You have to drop your technician glasses and put on your business glasses for a lot of these matters.”

Why is this important? MSPs often understand their clients’ workflows better than the clients themselves. This puts MSPs in a powerful position, but they must look both inward and outward, Pope explains. He emphasises the need for internal threat modelling, risk registers, and long-term business planning with clients: “You need to have that seat there so you can help them, guide them, and put your fingers on the scales of which direction they plan to take.”

Supporting this shift, Waggoner cites a tabletop exercise performed internally at N-able. Imagine “you just got a call that someone believes that they've been compromised by ransomware. What do you do?” The exercise didn’t focus on antivirus tools. Instead, it uncovered operational blind spots: who to call, what steps to take, and how to keep the business running.
The key lesson is that resilience is not about preventing every attack; it's about being prepared for the one that will happen.

Also Watch: How Can MSPs Stay Competitive with Managed Detection and Response (MDR)?

‘Automation Should Strengthen Security Teams, Not Replace Them’

AI and automation are all the rage in the cyber technology industry at the moment. While AI offers speed and scale, Waggoner warns it can lead to serious overreactions if not managed carefully: “If you're seeing something that looks suspicious and the automated response is to cut off these services, that can be great.” The only check on a rogue AI-and-automation situation, he adds, is “the human.” The VP of Product Management stresses the importance of safeguards such as manual confirmation prompts, human-initiated rollbacks, and analyst reviews. Ultimately, automation should strengthen security teams, not replace them.

“You treat anything and everything that it does as something that a highly clever intern brought to you, but you still have to double-check it,” Pope adds. The Head Nerd also emphasises a key detail often overlooked in AI discussions: precision. MSPs need to distinguish between LLMs, machine learning,...
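The safeguard Waggoner describes, automation that proposes a response while a human confirms anything destructive, can be sketched in a few lines. This is a minimal illustration under assumed names; the action labels and review queue are hypothetical, not N-able's product.

```python
# Human-in-the-loop automation: automated detections may execute benign
# actions immediately, but destructive responses wait for analyst approval.

DESTRUCTIVE = {"isolate_host", "disable_account", "cut_service"}

def execute(action, confirmed_by=None):
    """Run an automated response; gate destructive ones behind a human."""
    if action in DESTRUCTIVE and confirmed_by is None:
        return ("queued_for_review", action)   # an analyst must approve first
    return ("executed", action)

# Benign triage runs straight through; isolation waits for a person.
assert execute("collect_triage_logs") == ("executed", "collect_triage_logs")
assert execute("isolate_host") == ("queued_for_review", "isolate_host")
assert execute("isolate_host", confirmed_by="analyst:lee") == ("executed", "isolate_host")
```

The design choice mirrors Pope's "clever intern" rule: the automation does the work, but a human still double-checks anything with real blast radius.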
In a recent episode of The Security Strategist Podcast, host Richard Stiennon, Chief Research Analyst at IT-Harvest, sat down with John Amaral, Co-Founder and CTO of Root. They discussed how automation, AI agents, and a new approach called “Shift Out” are changing vulnerability management. Amaral, who has decades of experience in security leadership, argues for moving beyond the industry’s traditional “shift left” concept. He believes organisations should focus on systems that prioritise scale, speed, and effective fixes.

Why Shift Left Failed

Amaral says the “shift left” promise never came true. Even with positive intentions, sending vulnerability lists back to developers created overloaded backlogs and slow remediation times, resulting in frustration for everyone involved. Engineers are experts in their application code, but not in the vast and complex open-source libraries their software relies on. When security scanners present hundreds of CVEs, “roadmap wins out over security,” Amaral explains. Often, maintainers only patch newer versions, leaving production teams stuck with outdated releases and no safe upgrade options.

Read: What is DevSecOps? and Why It Matters

Shift Out is Root’s solution to this flawed workflow. Instead of adding to developers' workloads, organisations can assign the entire fix process, including patch creation, testing, and delivery, to an automated system led by domain experts. “Don’t give it to developers, give it to us,” the Root co-founder states. “We’ll take it.”

A New Standard for Open-Source Maintenance

When discussing the idea of an external system modifying customer code, Amaral clarifies that Root doesn’t alter first-party code. Instead, they fix the open-source libraries that customers use, libraries that are already out of their control. Amaral points out that the current industry practice of blindly upgrading to new maintainer versions is much riskier.
With the rise of supply-chain attacks, and maintainers often unable to apply fixes to older versions, companies increasingly face a troubling maintenance gap.

To build trust and transparency, Root publicly shares all of its backported patches in a GitHub repository. This allows maintainers, independent developers, and the broader open-source community to examine, use, or build upon Root’s work. “If people want to use them, they can,” Amaral states. “It’s our responsibility to make that available.”

Amaral’s message for technology leaders is that as AI changes the software landscape, organisations should adopt a remediation-first mindset. They should begin development with secure, pre-fixed libraries instead of rushing to address CVEs later. With AI-driven remediation now feasible at scale, maintaining secure software should become a standard practice, not an urgent afterthought.

Takeaways

AI is revolutionising vulnerability management.
Shift Out is...
As AI tools proliferate inside enterprises, often faster than security teams can track or govern them, a new class of risks is emerging. In this episode of the Security Strategist Podcast, IT-Harvest Chief Research Analyst Richard Stiennon sits down with Art Gilliland, CEO of Delinea, to discuss the explosive adoption of AI, the rise of shadow AI, and why identity-centric governance is becoming an urgent priority. Gilliland emphasises the importance of managing AI risks, particularly around machine identities, and the need for intelligent authorization systems to strengthen security operations. When Gilliland joined Delinea, he believed in focusing on identity, along with the policies that govern it. “In the cloud, you share responsibility. In SaaS, you delegate. But you always own your users—human or machine—and your data.”

AI Reduced Inbound Call Volume by 60%

Delinea has been integrating AI internally for years. One of the most transformative outcomes was the launch of Delinea Expert, an AI assistant built directly into the product interface. Users can upload screenshots, logs, or questions and receive precise guidance on how to fix or configure the product. It acts, Gilliland says, like a support person on your shoulder. “We shipped it about a year ago, and it reduced our inbound call volume by 60%.”

This dramatic result mirrors what Gilliland sees across customers: rapid adoption, often through quick toy implementations that still deliver massive value.

AI Prompts a Wider Exposure Surface

But with the rapid adoption of AI comes a wider exposure surface. Business teams are driving AI, CEOs are demanding it, and developers are already using it. However, Gilliland believes security is still trying to catch up, again. “There's this huge gap between the consumption and use of AI and a company’s ability to get in front of it.”

This gap is what many call shadow AI. Unlike historical shadow IT, this version is often approved; business leaders want it.
But they lack the visibility, policy, and governance structures to ensure it’s secure. Delinea’s recent survey found that “95 per cent of customers are already using or planning to use AI,” Gilliland points out, while just “40 per cent have any governance in place.”

Gilliland warns that the dynamic resembles earlier waves of laptops, mobile, Wi-Fi, and cloud, but has accelerated dramatically. “It’s inevitably going to be used because it’s so powerful. You can’t hold it back—and you wouldn’t want to.”

Use AI to Manage AI

This, he argues, is the future: using AI to manage AI. “AI behaves differently than traditional machine identities. It can make decisions. It has intent.” Because machine-to-machine connections now operate quickly and with shifting intent, organisations need systems capable of evaluating every single request in real time. Traditional machine identities are predictable, like a robot performing the same task endlessly. Attacks happened when credentials were stolen and misused by humans with intent, as in the Salesforce/Drift breach. “AI is going to have a machine connection, which tends to be overprivileged. But it can also make decisions on its own.”

Companies must not only build governance and inventory systems and manage credentials; they must evolve toward understanding intent. “AI is not static. There’s intent behind the connection. Your controls must be able to interpret that intent. That’s where AI is taking us,” the Delinea CEO tells Stiennon.

Takeaways

AI is a hot topic in cybersecurity today.
There is a significant gap between AI
In a recent episode of The Security Strategist podcast, Jim Waggoner, VP of Product Strategy at N-able, and Joe Ferla, one of N-able’s Head Nerds, speak to host Chris Steffen, Vice President of Research at Enterprise Management Associates (EMA). They address one of cybersecurity’s biggest misconceptions: while organizations might be getting better at spotting threats, most still struggle to respond to them in real time. “We live in a time where the threat landscape is changing instantly,” Steffen said. With threat actors speeding up their tactics, Waggoner and Ferla insist that the only way forward is constant reassessment.

When the ‘Response Action’ Doesn’t Deliver

Steffen began by asking the IT leaders about a key challenge faced by many CISOs. The industry often talks about “EDR, MDR, XDR,” yet the promise of real-time response frequently remains unfulfilled. Ferla identified a major problem here: the wrong people are making purchasing decisions. “In the small to mid market, I often see decision-makers who aren’t security experts, and they’re the ones driving the purchasing,” he explained. These executives “trust that the product works as they want, but they don’t know what they really need in the field,” which leads organizations to buy advanced tools they cannot actually use.

Even more troubling, Ferla noted that many customers request capabilities that no MDR could or should handle. “I have people at N-able coming to me thinking that we can manage backups as a response. And that's simply not possible.”

Waggoner, who spent years developing incident response tools, sees another side of the issue. Vendors often downplay the “response” aspect. “When it came to the R,” he said, “it was a little R.” True MDR has to go well beyond automated blocking. “Can we disable accounts?
Can we prevent ransomware from affecting other systems or stop lateral movement?”

Also Read: N-able Annual Threat Report 2025

Where AI and Cybersecurity Go Next

When asked about the future of detection and response, Ferla shed light on the increasing complexity. He remembered running an MSP alone just a few years ago. “Nowadays, I could not come anywhere near close to doing this,” he said. “It's impossible.”

Waggoner stated that AI will shape the next phase, not just for attackers but also for defenders who face ongoing staffing shortages. Threat actors are already using AI to change tactics and automate reconnaissance. Defenders need to keep up: “Look at companies like us, using AI for detection models and for responses to address the people shortage.” Waggoner encouraged IT decision-makers to find ways AI can strengthen their security, not complicate it. “Get ahead of it. See how you can truly use AI's capabilities to better protect yourself,” he stated.

Takeaways

Detection and response tools are evolving rapidly.
Organizations often have unrealistic expectations of their security capabilities.
Continuous review of security strategies is essential.
MSPs play a crucial role in enhancing security for small to mid-sized businesses.
Proactive measures are necessary to stay ahead of threats.
AI is transforming the cybersecurity...
Modern enterprises face a growing challenge in managing thousands of devices, applications, and identities across increasingly complex IT environments. Unmanaged assets, shadow IT, and inconsistent processes can quietly create security gaps that expose organisations to significant risk.

In this episode of Security Strategist, Syed Ali, Chief Executive Officer of EZO, joins host Chris Steffen to discuss how IT asset management (ITAM) and deterministic automation provide the visibility and control organizations need to secure their digital landscapes. “There are risks lying around in the organization,” Ali explains. “A good ITAM solution helps identify them and forms the foundation for maturing IT operations.” His insights are drawn from years of experience as both an IT practitioner and an entrepreneur, offering practical lessons for IT leaders and security professionals alike.

Strengthening Cybersecurity and Risk Management with ITAM

Effective ITAM solutions are more than just inventory tools; they are central to reducing cybersecurity risk. By automating asset discovery, monitoring, and reporting, organizations gain accurate, up-to-date visibility into every device, application, and identity across their networks. “Automation runs in IT; it has always run in IT,” Ali notes. Deterministic automation ensures processes are consistent and predictable, allowing IT teams to spot vulnerabilities, reduce the likelihood of human error, and respond quickly to emerging threats. This approach also makes data reliable for decision-making, supporting procurement, patch management, and risk mitigation strategies.

Unmanaged assets, he warns, can “come back to bite us,” particularly when organizations lack visibility into endpoints, applications, and shadow identities.
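The deterministic asset discovery Ali describes can be illustrated with a toy reconciliation pass: compare what the network actually reports against the managed inventory, so unmanaged ("shadow") assets surface as an explicit list rather than a surprise. The data and field names below are illustrative assumptions, not EZO's implementation.

```python
# Deterministic reconciliation: same inputs always yield the same report,
# which is what makes the output trustworthy for decision-making.

def reconcile(discovered, managed):
    """Diff network-discovered assets against the ITAM inventory."""
    discovered_ids = {d["id"] for d in discovered}
    managed_ids = set(managed)
    return {
        "unmanaged": sorted(discovered_ids - managed_ids),  # on the network, unknown to ITAM
        "stale": sorted(managed_ids - discovered_ids),      # in ITAM, not seen on the network
    }

discovered = [{"id": "laptop-042"}, {"id": "printer-07"}, {"id": "vm-metrics-3"}]
managed = ["laptop-042", "printer-07", "db-server-1"]

result = reconcile(discovered, managed)
assert result["unmanaged"] == ["vm-metrics-3"]   # a shadow VM to investigate
assert result["stale"] == ["db-server-1"]        # inventory record to verify
```

Sorting the outputs keeps the report stable from run to run, the predictability the episode contrasts with non-deterministic AI behaviour.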
Implementing ITAM proactively rather than reactively is crucial for organizations that want to minimize risk rather than simply meet compliance requirements.

Future ITAM Trends

Looking forward, agentic AI and identity-aware assets are shaping the future of ITAM. While AI introduces powerful capabilities, keeping it separate from deterministic automation ensures organizations maintain control while enhancing visibility and security. Investing in effective ITAM and automation is an enduring strategy for securing assets, reducing risk, and enabling informed decision-making. As Syed explains, “The system is meant to work for you, but that only happens if there’s discipline and awareness in identifying risks. The danger of not having a solid ITAM in place is very real. I also understand that an understaffed IT team, constantly reacting to issues, can’t always prioritise these risks effectively. Unfortunately, that’s the reality we’re dealing with.”

To learn more about effective ITAM solutions, visit EZO.

For more insights, follow EZO:
X: @ezosolutions
Instagram: @ezo.solutions
Facebook: https://www.facebook.com/EZOsolutions/
LinkedIn:
“People love the idea that an agent can go out, learn how to do something, and just do it,” says Jeff Hickman, Head of Customer Engineering at Ory. “But that means we need to rethink authorization from the ground up. It’s not just about who can log in; it’s about who can act, on whose behalf, and under what circumstances.”

In the latest episode of The Security Strategist Podcast, Hickman speaks to host Richard Stiennon, Chief Research Analyst at IT-Harvest. They discuss a pressing challenge for businesses adopting AI: managing permissions and identity as autonomous agents start making their own decisions. They explore the implications of AI agents acting autonomously, the need for fine-grained authorization, and the importance of human oversight. The conversation also touches on the skills required for effective management of AI permissions and the key concerns for CISOs in this rapidly changing environment.

The fear that AI agents can go rogue or exceed their bounds is very real. They are not just tools anymore; they can now negotiate over data, trigger actions, and even process payments. Without the right authorisation model, Hickman warns, organizations will encounter both security gaps and operational chaos.

Also Watch: Is Your CIAM Ready for Web-Scale and Agentic AI? Why Legacy Identity Can't Secure Agentic AI

The Human Element is Vital to Prevent an AI Agent from Going Wild

Traditional IAM frameworks aren’t designed for agents that think, adapt, and scale quickly. Anticipating a major shift, Hickman says, “It’s not just about role-based access anymore. We’re moving toward relationship-based authorization—models that understand context, identity, and intent among users, agents, and systems.” Citing Google’s Zanzibar model, the Ory customer engineering lead says it is a starting point for this new era. Unlike static roles, it describes flexible, one-to-one relationships between people, tools, and AI systems.
This flexibility will be crucial as organizations deploy millions of autonomous agents operating under various levels of trust. But technology alone won’t solve the issue. Hickman stresses the importance of the human element: “We need humans to define the initial set of permissions. The person who creates an agent should be able to establish the boundaries—in plain language, if possible. The AI should understand those instructions as a core part of its operating model.”

This leads to a multi-pronged identity system where humans, agents, and services all verify authorization on behalf of the user before any action takes place, ensuring accountability even when AI acts autonomously.

The New Organisational Skill Stack for AI Security

As AI systems grow more sophisticated, the people managing them must also evolve. Hickman outlines a three-part skill structure every organization should develop:

Identity and Access Architects: Define how agents authenticate, represent and act on behalf of users, and scale securely.
AI Behaviour Analysts: A new role bridging technical and business insight, understanding how LLMs make decisions and how to align that behaviour with enterprise goals.
Business...
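The relationship-based authorization Hickman points to can be sketched minimally in the spirit of Google's Zanzibar: access is a question about relationship tuples (object, relation, subject) rather than static roles, and an agent inherits only what the principal it acts for is allowed to do. All names here are hypothetical; real systems add namespaces, userset rewrites, and consistency guarantees.

```python
# Toy Zanzibar-style relationship store with delegation for AI agents.

class RelationStore:
    def __init__(self):
        self.tuples = set()   # each tuple: (object, relation, subject)

    def write(self, obj, relation, subject):
        self.tuples.add((obj, relation, subject))

    def check(self, obj, relation, subject):
        # Direct relationship grants access.
        if (obj, relation, subject) in self.tuples:
            return True
        # Delegation: an agent acting for a principal gets only that
        # principal's permissions, never more.
        for (o, r, principal) in self.tuples:
            if o == subject and r == "acts_for" and self.check(obj, relation, principal):
                return True
        return False

store = RelationStore()
store.write("doc:q3-report", "viewer", "user:alice")
store.write("agent:summarizer", "acts_for", "user:alice")

assert store.check("doc:q3-report", "viewer", "user:alice")        # direct grant
assert store.check("doc:q3-report", "viewer", "agent:summarizer")  # via delegation
assert not store.check("doc:hr-file", "viewer", "agent:summarizer")  # alice can't see it either
```

The last check shows the accountability property the episode describes: the agent's reach is bounded by the human relationship it was created under.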
With more and more organisations adopting AI as part of their operations, a new layer of data risk has begun to emerge. In a recent episode of The Security Strategist Podcast, guest Gidi Cohen, CEO and Co-Founder of Bonfy.AI, sat down with host Richard Stiennon, Chief Research Analyst at IT-Harvest. They discussed why traditional data loss prevention (DLP) systems fail. Cohen stressed that understanding data context is now crucial for securing AI-driven enterprises.

What Happens When “Trusted” AI Tools Become the Paramount Risk

The rise of generative AI, from chatbots to embedded assistants in SaaS platforms, has created a complex web of data interactions that many organisations do not fully grasp. Cohen argues that this new reality has made legacy DLP technologies completely irrelevant. “Even before generative AI, DLP never really worked well,” he told Stiennon. “It relied on static classification and outdated detection models that created noise and false positives. Now, with dynamic content generated and shared instantly — and humans often out of the loop — those tools can’t keep up.”

While “shadow AI” applications have gained much attention, Cohen believes the larger threat lies in the trusted tools organisations already use. “We’re using Microsoft 365, Google Workspace, Salesforce — all of which now embed AI models,” the Bonfy CEO explains. “They process vast amounts of sensitive data every day. Yet most companies have no control or visibility over how that data is accessed, transformed, or shared.”

This lack of visibility creates a perfect storm for data exposure. “You might use an LLM to summarise a customer meeting, which is fine,” Cohen says. “But if that summary is later shared with the wrong client or synced with another app, you’ve just leaked confidential information — and no one will even notice.” The main issue, he adds, isn’t whether AI vendors misuse data. “The model itself isn’t the main problem.
It’s what happens afterwards — how the data and outputs move through the organisation.”

How to Create a New Model for AI-Aware Data Security

Cohen’s answer to this growing complexity is what he calls a context-driven, multichannel architecture, one that treats data protection as an ecosystem rather than a single checkpoint. “The flows are too complex for simple guardrails,” he explains. “You can’t just block uploads of credit card numbers and call it a day. You need to understand the context — who’s sharing the information, through what channel, for what purpose, and whether it’s leaving the organisation.”

Bonfy’s approach looks across multiple communication layers, from email and file sharing to APIs, AI agents, and web traffic, to create a complete view of how information moves. Cohen says this is essential for spotting risky behaviour, whether it comes from a careless employee or an autonomous AI agent working in the background. As organisations start using multimodal AI, incorporating text, images, audio, and video, this end-to-end visibility becomes even more important. Browser extensions or regex-based filters, he notes, simply won’t catch everything. “An AI agent isn’t using a browser. It’s running somewhere on your network, processing sensitive data on its own. You need a...
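Cohen's context-driven idea, a verdict that weighs who is sharing, over which channel, and whether the data leaves the organisation, rather than a single pattern match, might look like this in miniature. Every field, label, and threshold below is an assumption for illustration, not Bonfy's product logic.

```python
# Context-driven DLP: the decision combines classification, channel, and
# destination instead of blocking on content patterns alone.

INTERNAL_DOMAIN = "@example.com"           # hypothetical corporate domain

def dlp_verdict(event):
    """Return 'allow', 'review', or 'block' for a data-movement event."""
    sensitive = event["classification"] in {"confidential", "restricted"}
    external = not event["recipient"].endswith(INTERNAL_DOMAIN)
    unmonitored = event["channel"] in {"ai_agent", "api"}

    if sensitive and external:
        return "block"                     # confidential data leaving the org
    if sensitive and unmonitored:
        return "review"                    # route to a human, don't silently allow
    return "allow"

# The episode's scenario: an LLM-generated meeting summary synced outward.
meeting_summary = {
    "classification": "confidential",
    "channel": "email",
    "recipient": "partner@other-corp.com",
}
assert dlp_verdict(meeting_summary) == "block"
```

A regex-only filter would pass this summary, since it contains no credit card numbers; only the recipient's domain and the document's classification reveal the leak.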
In this episode of The Security Strategist podcast, host Jonathan Care, Lead Analyst at KuppingerCole Analysts, speaks with Sudhir Reddy, Chief Technology Officer (CTO) of Esper, about how to build trust in ‘Zero Trust.’ They explore the paradox at the heart of Zero Trust systems: human trust is essential for the system to function effectively. Reddy emphasises the need for intelligent friction in security measures, balancing security against business operations. The conversation also highlights the importance of understanding user needs and building trust within security systems to ensure effective implementation of Zero Trust strategies.

How to Build Trust in a "Zero Trust" World?

“Security should be a seatbelt, not a straitjacket,” the Esper CTO says, describing the nature of Zero Trust in cybersecurity. For Reddy, Zero Trust isn’t just about “trust no one.” It’s about verifying everything while still allowing people to do their work. “Zero Trust is really about verification,” he explains. “But the paradox is that it’s built to create trust among the people using it.”

As systems, devices, and AI tools grow, security can’t just mean adding more barriers. “The number of people interacting with systems has increased a lot,” Reddy adds. “But if the system doesn’t support the business, people will find a way around it.” That, he says, is the risk: extremely rigid security can defeat its own purpose.

From “Friction” to “Intelligent Friction”

The Esper CTO explains that intelligent friction means designing systems that adjust security to the situation. “Add friction where it matters most, and make it disappear when it doesn’t,” he says. Citing the example of banking apps, Reddy describes intelligent friction as a simple login for checking balances and extra verification for large transfers.
“That’s intelligent design — progressive, contextual, and trusted.”

When asked for his key message for CISOs, CEOs, and IT decision-makers, he urges them to “stop measuring adherence to rules.” Instead, “start measuring where people are bypassing them — that’s where your friction is hurting the business.” At Esper, this approach guides everything from device management to enterprise policy design: security that protects without slowing you down.

Discover how Esper is redefining Zero Trust through Intelligent Friction. Learn more at Esper.io.

Takeaways

Zero Trust is fundamentally about verification at every step.
The shift to Zero Trust is driven by increased exposure and sophisticated attack vectors.
Human trust is essential for Zero Trust systems to function effectively.
Intelligent friction allows for security measures that adapt to user needs.
Security should not hinder business operations; it should support them.
CISOs should measure rebellion against security rules, not just adherence.
Progressive security checks can enhance user trust in systems.
Cultural change is necessary for effective security implementation.
Feedback...
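Reddy's banking-app example of intelligent friction, minimal checks for low-risk actions and step-up verification for risky ones, can be sketched as a small risk-scoring function. The thresholds, risk weights, and factor names are illustrative assumptions, not Esper's product.

```python
# Intelligent friction: the verification demanded scales with the risk of
# the action, rather than one fixed barrier for everything.

def required_factors(action, amount=0, new_device=False):
    """Return the authentication factors to demand, based on risk."""
    risk = 0
    if action == "transfer":
        risk += 1
        if amount > 10_000:
            risk += 2                       # large transfers carry more risk
    if new_device:
        risk += 1                           # unfamiliar context adds friction

    if risk == 0:
        return ["password"]                               # checking a balance
    if risk <= 2:
        return ["password", "push_approval"]              # routine transfer
    return ["password", "push_approval", "hardware_key"]  # high-risk action

assert required_factors("view_balance") == ["password"]
assert required_factors("transfer", amount=500) == ["password", "push_approval"]
assert required_factors("transfer", amount=50_000, new_device=True) == [
    "password", "push_approval", "hardware_key"]
```

The point is the shape of the policy, progressive and contextual: friction appears only where the stakes justify it, so users are not driven to work around the system.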
Can your organization truly trust every identity: human, machine, and AI? The traditional security perimeter is no longer a reliable boundary. As enterprises adopt hybrid infrastructures, cloud services, and autonomous AI systems, identity has emerged as the central element of effective cybersecurity.

In the latest episode of The Security Strategist Podcast, Richard Stiennon speaks with StrongDM’s Chief Executive Officer Tim Prendergast about how organizations can secure human users, machines, and agentic AI through identity-based controls.

Identity at the Center of Zero Trust

Both Stiennon and Prendergast believe identity has become the true control plane for modern cybersecurity. While Zero Trust frameworks are widely promoted, they often remain theoretical until grounded in strong identity governance. By continuously verifying and managing every identity, human, machine, and AI, organizations can strengthen access control, reduce the risk of credential theft, and enforce clear operational boundaries across their environments. As Prendergast explains, “No one wants to go out of business tomorrow, no matter how good their security is. You have to balance the needs of the business, the needs of your user or customer populations, and practical security.”

Securing Human Users

For human users, particularly those with privileged access, identity management must strike a balance between security and productivity. CISOs need visibility into who is accessing critical assets, when, and in what context. StrongDM’s approach emphasizes just-in-time access, ensuring users receive only the permissions they need, precisely when they need them.

Implementation Considerations

Deploying identity-based security requires a strategic, phased approach. Prendergast stresses that security measures must align with business priorities to minimize disruption.
By treating users, machines, and AI agents as identities rather than simply devices or services, organizations can enforce dynamic policies, respond to threats more effectively, and maintain compliance in increasingly distributed IT environments. StrongDM’s approach demonstrates that the future of security lies in identity-first models where humans, machines, and AI agents are governed under the same principles, ensuring that the right identities have the right access at the right time.

Takeaways

Identity is the new control plane for security.
Zero Trust is often theoretical; real progress lies in identity-based security.
Stolen credentials are the primary attack vector.
A renaissance in identity security...
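The just-in-time access model described above can be sketched with expiring grants: a permission exists only for a bounded window, so standing privilege never accumulates. This is a toy illustration under assumed names, not StrongDM's API.

```python
# Just-in-time access: every grant carries an expiry, and authorization
# checks the clock, so access disappears automatically.
import time

class JITGrants:
    def __init__(self):
        self._grants = {}   # (identity, resource) -> expiry timestamp

    def grant(self, identity, resource, ttl_seconds):
        """Issue a time-boxed grant; no grant is ever permanent."""
        self._grants[(identity, resource)] = time.time() + ttl_seconds

    def is_allowed(self, identity, resource):
        expiry = self._grants.get((identity, resource))
        return expiry is not None and time.time() < expiry

grants = JITGrants()
grants.grant("user:dba-oncall", "db:prod", ttl_seconds=900)   # 15-minute window
assert grants.is_allowed("user:dba-oncall", "db:prod")        # within the window
assert not grants.is_allowed("user:dba-oncall", "db:staging") # never granted
```

The same mechanism applies uniformly to human users, machine accounts, and AI agents, which is the identity-first principle the episode emphasizes: one model of time-boxed, least-privilege access for every kind of identity.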
In today’s cybersecurity industry, Managed Service Providers (MSPs) who do not adapt risk falling behind. In a recent episode of The Security Strategist podcast, host Richard Stiennon, Chief Research Analyst at IT-Harvest, talks with Stefanie Hammond, Head Nerd at N-able, and Jim Waggoner, Vice President of Product Management at N-able. They discuss how MSPs can tackle rising threats, bridge the talent gap, and maintain profitability in a quickly evolving market. The speakers explore the critical need for MSPs to adopt Managed Detection and Response (MDR) services, the importance of internal security investments, and how AI can enhance efficiency. The conversation also touches on compliance challenges and future trends in MSP pricing strategies, emphasising the need for continuous adaptation in a rapidly changing threat environment.

When Stiennon asked, “How quickly must an MSP change their entire model to a managed detection and response offering to stay competitive?” Hammond's answer was straightforward: “If an MSP hasn’t done that yet, I don’t know how much longer they can wait.” This set the stage for the episode.

MDR Is No Longer Optional but Critical for MSPs

For MSPs serving clients in tightly regulated fields like finance, healthcare, government, or education, MDR is a necessity. “Organisations in those sectors face a greater risk,” says Hammond. “MSPs need to incorporate MDR into their security offerings and make it standard for their customers to stay competitive.” However, Hammond cautions against selling MDR as a standalone solution: “We shouldn’t sell any security tools as a separate service.” Instead, she suggests packaging MDR with other prevention, detection, and recovery options, like backup and data protection, to create a layered cybersecurity package.

Agreeing, Waggoner describes this as a natural growth process for MSPs: “It becomes a maturity lifecycle.
You start by managing hardware and software, move on to daily security, and eventually cover full detection and response. If MSPs don’t want to develop that in-house, N-able can assist—we can co-manage it or handle it for them as they grow.”

MSPs for Smarter Security and AI-Backed Efficiency

The speakers also talked about how AI and automation are changing cybersecurity, not just for spotting threats but also for improving operations and driving sales. “We automatically handle 90 per cent of security alerts using AI,” said Waggoner. “If you’re not automating, you’re falling behind,” the Vice President of Product Management at N-able added.

For Hammond, AI is equally beneficial in marketing and communication. She recommends that MSPs not manage sales and marketing entirely on their own, but use AI to support those efforts. Both experts agree that compliance, identity protection, and education are essential parts of a resilient security framework. “It always comes down to identity,” Waggoner emphasises. “Use unique logins, change passwords regularly, and set up...
"You have to think about how the online world really operates and how we make sure that data is secure. How can we trust each other in the digital world?" Robert Rogenmoser, the CEO of Securosys, asks. The answer is "encryption and digital signature."According to Robert Rogenmoser, the CEO of Securosys, storing keys insecurely creates immediate risk. This makes it crucial to maintain strong key security. "If it's just in a software system, you can easily get hacked. If I have your encryption key, I can read your data. If I have your Bitcoin keys, I can spend your money,” says Rogenmoser.In the recent episode of The Security Strategist podcast, host Richard Stiennon, Chief Research Analyst at IT-Harvest, speaks to Robert Rogenmoser, the CEO of Securosys, about safeguarding the digital world with cryptographic keys. Rogenmoser puts up a case to rally Hardware Security Modules (HSMs) as the best solution for this critical challenge.In addition to discussing how hardware security modules (HSMs) protect encryption keys, they also talk about the evolution of HSMs, their applications in financial services, the implications of post-quantum cryptography, and the integration of AI in security practices. Are Hardware Security Modules (HSMs) the Ultimate Solution?The conversation stresses the importance of key management and the need for organisations to adapt to emerging technologies while ensuring data security.In order to mitigate the cybersecurity risks, the priority is to securely store the keys, control access, and generate impenetrable keys that cannot be easily guessed by cyber criminals. HSMs are the ultimate solution to the key issue, believes Rogenmoser. Firms tend to shift their data to the cloud, making it even more essential to secure keys. The main challenge arises when both the data and the keys are managed by the same cloud provider, as this setup can compromise the integrity of key control and raise concerns about data sovereignty. 
However, Securosys approaches this challenge differently. Rogenmoser explains that organisations can keep their data encrypted in the cloud. At the same time, they keep the key somewhere else, where only they have control over it.

Multi-Authorisation System for High-Stakes Transactions

Rogenmoser pointed out the company's patented system for multi-authorisation of Bitcoin keys. This system is essential because blockchain transactions are high-stakes and irreversible.

"Crypto custody for bitcoins or any cryptocurrency is a major business for our HSM," he said. Banks that hold large amounts of customer crypto cannot afford a single point of failure. "A blockchain operation is a one-way thing. You sign a transaction, and the money is gone."

The multi-authorisation system addresses this issue by requiring a "quorum" of people to approve each transaction. Rogenmoser explained, "You can say this transaction can only be signed and sent to the blockchain if one out of three compliance officers signs this, plus two out of five traders." This approach creates a "more secure system" because "the HSM then checks, do we have a quorum? Did everyone actually sign the same transaction?" Only after verification is "the actual key for the blockchain […] used to sign a...
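The quorum rule Rogenmoser describes can be sketched in a few lines. This is an illustrative model only, not Securosys's actual API: the signer names, thresholds, and transaction hashes below are assumptions, and a real HSM would verify cryptographic signatures rather than bare identifiers.

```python
from dataclasses import dataclass

# Hypothetical quorum policy: 1-of-3 compliance officers AND
# 2-of-5 traders must approve the *same* transaction before
# the HSM releases the signing key. All names are invented.
COMPLIANCE_OFFICERS = {"co1", "co2", "co3"}
TRADERS = {"t1", "t2", "t3", "t4", "t5"}

@dataclass(frozen=True)
class Approval:
    signer: str
    tx_hash: str  # what this signer believes they are approving

def quorum_met(approvals: list[Approval], tx_hash: str) -> bool:
    """Check the multi-authorisation policy for one transaction."""
    # Only count approvals for this exact transaction, mirroring the
    # HSM's "did everyone actually sign the same transaction?" check.
    valid = {a.signer for a in approvals if a.tx_hash == tx_hash}
    officers = len(valid & COMPLIANCE_OFFICERS)
    traders = len(valid & TRADERS)
    return officers >= 1 and traders >= 2

approvals = [
    Approval("co2", "0xabc"),
    Approval("t1", "0xabc"),
    Approval("t4", "0xabc"),
    Approval("t5", "0xdef"),  # signed a different transaction: ignored
]
print(quorum_met(approvals, "0xabc"))  # True: 1 officer + 2 traders agree
```

The point of the sketch is that both thresholds must be met for the same transaction hash: an approval for a different transaction, like t5's above, contributes nothing to the quorum.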
"With any new technology, there's always a turning point: we need something new to solve the old problems,” states Jeffrey Hickman, Head of Customer Engineering at ORY, setting the stage for this episode of The Security Strategist podcast.The key challenge enterprises face today, pertaining to identity and security, particularly, is the quick rise of AI agents. Many organisations are trying to annex advanced AI features into old systems, only to realise, post-cost investment, that serious issues have come to the surface. The high number of automated interactions could easily overload the current infrastructure. "The scale of agent workloads will be the weak spot for organisations that simply try to apply current identity solutions to the rapidly growing interaction volume,” cautions Hickman. In this episode of The Security Strategist podcast, Alejandro Leal, Host, Cybersecurity Thought Leader, and Senior Analyst at KuppingerCole Analysts AG, speaks with Jeffrey Hickman, Head of Customer Engineering at ORY, about customer identity and access management in the age of AI agents. They discuss the urgent need for new self-managed identity solutions to address the challenges posed by AI, the limitations of traditional Customer Identity and Access Management (CIAM), and the importance of adaptability and control in identity management. The conversation also explores the future of AI agents as coworkers and customers, emphasising the need for secure practices and the role of CISOs in pulling through these changes.AI Agents – The Achilles Heel of Legacy IdentityHickman explains that many companies face an immediate and serious issue at the moment. He said: "The scale of agentic workloads will be the Achilles heel for organisations that simply try to map existing identity solutions onto the drastically ballooning interaction volume."This scale not only overwhelms current systems but also creates perilous complexity. 
AI agents, acting on their own or on behalf of humans, lead to a huge increase in authentication events, an effect known as "authentication sprawl." Such strain on old technology often leaves security as an afterthought.

The main unresolved technical issue is context: figuring out what an individual agent is allowed to do and what specific data it can access, Hickman tells Leal. "The problem is defining the context—what an agent is allowed to do and gather. Legacy IAM solutions don't address this well; it's an unsolved area."

To gain the necessary control, organisations must move beyond complicated scope chains and rethink how granular permissions function. Meanwhile, the risk of AI-driven phishing targeting human users, fueled by manipulated prompts, will grow until we can ensure the authenticity of human-in-the-loop moments using technologies like Passkeys.

Also Read: OpenAI leverages Ory platform to support over 400M weekly active users

Takeaways
The rise of AI agents is reshaping customer identity management.
Traditional CIAM systems struggle with the scale of AI interactions.
Adaptability is crucial for organisations facing new identity challenges.
Control over identity solutions is essential for enterprises.
Security must not be sacrificed for user experience.
AI agents can amplify existing identity management...
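The granular, per-agent permissions Hickman argues for, as opposed to an agent inheriting its principal's full session, can be sketched roughly as follows. This is a hypothetical illustration, not ORY's API; the agent names, action strings, and data scopes are all invented for the example.

```python
# Illustrative policy table: each agent identity is bound to explicit
# actions and data scopes, instead of broad inherited OAuth scopes.
AGENT_GRANTS = {
    "billing-agent": {
        "actions": {"invoice.read", "invoice.create"},
        "data_scopes": {"customer:acme"},
    },
}

def agent_allowed(agent: str, action: str, data_scope: str) -> bool:
    """Permit an action only if both the action and the data scope are granted."""
    grant = AGENT_GRANTS.get(agent)
    if grant is None:
        return False  # unknown agents get nothing by default
    return action in grant["actions"] and data_scope in grant["data_scopes"]

print(agent_allowed("billing-agent", "invoice.read", "customer:acme"))    # True
print(agent_allowed("billing-agent", "invoice.delete", "customer:acme"))  # False
```

The deny-by-default check on both dimensions, action and data, is what distinguishes this style of context-aware authorisation from the coarse scope chains the episode warns about.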
"The harsh reality is the site wasn't real. The ad was fake. The reality is you've clicked through to a steward ad that's taken you to a fake site. That fake site then has taken your details, your credit card,” articulated Lisa Deegan, Senior Director, UK and International Growth at EBRAND, in the recent episode of The Security Strategist podcast.Host Richard Steinnon, Chief Research Analyst at IT-Harvest, sits down with Deegan to talk about cybersecurity in brand protection against online fraud. They explore how AI is being used by criminals to create convincing fake shops, the impact of these scams on consumer trust, and the need for a comprehensive approach to brand protection. Deegan emphasises the importance of understanding consumer behaviour, the mechanics of online scams, and the necessity for organisations to adopt proactive strategies to combat these threats. The Alarming Rise of AI Fake ShopsWhile the digital world seems like a boon to most, about two-thirds of humanity (five billion people), to be precise. This online community, heavily relying on mobile devices, have become prey for savvy cybercriminals. These criminals are now using Generative AI to create highly convincing, yet entirely fake, online retail experiences.Deegan, a cybersecurity and brand protection expert at EBRAND, illustrates the situation trapping the digital community. She asks the audience to imagine a consumer scrolling through social media, sees an ad for a favourite brand offering a deep discount. The consumer clicks, is taken to a professional-looking website that appears legitimate, enters payment details, and loses their money. The product never existed, and the consumer's data is stolen. The speed and scale of these attacks are unprecedented; single campaigns can target over 250,000 people in a single day, points out Deegan.The EBRAND senior director proposes a massive change in brand protection strategy. 
Instead of just dealing with surface-level violations, she wants to target the underlying criminal infrastructure. "It's no longer about firefighting individual infringements. It's about looking at the domains, the ads and the payment channels cyber criminals are using. And it's also the bad actors before that.”

“It's bringing that all together and making sure that you're taking down the infrastructure at source so that it leaves them no opportunity to rebuild again," added Deegan.

The speakers agree that the traditional method has become a continuous "whack-a-mole" game against sites that instantly reappear due to AI. To be effective, brands "need to embed monitoring with intelligence and rapid enforcement" to break down the entire operation, making it too costly and difficult for the criminals, who will "eventually get fed up and move on to some other soft target."

Takeaways
The landscape of online fraud is rapidly evolving due to AI.
Two-thirds of humanity is now online, increasing vulnerability.
Fake shops can deceive consumers with convincing ads and websites.
Trust in brands is significantly impacted by online scams.
Organisations need to dismantle the networks behind scams, not just individual sites.
AI can be used for both scams...
Identity fabric, a contemporary, flexible identity and access management (IAM) architecture, should “be involved at every stage of authentication and authorisation,” says Stephen McDermid, CSO, EMEA at Okta Security. According to Cisco’s VP, 94 per cent of CISOs believe that complexity in identity infrastructure decreases their overall security.

In this episode of The Security Strategist podcast, Alejandro Leal, podcast host and cybersecurity thought leader, speaks with McDermid about identity fabric, modern threats to identity security, the role of AI in cybersecurity, and the importance of collaboration among industry players to combat these novel threats. McDermid emphasises the need for organisations to adopt a proactive approach to identity governance and to recognise that identity security is a critical component of overall cybersecurity strategy.

Poor Identity Governance

Enterprises today face a complicated web of users, applications, and data. Identity, once treated as a minor IT problem, is now at the forefront of cyberattacks, and identities have become highly lucrative targets for cybercriminals. Alluding to recent high-profile breaches on the UK high street, McDermid points to a financial impact estimated in the hundreds of millions of dollars.

The common thread among these incidents is "poor identity governance." This happens when old user credentials lack multi-factor authentication (MFA) or when attackers use social engineering to reset passwords. Attackers now use automation and AI to find valid identities, and the vast number of compromised credentials available online makes their work easier than ever.

The scale of the threat is massive. McDermid noted that "fraudulent sign-ups actually outnumbered legitimate attempts by a factor of 120." This means organisations need to accept that "a breach is inevitable."

Ultimately, McDermid's message was clear and pressing. 
He urged CISOs to understand where their identities are throughout their businesses. He also stressed the need to assume a breach and consider how to respond, and called on them to challenge their SaaS vendors to commit to the new standards. In his opinion, only through this kind of collective action can the security community hope to make a difference in what currently looks like a losing battle.

Takeaways
Identity Fabric is a framework for managing identities at scale.
Modern attacks...
Enterprises can no longer afford the old trade-off between speed and safety. Developers are under constant pressure to release code faster, while security teams face an endless stream of new threats. The conclusion is clear: software must be secure and resilient from the start, without slowing innovation.

This is the philosophy Ian Amit, CEO of Gomboc AI, shared in a recent conversation with Dana Gardner, Principal Analyst at Interarbor, on the Security Strategist podcast. Amit argues that the next era of DevSecOps depends on rethinking how engineering and security come together.

Moving Beyond Shift-Left Fatigue

The traditional push to “shift security left” has often backfired. Developers face alert fatigue, drowning in warnings that obscure the real issues, while security teams end up chasing vulnerabilities rather than preventing them. Amit reframes the goal as engineering excellence:

“I want to be proud of my code. It should be secure, resilient, efficient, and fully optimized. That’s what I call engineering excellence.” — Ian Amit, CEO, Gomboc AI

Attackers only need to succeed once; defenders must be right every time. By closing the gap between development and operations, organizations can cut mean time to remediate (MTTR) and reduce risk exposure.

Balancing Accuracy

Generative tools can accelerate development, but they introduce instability. “With that 10x code, you’re also getting 10x the bugs,” Amit explains.

Deterministic approaches, by contrast, deliver repeatability and precision. Neither alone is a silver bullet. As Amit puts it: “Use generative to cut through tedious work. Use deterministic approaches to align output to your own standards. You don’t want someone else’s standards creeping into your environment.”

Seamless DevSecOps

The future of enterprise security isn’t about more checkpoints. It’s about weaving security into development pipelines, enabling distributed teams to collaborate without friction. 
Gomboc AI’s approach centres on reducing engineering toil and empowering enterprises to achieve fast, safe, and automated development.

Key Takeaways
Traditional shift-left security can create alert fatigue.
Generative tools speed development but may increase bugs.
Deterministic approaches offer accuracy and repeatability.
Mean time to remediate (MTTR) is the most critical success metric.
Collaboration across distributed teams is essential.
Security must integrate seamlessly with DevOps processes.

Chapters
00:00 Introduction to DevSecOps and Its Importance
03:08 Challenges in Traditional Shift Left Approaches
06:07 The Role of AI in Development and Security
08:58 Balancing Generative and Deterministic AI
11:52 Automation and Metrics of Success in Security
14:44 Collaboration in Distributed Teams
17:59 Integrating SecOps into Existing Processes
20:56 Future of AI in DevSecOps
23:53 Gomboc AI's Approach to Bridging Gaps

About Gomboc AI
Gomboc.ai is a cloud infrastructure security platform built to simplify and strengthen security at scale. By connecting directly to cloud environments, it provides complete visibility and protection across risks. Its deterministic engine automatically detects and fixes policy deviations in Infrastructure as Code (IaC), delivering tailored,...
In an era of AI, it’s no longer a question of whether we should use it, but how to use it effectively, says Sam Curry, Chief Information Security Officer (CISO) at Zscaler. He believes that the growth of agentic AI is not meant to replace human security teams; rather, it aims to improve the industry as a whole.

In this episode of The Security Strategist podcast, host Richard Stiennon, author and Chief Research Analyst at IT-Harvest, speaks with Curry about the need for a shift to a model rooted in authenticity, the role of agentic AI in security operations, and the importance of awareness in adapting to the changes brought by AI.

The conversation also touches on the necessity of establishing trust and accountability in AI systems, as well as the implications for cybersecurity professionals in an increasingly automated world.

AI Allows Easy Transition to Complex & Strategic Work

The cybersecurity industry is constantly at war with malicious actors, and attackers are becoming more skilled, especially with AI in the picture. Security professionals must step up their skills just to keep pace. Instead of taking away jobs, AI frees security experts from repetitive manual tasks, allowing them to focus on more complex and strategic work.

"We spend a lot of our time in the SOC doing manual tasks repetitively and trying to glue things together," Curry says. "When you manage not to think about the tools, your ability to perform a task improves drastically."

AI adoption brings other changes that also help IT teams find better ways to do their jobs, moving from simple detection and response to a more proactive approach to security. 
Curry believes that in this new environment, there will still be plenty of jobs; they'll just be more engaging and valuable.

Ethics & Logic are Crucial to Work With AI

For universities and educational institutions, the rise of AI in cybersecurity poses a significant challenge. The traditional emphasis on technical certifications like Certified Ethical Hacking and Security+ is no longer adequate. Future jobs will demand a deeper understanding of fundamental principles.

"They're going to have to walk over to the philosophy department," Curry explains. "They'll probably need to engage with the social sciences department. Understanding ethics and logic is crucial because they have to work with AI and assess whether the information it provides is logical."

The key is not just coding and running scripts but, most importantly, learning to collaborate with AI as a partner....
It has been eight years since the NIST Special Publication 800-190: Application Container Security Guide was published, and its recommendations remain central to container security today. As cloud-native applications have become the foundation of modern enterprise IT, securing containers has shifted from an afterthought to a critical priority.

In this episode, Richard Stiennon, Chief Research Analyst at IT-Harvest and host of Security Strategist, discusses container security with John Morello, CTO and Co-Founder of Minimus, and Murugiah Souppaya, former computer scientist at the National Institute of Standards and Technology (NIST). Together, they focus on NIST Special Publication 800-190, exploring its role in providing best practices for securing containers, the recommendations outlined in the guide, and the approach required for effective container security. The conversation also examines current best practices and the future of container security, emphasizing the importance of compliance and the integration of security throughout the development lifecycle.

Why NIST SP 800-190 Still Matters

NIST’s framework was designed for both government and industry, offering guidance on how to:
Integrate security early in the application lifecycle.
Apply a holistic approach from hardware to workload.
Build with minimalistic and secure container images.
Maintain compliance with regulations and standards.
Continuously monitor and update security practices.
Understand the full container lifecycle from creation to retirement.

As Murugiah Souppaya explains: “We want to make sure that people think of container security holistically, and also think about the full lifecycle management of the container itself. Like anything else in the enterprise, you want to look at this end-to-end and fill those gaps.”

Insights on the Development of Container Security

NIST SP 800-190 arrived at a time when containers were new to most organizations. 
Now, they have become the standard way to deploy applications at scale.

John Morello recalls: “Around 2016 or so, containers were pretty new in the world. Containers and containerization in other forms had existed in the past, but it was really becoming a mainstream technology that was commonly used across many organizations.”

This fast-paced adoption forced organizations to rethink their security culture. Containers required not only new technical controls, but also a shift in mindset: security had to be built in from the start.

Takeaways
Container security became critical with the rise of cloud-native applications.
NIST aims to provide guidance for both government and industry.
The 800-190 guide offers a framework for securing containers.
Security must be integrated early in the application lifecycle.
Containers require a shift in security culture and practices.
Holistic security involves securing hardware to workload.
Best practices include using minimalistic and secure images.
Compliance with regulations is essential for container security.
Continuous monitoring and updating of...
AI is rapidly changing how cybercriminals operate. Social engineering, once easy to spot, has entered a new era. Phishing emails that used to be riddled with spelling mistakes and clumsy language are now polished, persuasive, and tailored using data scraped from social media and other online sources. The result? Messages that look legitimate enough to trick even the most security-aware employees.

In this episode of Security Strategist, host Trisha Pillay sits down with Kevin O’Connor, Director of Threat Research at N-able, to unpack how AI is reshaping phishing and what it means for businesses, especially small and medium-sized organizations that often lack the resources to keep up. Drawing on insights from the N-able Threat Report, O’Connor explains why traditional defenses and old-school user training aren’t enough to stop today’s AI-crafted scams.

O’Connor says: “In the past, phishing emails were easy to spot, you’d see clumsy grammar mistakes, generic wording, they were just very obvious. But with the new wave of AI-enabled phishing emails, we’re seeing tailored attacks that pull from social media profiles and other sources. These messages are highly polished, they look convincing, and the worrying part is that attackers can now do this at scale. That means even IT professionals and security pros are at risk.”

Why Even Experts Are Falling for AI-Powered Phishing

Drawing on the latest N-able Threat Report, this is why the shift is so dangerous:
AI is changing the landscape of social engineering. Messages are tailored, credible, and increasingly difficult to block or filter.
Phishing emails are now more convincing than ever. Attackers can create unique, targeted scams instead of blasting out obvious mass emails.
Even experts are vulnerable. IT teams and security professionals are no longer immune.
User training must evolve. Old advice like “look for spelling mistakes” won’t cut it anymore. 
Employees need new skills to recognize modern threats.

The conversation also looks ahead at what enterprises can do now to strengthen defenses: updating training and preparing for a future where AI will play a role on both sides of the cybersecurity battle.

Takeaways
AI is changing the landscape of social engineering.
Phishing emails are now more convincing than ever.
Even tech-savvy employees can fall for scams.
SMBs are increasingly targeted due to their vulnerabilities.
User training must evolve to address modern threats.
Two-factor authentication is critical for financial transactions.
Organizations need to know their data exposure.
Incident response planning is essential for preparedness.
Automated responses can enhance security measures.
The threat of compromise is a matter of when, not if.

Chapters
00:00 Introduction to AI-Driven Threats
02:09 The Evolution of Phishing with AI
05:42 The Rise of Attacks on SMBs
08:56 Preventative Measures for Organizations
12:36 The Future of AI in Cybersecurity

About Kevin O’Connor
Kevin O’Connor is the Director of Threat Research at a...
"What we're seeing as a response to coding agents is one of the biggest risks in security vulnerabilities to date,” said Jaime Jorge, Founder and CEO of Codacy. “It's almost like a game to see how fast we can exploit vulnerabilities in some of these applications that are created so quickly."In this episode of The Security Strategist Podcast, Richard Stiennon, Chief Research Analyst at IT-Harvest, speaks with Jaime Jorge, the Founder and CEO of Codacy, about secure software development in the age of AI. The speakers talk about how quickly coding is evolving due to AI tools, the rise of autonomous coding agents, and the major security issues that come from this faster development. Jorge emphasised the importance of maintaining security practices and highlighted Codacy's role in providing thorough security analysis to ensure that AI-generated code is safe and reliable. The discussion also looks at the future of AI in software development and what IT leaders need to do to manage these changes.Software Development in an Era of AIThe world of software development is changing dramatically, the Codacy founder conveyed on the podcast. With AI tools like GitHub Copilot and Cursor becoming mainstream, developers are writing code faster than ever. Host Stiennon refers to this new era as "vibe coding," meaning the ability to create code at an incredible speed.However, this speed can bring serious and risky consequences. Data has shown that AI-generated code often has vulnerabilities. Some studies have found that these vulnerabilities can reach as high as 30-50 per cent. A Front Big Data study reported that 40% of the code suggested by Copilot had vulnerabilities. 
“Yet research also shows that users trust AI-generated code more than their own.”

This trend is widening the gap between quick development and secure, enterprise-grade software.

How to Keep up With Autonomous Coding Agents?

“Without a doubt, one of the most significant trends that we're seeing is coding agents,” the CEO of Codacy told Stiennon. “Autonomous coding agents are becoming extremely skilled at taking a prompt and creating full-fledged products, getting even to the intentions that users have.”

However, the challenges of autonomous agents cannot be denied. Jorge believes this is more than just a technical issue; it reflects a basic misunderstanding of how to use these powerful new technologies.

He pointed out that it's dangerous to assume we can completely hand over decisions about the code generated by AI. Important software development practices, such as building security into the design and having human code reviews, shouldn't be overlooked. The convenience of using AI to quickly generate code means we have a greater responsibility to review the code ourselves, to evaluate it, or to ensure that other people approve it.

Jorge’s key message to CISOs, CTOs and IT decision-makers is that AI is here to stay and that their teams are likely already using it. This wave is hard to ride, but “you have a choice in how to ride it.”

"AI-generated code can secure our tools, and our agents are empowered with security capabilities. You can move fast if you have the right guardrails."

The best practices Codacy developed over decades, such as...