Application Security Weekly (Video)

About all things AppSec, DevOps, and DevSecOps. Hosted by Mike Shema and John Kinsella, the podcast focuses on helping its audience find and fix software flaws effectively.

Design Errors in Entra ID, Design Defenses in iOS, Design Difficulties in DeepSeek - ASW #349

In the news, Microsoft encounters a new cascade of avoidable errors with Entra ID, Apple improves iOS with hardware-backed memory safety, DeepSeek demonstrates the difficulty in reviewing models, curl reduces risk by eliminating code, preserving the context of code reviews, and more! Show Notes: https://securityweekly.com/asw-349

09-23
58:43

How OWASP's GenAI Security Project keeps up with the pace of AI/Agentic changes - Scott Clinton - ASW #348

This week, we chat with Scott Clinton, board member and co-chair of the OWASP GenAI Security Project. This project has become a massive organization within OWASP with hundreds of volunteers and thousands of contributors. This team has been cranking out new tools, reports, and guidance for practitioners month after month for over a year now. We start off discussing how Scott and other leaders have managed to keep up with the crazy rate of change in the AI world. We pivot to discussing some of the specific projects the team is working on, and finally discuss some of the biggest AI security challenges before wrapping up the conversation. If you're neck-deep in AI like we are, I highly recommend checking out this conversation, and consider joining this OWASP project, sponsoring them, or just checking out what they have to offer (which is all free, of course). Segment Resources: Get started with the OWASP GenAI Security Project Register for the GenAI Application Security & Risk Summit on October 9th, 11am - 4pm EST This segment is sponsored by The OWASP GenAI Security Project. Visit https://securityweekly.com/owasp to learn more about them! Show Notes: https://securityweekly.com/asw-348

09-16
01:08:00

Limitations and Liabilities of LLM Coding - Ted Shorter, Seemant Sehgal - ASW #347

Up first, the ASW news of the week. At Black Hat 2025, Doug White interviews Ted Shorter, CTO of Keyfactor, about the quantum revolution already knocking on cybersecurity’s door. They discuss the terrifying reality of quantum computing’s power to break RSA and ECC encryption—the very foundations of modern digital life. With 2030 set as the deadline for transitioning away from legacy crypto, organizations face a race against time. Ted breaks down what "full crypto visibility" really means, why it’s crucial to map your cryptographic assets now, and how legacy tech—from robotic sawmills to outdated hospital gear—poses serious risks. The interview explores NIST's new post-quantum algorithms, global readiness efforts, and how Keyfactor’s acquisitions of InfoSec Global and Cipher Insights help companies start the quantum transition today—not tomorrow. Don’t wait for the breach. Watch this and start your quantum strategy now. If digital trust is the goal, cryptography is the foundation. Segment Resources: http://www.keyfactor.com/digital-trust-digest-quantum-readiness https://www.keyfactor.com/press-releases/keyfactor-acquires-infosec-global-and-cipherinsights/ For more information about Keyfactor’s latest Digital Trust Digest, please visit: https://securityweekly.com/keyfactorbh Live from BlackHat 2025 in Las Vegas, cybersecurity host Jackie McGuire sits down with Seemant Sehgal, founder of BreachLock, to unpack one of the most pressing challenges facing SOC teams today: alert fatigue—and its even more dangerous cousin, vulnerability fatigue. In this must-watch conversation, Seemant reveals how his groundbreaking approach, Adversarial Exposure Validation (AEV), flips the script on traditional defense-heavy security strategies. Instead of drowning in 10,000+ “critical” alerts, AEV pinpoints what actually matters—using Generative AI to map realistic attack paths, visualize kill chains, and identify the exact vulnerabilities that put an organization’s crown jewels at risk. 
From his days leading cybersecurity at a major global bank to pioneering near real-time CVE validation, Seemant shares insights on scaling offensive security, improving executive buy-in, and balancing automation with human expertise. Whether you’re a CISO, SOC analyst, red teamer, or security enthusiast, this interview delivers actionable strategies to fight fatigue, prioritize risks, and protect high-value assets. Key topics covered: - The truth about alert fatigue & why it’s crippling SOC efficiency - How AI-driven offensive security changes the game - Visualizing kill chains to drive faster remediation - Why fixing “what matters” beats fixing “everything” - The future of AI trust, transparency, and control in cybersecurity Watch now to discover how BreachLock is redefining offensive security for the AI era. Segment Resources: https://www.breachlock.com/products/adversarial-exposure-validation/ This segment is sponsored by Breachlock. Visit https://securityweekly.com/breachlockbh to learn more about them! Show Notes: https://securityweekly.com/asw-347

09-09
01:17:09

AI, APIs, and the Next Cyber Battleground: Black Hat 2025 - Michael Callahan, Idan Plotnik, Josh Lemos, Chris Boehm - ASW #346

In this must-see BlackHat 2025 interview, Doug White sits down with Michael Callahan, CMO at Salt Security, for a high-stakes conversation about Agentic AI, Model Context Protocol (MCP) servers, and the massive API security risks reshaping the cyber landscape. Broadcast live from the CyberRisk TV studio at Mandalay Bay, Las Vegas, the discussion pulls back the curtain on how autonomous AI agents and centralized MCP hubs could supercharge productivity—while also opening the door to unprecedented supply chain vulnerabilities. From “shadow MCP servers” to the concept of an “API fabric,” Michael explains why these threats are evolving faster than traditional security measures can keep up, and why CISOs need to act before it’s too late. Viewers will get rare insight into the parallels between MCP exploitation and DNS poisoning, the hidden dangers of API sprawl, and why this new era of AI-driven communication could become a hacker’s dream. Blog: https://salt.security/blog/when-ai-agents-go-rogue-what-youre-missing-in-your-mcp-security Survey Report: https://content.salt.security/AI-Agentic-Survey-2025_LP-AI-Agentic-Survey-2025.html This segment is sponsored by Salt Security. Visit https://securityweekly.com/saltbh for a free API Attack Surface Assessment! At Black Hat 2025, live from the Cyber Risk TV studio in Las Vegas, Jackie McGuire sits down with Apiiro Co-Founder & CEO Idan Plotnik to unpack the real-world impact of AI code assistants on application security, developer velocity, and cloud costs. With experience as a former Director of Engineering at Microsoft, Idan dives into what drove him to launch Apiiro — and why 75% of engineers will be using AI assistants by 2028. From 10x more vulnerabilities to skyrocketing API bloat and security blind spots, Idan breaks down research from Fortune 500 companies on how AI is accelerating both innovation and risk. 
What you'll learn in this interview: - Why AI coding tools are increasing code complexity and risk - The massive cost of unnecessary APIs in cloud environments - How to automate secure code without slowing down delivery - Why most CISOs fail to connect security to revenue (and how to fix it) - How Apiiro’s Autofix AI Agent helps organizations auto-fix and auto-govern code risks at scale This isn’t just another AI hype talk. It’s a deep dive into the future of secure software delivery — with practical steps for CISOs, CTOs, and security leaders to become true business enablers. Watch till the end to hear how Apiiro is helping Fortune 500s bridge the gap between code, risk, and revenue. Apiiro AutoFix Agent. Built for Enterprise Security: https://youtu.be/f-_zrnqzYsc Deep Dive Demo: https://youtu.be/WnFmMiXiUuM This segment is sponsored by Apiiro. Be one of the first to see their new AppSec Agent in action at https://securityweekly.com/apiirobh. Is Your AI Usage a Ticking Time Bomb? In this exclusive Black Hat 2025 interview, Matt Alderman sits down with GitLab CISO Josh Lemos to unpack one of the most pressing questions in tech today: Are executives blindly racing into AI adoption without understanding the risks? Filmed live at the CyberRisk TV Studio in Las Vegas, this eye-opening conversation dives deep into: - How AI is being rapidly adopted across enterprises — with or without security buy-in - Why AI governance is no longer optional — and how to actually implement it - The truth about agentic AI, automation, and building trust in non-human identities - The role of frameworks like ISO 42001 in building AI transparency and assurance - Real-world examples of how teams are using LLMs in development, documentation & compliance Whether you're a CISO, developer, or business exec — this discussion will reshape how you think about AI governance, security, and adoption strategy in your org. Don’t wait until it’s too late to understand the risks. 
The Economics of Software Innovation: $750B+ Opportunity at a Crossroads Report: http://about.gitlab.com/software-innovation-report/ For more information about GitLab and their report, please visit: https://securityweekly.com/gitlabbh Live from Black Hat 2025 in Las Vegas, Jackie McGuire sits down with Chris Boehm, Field CTO at Zero Networks, for a high-impact conversation on microsegmentation, shadow IT, and why AI still struggles to stop lateral movement. With 15+ years of cybersecurity experience—from Microsoft to SentinelOne—Chris breaks down complex concepts like you're a precocious 8th grader (his words!) and shares real talk on why AI alone won’t save your infrastructure. Learn how Zero Networks is finally making microsegmentation frictionless, how summarization is the current AI win, and what red flags to look for when evaluating AI-infused security tools. If you're a CISO, dev, or just trying to stay ahead of cloud threats—this one's for you. This segment is sponsored by Zero Networks. Visit https://securityweekly.com/zerobh to learn more about them! Show Notes: https://securityweekly.com/asw-346

09-02
01:08:11

Translating Security Regulations into Secure Projects - Emily Fox, Roman Zhukov - ASW #345

The EU Cyber Resilience Act joins the long list of regulations intended to improve the security of software delivered to users. Emily Fox and Roman Zhukov share their experience educating regulators on open source software and educating open source projects on security. They talk about creating a baseline for security that addresses technical items, maintaining projects, and supporting project owners so they can focus on their projects. Segment resources: github.com/ossf/wg-globalcyberpolicy github.com/orcwg baseline.openssf.org Show Notes: https://securityweekly.com/asw-345

08-26
01:13:31

Managing the Minimization of a Container Attack Surface - Neil Carpenter - ASW #344

A smaller attack surface should lead to a smaller list of CVEs to track, which in turn should lead to a smaller set of vulns that you should care about. But in practice, keeping something like a container image small has a lot of challenges in terms of what should be considered minimal. Neil Carpenter shares advice and anecdotes on what it takes to refine a container image and to change an org's expectations that every CVE needs to be fixed. Show Notes: https://securityweekly.com/asw-344

08-19
01:08:17

The Future of Supply Chain Security - Janet Worthington - ASW #343

Open source software is a massive contribution that provides everything from foundational frameworks to tiny single-purpose libraries. We walk through the dimensions of trust and provenance in the software supply chain with Janet Worthington. And we discuss how even with new code generated by LLMs and new terms like slopsquatting, a lot of the most effective solutions are old techniques. Resources https://www.forrester.com/blogs/make-no-mistake-software-is-a-supply-chain-and-its-under-attack/ https://www.forrester.com/report/the-future-of-software-supply-chain-security/RES184050 Show Notes: https://securityweekly.com/asw-343

08-12
42:13

Uniting software development and application security - Jonathan Schneider, Will Vandevanter - ASW #342

Maintaining code is a lot more than keeping dependencies up to date. It involves everything from keeping old code running to changing frameworks to even changing implementation languages. Jonathan Schneider talks about the engineering considerations of refactoring and rewriting code, why code maintenance is important to appsec, and how to build confidence that adding automation to a migration results in code that has the same workflows as before. Resources https://docs.openrewrite.org https://github.com/openrewrite Then, instead of our usual news segment, we do a deep dive on some recent vulns in NVIDIA's Triton Inference Server disclosed by Trail of Bits' Will Vandevanter. Will talks about the thought process and tools that go into identifying potential vulns, the analysis in determining whether they're exploitable, and the disclosure process with vendors. He makes the important point that even if something doesn't turn out to be a vuln, there's still benefit to the learning process and gaining experience in seeing the different ways that devs design software. Of course, it's also more fun when you find an exploitable vuln -- which Will did here! Resources https://nvidia.custhelp.com/app/answers/detail/a_id/5687 https://github.com/triton-inference-server/server https://blog.trailofbits.com/2025/07/31/hijacking-multi-agent-systems-in-your-pajamas/ https://blog.trailofbits.com/2025/07/28/we-built-the-security-layer-mcp-always-needed/ Show Notes: https://securityweekly.com/asw-342

08-05
58:07

How Product-Led Security Leads to Paved Roads - Julia Knecht - ASW #341

A successful strategy in appsec is to build platforms with defaults and designs that ease the burden of security choices for developers. But there's an important difference between expecting (or requiring!) developers to use a platform and building a platform that developers embrace. Julia Knecht shares her experience in building platforms with an attention to developer needs, developer experience, and security requirements. She brings attention to the product management skills and feedback loops that make paved roads successful -- as well as the areas where developers may still need or choose their own alternatives. After all, the impact of a paved road isn't in its creation, it's in its adoption. Show Notes: https://securityweekly.com/asw-341

07-29
01:04:11

Rise of Compromised LLMs - Sohrob Kazerounian - ASW #340

AI is more than LLMs. Machine learning algorithms have been part of infosec solutions for a long time. For appsec practitioners, a key concern is always going to be how to evaluate the security of software or a system. In some cases, it doesn't matter if a human or an LLM generated code -- the code needs to be reviewed for common flaws and design problems. But the creation of MCP servers and LLM-based agents is also adding a concern about what an unattended or autonomous piece of software is doing. Sohrob Kazerounian gives us context on how LLMs are designed, what to expect from them, and where they pose risk and reward to modern software engineering. Resources https://www.vectra.ai/research Show Notes: https://securityweekly.com/asw-340

07-22
01:06:35

Getting Started with Security Basics on the Way to Finding a Specialization - ASW #339

What are some appsec basics? There's no monolithic appsec role. Broadly speaking, appsec tends to branch into engineering or compliance paths, each with different areas of focus despite having shared vocabularies and the (hopefully!) shared goal of protecting software, data, and users. The better question is, "What do you want to secure?" We discuss the Cybersecurity Skills Framework put together by the OpenSSF and the Linux Foundation and how you might prepare for one of its job families. The important basics aren't about memorizing lists or technical details, but demonstrating experience in working with technologies, understanding how they can fail, and being able to express concerns, recommendations, and curiosity about their security properties. Resources: https://cybersecurityframework.io https://owasp.org/www-project-cheat-sheets/ https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/ https://aflplus.plus/ https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/ Show Notes: https://securityweekly.com/asw-339

07-15
01:07:50

Checking in on the State of Appsec in 2025 - Sandy Carielli, Janet Worthington - ASW #338

Appsec still deals with ancient vulns like SQL injection and XSS. And now LLMs are generating code alongside humans. Sandy Carielli and Janet Worthington join us once again to discuss what all this new code means for appsec practices. On a positive note, the prevalence of those ancient vulns seems to be diminishing, but the rising use of LLMs is expanding a new (but not very different) attack surface. We look at where orgs are investing in appsec, who appsec teams are collaborating with, and whether we need security awareness training for LLMs. Resources: https://www.forrester.com/blogs/application-security-2025-yes-ai-just-made-it-harder-to-do-this-right/ Show Notes: https://securityweekly.com/asw-338

07-08
01:07:15

Simple Patterns for Complex Secure Code Reviews - Louis Nyffenegger - ASW #337

Manual secure code reviews can be tedious and time intensive if you're just going through checklists. There's plenty of room for linters and compilers and all the grep-like tools to find flaws. Louis Nyffenegger describes the steps of a successful code review process. It's a process that starts with understanding code, which can even benefit from an LLM assistant, and then applies that understanding to a search for developer patterns that lead to common mistakes like mishandling data, not enforcing a control flow, or not defending against unexpected application states. He explains how finding those kinds of more impactful bugs is rewarding for the reviewer and valuable to the code owner. It involves reading a lot of code, but Louis offers tips on how to keep notes, keep an app's context in mind, and keep code secure. Segment Resources: https://pentesterlab.com/live-training/ https://pentesterlab.com/appsecschool https://deepwiki.com https://daniel.haxx.se/blog/2025/05/29/decomplexification/ Show Notes: https://securityweekly.com/asw-337

07-01
38:26

How Fuzzing Barcodes Raises the Bar for Secure Code - Artur Cygan - ASW #336

Fuzzing has been one of the most successful ways to improve software quality. And it demonstrates how improving software quality improves security. Artur Cygan shares his experience in building and applying fuzzers to barcode scanners, smart contracts, and just about any code you can imagine. We go through the useful relationship between unit tests and fuzzing coverage, nudging fuzzers into deeper code paths, and how LLMs can help guide a fuzzer into using better inputs for its testing. Resources https://blog.trailofbits.com/2024/10/31/fuzzing-between-the-lines-in-popular-barcode-software/ https://github.com/crytic/echidna https://github.com/crytic/medusa https://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-thin-air.html Show Notes: https://securityweekly.com/asw-336

06-24
01:01:18

Threat Modeling With Good Questions and Without Checklists - Farshad Abasi - ASW #335

What makes a threat modeling process effective? Do you need a long list of threat actors? Do you need a long list of terms? What about a short list like STRIDE? Has an effective process ever come out of a list? Farshad Abasi joins our discussion as we explain why the answer to most of those questions is No and describe the kinds of approaches that are more conducive to useful threat models. Resources: https://www.eurekadevsecops.com/agile-devops-and-the-threat-modeling-disconnect-bridging-the-gap-with-developer-insights/ https://www.threatmodelingmanifesto.org https://kellyshortridge.com/blog/posts/security-decision-trees-with-graphviz/ In the news, learning from outage postmortems, an EchoLeak image speaks a thousand words from Microsoft 365 Copilot, TokenBreak attack targets tokenizing techniques, Google's layered strategy against prompt injection looks a lot like defending against XSS, learning about code security from CodeAuditor CTF, and more! Show Notes: https://securityweekly.com/asw-335

06-17
01:08:00

Bringing CISA's Secure by Design Principles to OT Systems - Matthew Rogers - ASW #334

CISA has been championing Secure by Design principles. Many of the principles are universal, like adopting MFA and having opinionated defaults that reduce the need for hardening guides. Matthew Rogers talks about how the approach to Secure by Design has to be tailored for Operational Technology (OT) systems. These systems have strict requirements on safety and many of them rely on protocols that are four (or more!) decades old. He explains how the considerations in this space go far beyond just memory safety concerns. Segment Resources: https://www.cisa.gov/sites/default/files/2025-01/joint-guide-secure-by-demand-priority-considerations-for-ot-owners-and-operators-508c_0.pdf https://www.youtube.com/watch?v=vHSXu1P4ZTo Show Notes: https://securityweekly.com/asw-334

06-10
01:09:09

AIs, MCPs, and the Actual Work that LLMs Are Generating - ASW #333

The recent popularity of MCPs is surpassed only by the recent examples of deficiencies in their secure design. The most obvious challenge is how MCPs, and many more general LLM use cases, have erased two decades of security principles behind separating code and data. We take a look at how developers are using LLMs to generate code and continue our search for where LLMs are providing value to appsec. We also consider what indicators we'd look for as signs of success. For example, are LLMs driving useful commits to overburdened open source developers? Are LLMs climbing the ranks of bug bounty platforms? In the news, more examples of prompt injection techniques against LLM features in GitLab and GitHub, the value (and tradeoffs) in rewriting code, secure design lessons from a history of iOS exploitation, checking for all the ways to root, and NIST's approach to (maybe) measuring likely exploited vulns. Show Notes: https://securityweekly.com/asw-333

06-03
39:06

AI in AppSec: Agentic Tools, Vibe Coding Risks & Securing Non-Human Identities - Mo Aboul-Magd, Brian Fox, Mark Lambert, Shahar Man - ASW #332

ArmorCode unveils Anya, the first agentic AI virtual security champion designed specifically for AppSec and product security teams. Anya brings together conversation and context to help AppSec, developers and security teams cut through the noise, prioritize risks, and make faster, smarter decisions across code, cloud, and infrastructure. Built into the ArmorCode ASPM Platform and backed by 25B findings, 285+ integrations, natural language intelligence, and role-aware insights, Anya turns complexity into clarity, helping teams scale securely and close the security skills gap. Anya is now generally available and included as part of the ArmorCode ASPM Platform. Visit https://securityweekly.com/armorcodersac to request a demo! As "vibe coding," the practice of using AI tools with specialized coding LLMs to develop software, is making waves, what are the implications for security teams? How can this new way of developing applications be made secure? Or have the horses already left the stable? Segment Resources: https://www.backslash.security/press-releases/backslash-security-reveals-in-new-research-that-gpt-4-1-other-popular-llms-generate-insecure-code-unless-explicitly-prompted https://www.backslash.security/blog/vibe-securing-4-1-pillars-of-appsec-for-vibe-coding This segment is sponsored by Backslash. Visit https://securityweekly.com/backslashrsac to learn more about them! The rise of AI has largely mirrored the early days of open source software. With rapid adoption amongst developers who are trying to do more with less time, unmanaged open source AI presents serious risks to organizations. Brian Fox, CTO & Co-founder of Sonatype, will dive into the risks associated with open source AI and best practices to secure it. 
Segment Resources: https://www.sonatype.com/solutions/open-source-ai https://www.sonatype.com/blog/beyond-open-vs.-closed-understanding-the-spectrum-of-ai-transparency https://www.sonatype.com/resources/whitepapers/modern-development-in-ai-era This segment is sponsored by Sonatype. Visit https://securityweekly.com/sonatypersac to learn more about Sonatype's AI SCA solutions! The surge in AI agents is creating a vast new cyber attack surface with Non-Human Identities (NHIs) becoming a prime target. This segment will explore how SandboxAQ's AQtive Guard Discover platform addresses this challenge by providing real-time vulnerability detection and mitigation for NHIs and cryptographic assets. We'll discuss the platform's AI-driven approach to inventory, threat detection, and automated remediation, and its crucial role in helping enterprises secure their AI-driven future. To take control of your NHI security and proactively address the escalating threats posed by AI agents, visit https://securityweekly.com/sandboxaqrsac to schedule an early deployment and risk assessment. Show Notes: https://securityweekly.com/asw-332

05-27
01:04:35

Appsec News & Interviews from RSAC on Identity and AI - Charlotte Wylie, Rami Saas - ASW #331

In the news, Coinbase deals with bribes and insider threat, the NCSC notes the cross-cutting problem of incentivizing secure design, we cover some research that notes the multitude of definitions for secure design, and discuss the new Cybersecurity Skills Framework from the OpenSSF and Linux Foundation. Then we share two more sponsored interviews from this year's RSAC Conference. With more types of identities, machines, and agents trying to access increasingly critical data and resources, across larger numbers of devices, organizations will be faced with managing this added complexity and identity sprawl. Now more than ever, organizations need to make sure security is not an afterthought, implementing comprehensive solutions for securing, managing, and governing both non-human and human identities across ecosystems at scale. This segment is sponsored by Okta. Visit https://securityweekly.com/oktarsac to learn more about them! At Mend.io, we believe that securing AI-powered applications requires more than just scanning for vulnerabilities in AI-generated code—it demands a comprehensive, enterprise-level strategy. While many AppSec vendors offer limited, point-in-time solutions focused solely on AI code, Mend.io takes a broader and more integrated approach. Our platform is designed to secure not just the code, but the full spectrum of AI components embedded within modern applications. By leveraging existing risk management strategies, processes, and tools, we uncover the unique risks that AI introduces—without forcing organizations to reinvent their workflows. Mend.io’s solution ensures that AI security is embedded into the software development lifecycle, enabling teams to assess and mitigate risks proactively and at scale. Unlike isolated AI security startups, Mend.io delivers a single, unified platform that secures an organization’s entire codebase—including its AI-driven elements. 
This approach maximizes efficiency, minimizes disruption, and empowers enterprises to embrace AI innovation with confidence and control. This segment is sponsored by Mend.io. Visit https://securityweekly.com/mendrsac to book a live demo! Show Notes: https://securityweekly.com/asw-331

05-20
01:01:48

Secure Code Reviews, LLM Coding Assistants, and Trusting Code - Rey Bango, Karim Toubba, Gal Elbaz - ASW #330

Developers are relying on LLMs as coding assistants, so where are the LLM assistants for appsec? The principles behind secure code reviews don't really change based on who writes the code, whether human or AI. But more code means more reasons for appsec to scale its practices and figure out how to establish trust in code, packages, and designs. Rey Bango shares his experience with secure code reviews and where developer education fits in among the adoption of LLMs. As businesses rapidly embrace SaaS and AI-powered applications at an unprecedented rate, many small-to-medium sized businesses (SMBs) struggle to keep up due to complex tech stacks and limited visibility into the skyrocketing app sprawl. These modern challenges demand a smarter, more streamlined approach to identity and access management. Learn how LastPass is reimagining access control through “Secure Access Experiences” - starting with the introduction of SaaS Monitoring capabilities designed to bring clarity to even the most chaotic environments. Secure Access Experiences - https://www.lastpass.com/solutions/secure-access This segment is sponsored by LastPass. Visit https://securityweekly.com/lastpassrsac to learn more about them! Cloud Application Detection and Response (CADR) has burst onto the scene as one of the hottest categories in security, with numerous vendors touting a variety of capabilities and making promises on how bringing detection and response to the application-level will be a game changer. In this segment, Gal Elbaz, co-founder and CTO of Oligo Security, will dive into what CADR is, who it helps, and what the future will look like for this game changing technology. Segment Resources - https://www.oligo.security/company/whyoligo To see Oligo in action, please visit https://securityweekly.com/oligorsac Show Notes: https://securityweekly.com/asw-330

05-13
01:09:38