Absolute AppSec

Author: Ken Johnson and Seth Law


Description

A weekly podcast of all things application security related. Hosted by Ken Johnson and Seth Law.
314 Episodes
In this episode, the hosts discuss the seismic shift in the application security landscape triggered by the rise of Large Language Models (LLMs) and Anthropic’s "Claude Code". They highlight the massive economic repercussions of these AI advancements, noting that billions in market value were wiped from traditional cybersecurity stocks as investors begin to believe frontier models might eventually write perfectly secure code. The hosts critique the industry's historical reliance on "checkbox" compliance tools like SAST, DAST, and SCA, arguing that these "archaic" methods are being replaced by AI-native strategies capable of reasoning through complex logic flaws. While they acknowledge that AI can suffer from "reasoning drift" and still requires deterministic validation to avoid false positives, they emphasize that security professionals must adapt by building custom "skills" and focusing on governance and observability. The discussion concludes that as developers move to "AI speed," the traditional role of the AppSec professional is evolving into a "Jarvis-like" orchestrator who manages automated workflows and infuses institutional knowledge into AI agents to maintain oversight without slowing down production.
Ken Johnson and Seth Law examine the intensifying pressure on security practitioners as AI-driven development causes an unprecedented acceleration in industry velocity. A primary theme is the emergence of "shadow AI," where developers utilize unauthorized AI coding assistants and personal agents, introducing significant data classification risks and supply chain vulnerabilities. The discussion dives into technical concepts like AI agent "skills"—markdown files providing specialized directions—and the corresponding security risks found in new skill registries, such as malicious tools designed to exfiltrate credentials and crypto assets. The hosts also review 1Password’s SCAM (Security Comprehension Awareness Measure), highlighting broad performance gaps in an AI's ability to detect phishing, with some models failing up to 65% of the time. To manage these unpredictable systems, the hosts advocate for a shift toward high-level validation roles, emphasizing the need for Subject Matter Expertise to combat "reasoning drift" and maintain safety through test-driven development and periodic "checkpoints". Ultimately, they conclude that while AI can simulate expertise, human oversight remains vital to secure the probabilistic nature of modern agentic workflows.
In episode 312 of Absolute AppSec, the hosts discuss the double-edged sword of "vibe coding", noting that while AI agents often write better functional tests than humans, they frequently struggle with nuanced authorization patterns and inherit "upkeep costs" as foundational models change behavior over time. A central theme of the episode is that the greatest security risk to an organization is not AI itself, but an exhausted security team. The hosts explore how burnout often manifests as "silent withdrawal" and emphasize that managers must proactively draw out these issues within organizations that often treat security as a mere cost center. Additionally, they review new defensive strategies, such as TrapSec, a framework for deploying canary API endpoints to detect malicious scanning. They also highlight the value of security scorecarding—pioneered by companies like Netflix and GitHub—as a maturity activity that provides a holistic, blame-free view of application health by aggregating multiple metrics. The episode concludes with a reminder that technical tools like Semgrep remain essential for efficiency, even as practitioners increasingly leverage the probabilistic creativity of LLMs.
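The canary-endpoint approach described above can be sketched in a few lines of Python. This is an illustrative toy under assumed names (the paths, function, and alert format are all hypothetical), not TrapSec's actual implementation:

```python
# Toy sketch of the canary-endpoint idea: API routes that no legitimate
# client should ever call, so any hit is a high-signal sign of scanning.
# Illustrative only; the paths and alert shape here are made up.
from typing import Optional

CANARY_PATHS = {"/api/v1/admin/backup", "/api/internal/debug"}

def check_request(path: str, source_ip: str) -> Optional[dict]:
    """Return an alert record if the request touched a canary endpoint."""
    if path in CANARY_PATHS:
        return {"alert": "canary_hit", "path": path, "source_ip": source_ip}
    return None

# A hit on a canary path yields an alert record; normal traffic does not.
assert check_request("/api/internal/debug", "203.0.113.7") is not None
assert check_request("/api/v1/users", "203.0.113.7") is None
```

The value of the pattern is its near-zero false-positive rate: because the endpoints are never advertised, any traffic to them almost certainly comes from automated scanning.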
Ken Johnson and Seth Law examine the profound transformation of the security industry as AI tooling moves from simple generative models to sophisticated agentic architectures. A primary theme is the dramatic surge in development velocity, with some organizations seeing pull request volumes increase by over 800% as developers allow AI agents to operate nearly hands-off. This shift is redefining the role of Application Security practitioners, moving experts from manual tasks like manipulating Burp Suite requests to a validation-centric role where they spot-check complex findings generated by AI in minutes. The hosts characterize older security tools as "primitive" compared to modern AI analysis, which can now identify human-level flaws like complex authorization bypasses. A major technical highlight is the introduction of agent "skills"—markdown files containing instructions that empower coding assistants—and the associated emergence of new supply chain risks. They specifically reference research on malicious skills designed to exfiltrate crypto wallets and SSH credentials, warning that registries for these skills lack adequate security responses. To manage the inherent "reasoning drift" of AI, the hosts argue that test-driven development has become a critical safety requirement. Ultimately, they warn that the industry has already shifted fundamentally, and security professionals must lean into these new technologies immediately to avoid becoming obsolete in a landscape that evolves day by day.
In this episode of Absolute AppSec, hosts Ken Johnson and Seth Law interview Mohan Kumar and Naveen K Mahavisnu, the practitioner-founders of Aira Security, to explore the critical challenges of securing autonomous AI agents in 2026. The conversation centers on the industry's shift toward "agentic workflows," where AI is delegated complex tasks that require monitoring not just for access control, but for the underlying "intent" of the agent's actions. The founders explain that agents can experience "reasoning drift," taking dangerous or unintended shortcuts to complete missions, which necessitates advanced guardrails like "trajectory analysis" and human-in-the-loop interventions to ensure safety and data integrity. A significant portion of the episode is dedicated to the security of the Model Context Protocol (MCP), highlighting how these integration servers can be vulnerable to "shadowing attacks" and indirect prompt injections—exemplified by a real-world case where private code was exfiltrated via a public GitHub pull request. To address these gaps, the guests introduce their open-source tool, MCP Checkpoint, which allows developers to baseline their agentic configurations and detect malicious changes in third-party tooling. Throughout the discussion, the group emphasizes that as AI moves into production, security must evolve into a proactive enablement layer that understands the probabilistic and unpredictable nature of LLM reasoning.
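The baselining idea described for MCP Checkpoint—record a known-good agent configuration, then detect when third-party tooling changes underneath you—can be sketched as hashing a canonical rendering of the config. This is a hypothetical illustration of the concept, not the tool's real code:

```python
# Sketch of config baselining: fingerprint an agent/MCP server config
# and compare later to detect silent changes in third-party tooling.
# Hypothetical illustration only; not MCP Checkpoint's implementation.
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    """SHA-256 over a canonical (sorted-key) JSON rendering of the config."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

baseline = {"server": "files", "tools": ["read_file"], "version": "1.0"}
tampered = {"server": "files", "tools": ["read_file", "exec_shell"], "version": "1.0"}

# An identical config reproduces the fingerprint; a silently added tool changes it.
assert config_fingerprint(baseline) == config_fingerprint(dict(baseline))
assert config_fingerprint(baseline) != config_fingerprint(tampered)
```

Canonical serialization (sorted keys) matters here: without it, two semantically identical configs could hash differently and produce false alarms.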
In this episode of Absolute AppSec, Nathan Hunstad, Director of Security at Vanta, discusses the intersection of security policy, governance, and technical defense. Drawing on his unique background in political science and the Minnesota state legislature, Hunstad argues that policy acts as the essential "conductor" for an organization's security tools. A major theme of the conversation is the challenge of compliance for startups, with the group advising founders to prioritize business survival and basic security hygiene—like password managers and IAM—before pursuing intensive certifications like SOC 2. The discussion also explores how AI is accelerating both development velocity and the ability to automate tedious security questionnaires. Furthermore, Hunstad contrasts the security posture of modern, cloud-native startups against legacy enterprises, noting that older organizations often struggle with "dark corners" of un-inventoried, vulnerable legacy tech. The episode concludes with a critique of outdated authentication standards, specifically advocating for the removal of mandatory password rotation in favor of NIST-aligned, phishing-resistant MFA.
Ken Johnson (cktricky on social media) and Seth Law are happy to announce a special episode of Absolute AppSec with Avi Douglen (sec_tigger on X), long-time OWASP Global Board of Directors member, founder and CEO of Bounce Security, and co-author of the Threat Modeling Manifesto. The conversation ranges from Application Privacy as it relates to Application Security, to participating in meetups and conferences, and finally to Avi's experience as an OWASP board member.
In episode 307 of Absolute AppSec, hosts Ken and Seth conduct a retrospective on the application security landscape of 2025. They conclude that their previous predictions were largely accurate, particularly regarding the rise of prompt injection, AI-backed attacks, and the industry-wide shift toward per-token billing models. A major theme of the year was the solidification of supply chain security as a critical pillar of AppSec, driven by notable incidents such as Shai Hulud and React for Shell. The hosts also share insights from their four-day training course on utilizing LLMs for secure code review, noting that while AI development is becoming more prevalent, most practitioners are still in the nascent stages of building custom tooling. Much of the discussion focuses on the Model Context Protocol (MCP); while it offers significant value for agentic workflows, the hosts criticize its current lack of robust security controls, specifically highlighting issues with OAuth implementations and short timeouts in existing clients. Finally, they discuss how the industry is moving toward a more nuanced balance between deterministic tools like Semgrep and the probabilistic creativity of LLMs to increase efficiency in security consulting.
Given the spate of recent npm news stories, we've arranged a topical show with software supply-chain security researcher and npm hacker Paul McCarty (find Paul on bsky: https://bsky.app/profile/6mile.githax.com). Paul is currently a researcher with Safety (https://getsafety.com/) and has a background in security that includes work at John Deere, Boeing, Regence Blue Cross/Blue Shield, NASA Jet Propulsion Lab, the US Army, and the Queensland Government. He's also spent some twenty-odd years helping startups with security practices, and he is a maintainer of the Open Source Malware project. In addition, Paul has been a long-time friend of the show, contributing his insights to the Absolute AppSec community Slack and frequently writing up his research at the SourceCode RED blog: https://sourcecodered.com/blog.
The latest episode of Absolute AppSec is here, with Ken Johnson and Seth Law checking in during the busy Q4 holiday season to share some fascinating insights on the evolving landscape of security and technology. They kick off by reflecting on their intensive, ever-changing "Harnessing LLMs for Application Security" courses, noting how rapidly the underlying tech evolves. The conversation quickly turns to a compelling debate: How will the rise of generative AI impact career paths for newcomers, especially given that LLMs fundamentally rely on the contributions of existing experts? While pathways may change, they agree that core human activities—like networking, contributing to projects, and maintaining a hacker mindset—will remain crucial. The hosts then dive into a fascinating discussion on the darker side of SEO, introducing the concept of Generative AI Engine Optimization (GEO), where marketers exploit AI search results through tricks like keyword-stuffed files to game rankings. They tie this to historical examples of exploitation, harkening back to Google hacking days. Finally, they cover the recent Shai Hulud 2 supply chain attack, which infected hundreds of NPM packages and utilized even more sophisticated obfuscation and delayed execution tactics than its predecessor.
This episode, the 304th of Absolute AppSec, features hosts Ken Johnson (@cktricky) and Seth Law (@sethlaw) discussing the crush of Q4 expectations, upcoming training opportunities, the recent updates to the OWASP Top Ten, and the impact of AI tools like XBow on application security (AppSec) consulting. The hosts discuss the shift in the OWASP Top Ten from focusing on vulnerabilities to focusing on risks, and the dual role the list now plays for both awareness/training and compliance. On the recent funding of XBow, the overall consensus is that while AI tools dramatically improve process flow, scoping, and the speed of vulnerability identification for consultants, they won't replace the need for human experts for complex, bespoke systems, business logic flaws, or authorization issues. AI is commoditizing lower-level AppSec work.
Prof. Brian Glas (infosecdad on social media) joins Seth Law (sethlaw) and Ken Johnson (cktricky) for a timely episode of Absolute AppSec. An infosec guru and one of the OWASP Top Ten project leaders, Prof. Glas appears in the aftermath of the Global AppSec conference and the announcement of the new OWASP Top Ten (2025). This episode focuses on the process for compiling the list, as well as any other insights gleaned from Prof. Glas.
Episode 302 of Absolute AppSec has hosts Ken Johnson and Seth Law speculating on the upcoming Global AppSec DC conference, predicting the announcement of the OWASP Top Ten 2025 edition, with Brian Glas scheduled to discuss it on the podcast. The conversation shifts to a technical discussion of OpenAI's new browser, Atlas, which is built on Chromium and includes AI capabilities. The hosts raise concerns over the discovered prompt instructions for Atlas, which direct the ChatGPT agent to use browser history and available APIs to find data from the user's logged-in sites to answer ambiguous queries or fulfill requests. This functionality raises significant security concerns, as the agent's ability to comb the cache and logged-in sites could be exploited, effectively creating a "honeypot for cross-site scripting" with malicious potential such as unauthorized money transfers. The hosts also note the lack of talk submissions on Model Context Protocol (MCP) security at the conference, despite its growing relevance in a world of AI agents and tooling. Finally, they highlight a new tool called SlopGuard, developed to prevent the risk of AI hallucinating non-existent, potentially malicious packages (which occurs 5-21% of the time) and attempting to install them from registries like NPM.
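The defense described above—catching hallucinated dependencies before they are installed—amounts to checking AI-suggested package names against a trusted source of truth. The sketch below uses a hardcoded allowlist to stay self-contained (in practice you would query the registry or a curated mirror); it is an illustration of the idea, not SlopGuard's actual implementation, and the package names are hypothetical:

```python
# Sketch of hallucinated-dependency screening: flag AI-suggested package
# names that don't appear in a trusted list, instead of auto-installing.
# Illustrative only; suggested names below are invented examples.
def flag_suspect_packages(suggested, known_packages):
    """Return AI-suggested dependencies absent from the trusted package set."""
    return sorted(set(suggested) - set(known_packages))

known = {"express", "lodash", "react"}  # stand-in for real registry data
suggested = ["express", "left-padz", "lodash-utils-pro"]  # hypothetical AI output

# Unknown names get flagged for human review rather than installed.
assert flag_suspect_packages(suggested, known) == ["left-padz", "lodash-utils-pro"]
```

This gate is deliberately conservative: a flagged name may simply be a new legitimate package, but given that attackers pre-register commonly hallucinated names, blocking until verified is the safer default.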
In this episode, Seth and Ken debate OpenAI's Atlas browser, which embeds AI into web browsing. Ken views it as a major privacy concern, potentially accelerating invasive data collection and surveillance, while Seth notes that new browsers historically ship with critical flaws. They acknowledge that AI is very useful for generic and technical internet searches. They also discuss the CoPhish attack, a phishing technique abusing Microsoft Copilot Studio that could exfiltrate access tokens via a seemingly valid Microsoft URL. Finally, they note that big companies like Snyk and Black Duck are moving toward agentic AI capabilities, confirming the industry trend.
For the 300th (!!!!) episode of the podcast, Seth and Ken reminisce on changes to the industry and the overall approach to application security since the show's inception. The hosts discuss the evolution of the industry, noting that once-popular approaches like blindly emulating "hip" Silicon Valley security programs and running unmanaged Security Champions Programs have fallen out of favor, as organizations now better understand that these approaches are not one-size-fits-all and require careful, metrics-driven management. While Bug Bounty Programs remain popular, they note an increase in submissions from "skiddies" (script kiddies) that challenges program effectiveness and highlights the need for internal support and a proactive stance before rolling out a public program. Positively, they observe that the industry has become more mature, focusing on business value, metrics, and ROI, a move that may have been accelerated by recent economic pressures. Security practices have also improved: the decline of common vulnerabilities like XSS and SQL Injection, thanks to safer frameworks and browser controls, allows AppSec professionals to focus on more complex issues, such as business logic flaws and focused threat analysis, while the once-monolithic process of threat modeling has evolved into a more nimble, "point-in-time" assessment readily adopted by developers.
The duo is back after a short hiatus. Today's episode is inspired by recent articles related to startups, funding, and the grind that happens when building a company or working as an individual contributor. Specifically, a recent article about AI startup founders putting in long hours to the exclusion of everything else is debated. This is followed by a discussion of the current security AI startup hype cycle, spurred by thoughts from Frankly Speaking, and how security companies in general are acquired and disappear over time.
In what is (sadly) becoming a weekly segment, this episode starts with talk of the latest installment of npm package takeovers, dubbed Shai Hulud, as discussed in Slack and analyzed by Paul McCarty and team. The hosts cover strategies for monitoring packages and preventing malware from entering an organization's products. This is followed by an article referencing security via intentional redundancy when designing sensitive application functionality. Finally, they analyze a recent article from Frankly Speaking that lists a series of new commandments for security teams, most of which Seth and Ken agree with, with some caveats.
The Absolute AppSec duo returns with an in-depth episode on true and false positives, where context matters and business impact must be taken into account in order to avoid rabbit holes. The discussion was spurred by a recent article from signalblur of magonia.io on alerts in a security operations center. In short, the mere existence of a flaw (or alert) is not enough by itself; true impact comes from understanding context. Anyone want t-shirts? The episode closes with a discussion of the recent successful phish of an npm package maintainer, which exposed the millions of projects depending on popular npm packages. It happens, folks, protect yourself.
Ken and Seth kick off the podcast by reviewing the current state of the OWASP Top 10 project, given recent requests and interactions from various contributors on the Absolute AppSec Slack. This is followed by an in-depth breakdown of the recent NX npm package compromise, which shows that even though AI was weaponized to exfiltrate data, the main exploit was the result of a command injection flaw. Crocs and Socks coming back to bite all of us. Finally, Ken and Seth provide a list of resources they use to monitor the wider security community.
Seth and Ken return with a new episode summarizing their experience at DEF CON 33 and all things Las Vegas over the past month, including panels, talks, workshops, happy hours, and even corporate (boo) events. This is followed by a discussion of a few research items that came out of the conference, including James Kettle's HTTP/1.1 Must Die talk. Finally, why AI is infecting Application Security.