The AI Native Dev - from Copilot today to AI Native Software Development tomorrow

Author: Tessl


Description

Welcome to The AI Native Developer, hosted by Guy Podjarny and Simon Maple. Join us as we explore and help shape the future of software development through the lens of AI. In this new paradigm of AI Native Software Development, we delve into how AI is transforming the way we build software, from tools and practices to the very structure of development teams.

Our target audience includes developers and development leaders eager to stay ahead of the curve. If you're passionate about the future of software development and curious about how to leverage AI to build effective teams and groundbreaking software, this podcast is for you.

Each week, we bring you insights into the latest AI tools and best practices, keeping you up-to-date with the cutting-edge advancements in the industry. Additionally, every two weeks, we present deep dives with experts and leaders in the AI and software development space, offering a glimpse into the future of AI development.

Tune in to discover how AI will revolutionize your workflows, roles, and organizations. Get inspired by the latest tools and best practices, and prepare to be part of the next generation of software development.

94 Episodes
Most teams think agentic dev is about writing better prompts. It's not. Guy Podjarny and Simon Maple explain why managing context, not crafting prompts, is what separates teams that scale with agents from teams that don't. They walk through a practical framework for building, evaluating, and distributing the context your agents actually need. In this episode: • Why agents fail without structured context about your internal platform • The 3 context layers: policies, platform docs, and applic...
"Charts are good for users, not good for agents. Agents look at the underlying data and do deep analysis." Mirko Novakovic built Instana, sold it to IBM, and now he's building Dash0, rethinking observability for agents, not humans. In conversation with Guy Podjarny, he explains: • why OpenTelemetry turned out to be perfect for AI • how UX changes when agents are your primary users • why interactive collaboration beats static chat outputs • the survival question for observability vendors in ...
One skill took coding success from 28% to 71%. Another made things worse. Guy Podjarny and Simon Maple tested 1000+ agent skills and reveal which ones actually work, which hurt performance, and why anecdotal evidence isn't enough anymore. Tessl Skills Registry is the first package manager for agent skills with built-in evaluations, versioning, and lifecycle management. Explore tested skills and see real performance data: https://tessl.io/registry On the docket: • Claude roasted Anthropic's...
“You have to prioritize between the thing you want to do and the thing that actually is driving the business, that’s what really big companies are fighting every single day.” As former CEO of GitHub and now a startup founder again, Thomas Dohmke brings a rare, inside-out perspective on innovation across both worlds. In conversation with Guy Podjarny, he explains: • why startups and incumbents fight in different weight classes • why humans shouldn’t sit in every feedback loop as agents scale...
Most AI agents fail because you're using them wrong. Here’s what actually works in production. In this episode, Simon Maple sits down with Itamar Friedman (CEO of Qodo) and Robert Brennan (CEO of OpenHands) at QCon AI. They pull back the curtain on why agents hallucinate, provide inconsistent answers, and ship low-quality code. On the docket: • Why a third of developer-reported AI output is incorrect • Why you must separate creative coding agents from structured review agents • How ex...
In this special milestone episode, Simon Maple and Guy Podjarny celebrate 1 million views by looking back at the chaos of 2025 and forecasting the high-stakes reality of 2026. On the docket: • Why appearing on this show has become a leading indicator for getting acquired or raising billions (and whether Simon should start charging a 2% carry). • The end of prompt engineering and the rise of context as the ultimate competitive advantage. • Our boldest 2026 predictions, including open models t...
What does it take to make AI work inside engineering teams? This high-stakes compilation episode with Ian Thomas (Meta), Wesley Reisz (ThoughtWorks), Sepehr Khosravi (Coinbase), and David Stein (ServiceTitan) goes inside the engineering rooms of the world's most sophisticated tech organisations to uncover how they're moving past AI hype into AI-native production. On the docket: • How Meta achieved 80% weekly AI adoption through grassroots community building instead of top-down mandates • Wh...
2025 changed what it means to be a developer. And 2026 is about to change even more. This Tessl episode brings a year-end reflection on how agents reshaped software development and what developers need to unlearn next, featuring Reuven Cohen, Founder of the Agentic Foundation, Maor Shlomo, Founder of Base44, and Maksim Shaposhnikov, Technical Member at Tessl. On the docket: • Reuven Cohen on why most agent systems fail and what agentic engineering demands • Maor Shlomo on where vibe coding...
Vibe coding is good only at creating a sense of progress for devs. In this episode of AI Native Dev, Cian Clarke, head of AI at Nearform, joins Simon Maple to talk about BMAD, their spec-driven approach that prioritizes clarity before code over prompt-first development. They also get into: • speed upfront vs. maintainability over time • why senior engineers gain leverage as junior pathways narrow • the cultural shift needed to scale AI-generated software beyond solo builders AI Native Dev,...
Before you add context, understand the context that’s already there. In this episode, Yaniv Aknin, founding engineer at Tessl, explains the built-in instructions that precede every user prompt, and why acknowledging that hidden layer is critical. On the docket: • why tool design matters more than raw reasoning ability • how Codex does more with fewer tools • how subagents let Claude stay flexible under context pressure • why generalist models may adapt better to unfamiliar tools AI Native...
We’re holding probabilistic systems to deterministic standards. In this episode of AI Native Dev, Simon Maple talks with Maria Gorinova, Member of Technical Staff at Tessl, about the mismatch between how developers expect software to behave, and how agents actually do. • how structured context improves abstraction use • why agent reliability can only be demonstrated through measurement • how Tessl’s tiles improve reliability without bloating context AI Native Dev, powered by Tes...
Your codebase already has AI contributors. If they don’t understand it, that’s on you. In this segment from AI Native DevCon, Sean Roberts, VP of Applied AI at Netlify, explains why agent experience is, in fact, an extension of developer experience. He also shares: • why hallucinations are what inference looks like when context is missing • why knowledge graphs matter for large codebases • using feedback loops to help agents learn from real deployments • the downside of standardising on a single ag...
LLMs don’t get smarter when you dump everything into context; they get distracted. At AI Native DevCon, Guy Podjarny unpacks the evolution of AI-augmented development, and how devs can get the most from current tools. On the docket: • how to help agents close the capability-reliability gap • why 'context engineering is basically the same as specs' • why statistical measurement is the only meaningful way to judge agent reliability • how ‘tiles’ level inconsistent documentation across old ...
If software can improve autonomously, why shouldn’t security? On this episode of AI Native Dev, René Brandel, founder and CEO of Casco, explores how upfront specs enable reliable agent generated software, and how that same discipline drives Casco’s autonomous, continuously improving security. On the docket: • how small teams with self-improving agents can outperform large security orgs. • why vibe coding is ideal for rapid prototyping and live customer iteration. • what a practical, scalabl...
In this compilation, Simon Maple brings together Baruch Sadogursky (TuxCare), Liran Tal (Snyk), Alex Gavrilescu (Greentube), and Josh Long (Broadcom) to break down where AI-assisted development fails, and what teams must do to keep it reliable. On the docket: • why spec-compiled tests must come before letting AI generate code • the uncertainty around what “AI security engineering” actually means today • how Backlog.md stays minimal so agents can split work autonomously • how Spring shows AI ...
How do you give an agent the same visibility a human developer has, without giving it full control? Alan Pope, Senior Developer Advocate at Tessl, explains how the Model Context Protocol (MCP) gives AI agents structured access to dev environments, enabling tools like Claude Code and TypingMind to read, build, and execute safely under human oversight. On the docket: • how MCP enables hybrid collaboration, letting agents take controlled actions inside local environments while surfacing every cha...
Bots follow scripts. Assistants wait for your commands. Agents act autonomously. Maksim Shaposhnikov, AI Research Engineer at Tessl, joins Simon Maple to unpack the capabilities of AI coding agents, including how developers can test and trust the code they generate. On the docket: • how sub-agents operate independently, maintaining their own context windows to handle complex tasks without overloading the main agent. • why human-in-the-loop oversight is still essential, even as agents can a...
As AI outpaces human review, latency compounds. On AI Native Dev, Graphite co-founder and CEO Merrill Lutsky joins Guy Podjarny to explore how stack-aware reviews remove friction and accelerate AI-native development. They also get into: • how Graphite’s architecture ensures traceability across AI-generated commits • what engineering velocity means when code quality depends on alignment • why the next generation of developers will act more like managers of autonomous dev teams than individu...
Even the smartest AI agent starts as a blank slate. Alexandru Gavrilescu, creator of Backlog.md, and Simon Maple explore how to give AI the right context and specifications so it can deliver like a human teammate, and sometimes faster. On the docket: • why humans still matter for review, but AI can accelerate work beyond traditional sprints • the rise of persistent agents that proactively manage tasks and subagents • preparing for a world of disposable, single-use software and continuous d...
The risk of letting AI do more than autocomplete? It can quickly spin out of control. On this episode of AI Native Dev, Steve Manuel, founder and CEO of Dylibso, unpacks MCP, the protocol that keeps AI extensions safe and predictable, and dives into mcp.run, his framework for tapping into shared MCP servers without losing control. With Simon Maple he shares: • why plugin-safe AI might be the most significant shift in developer tooling this decade • how mcp.run isolates compute to prevent AI...