AI Assisted Coding: Building Reliable Software with Unreliable AI Tools With Lada Kesseler
Description
In this special episode, Lada Kesseler shares her journey from AI skeptic to pioneer in AI-assisted development. She explores the spectrum from careful, test-driven development to quick AI-driven experimentation, revealing practical patterns, anti-patterns, and the critical role of judgment in modern software engineering.
From Skeptic to Pioneer: Lada's AI Coding Journey
"I got a new skill for free!"
Lada's transformation began when she discovered Anthropic's Claude Projects. Despite being skeptical about AI tools throughout 2023, she found herself learning Angular frontend development with AI—a technology she had no prior experience with. This breakthrough moment revealed something profound: AI could serve as an extension of her existing development skills, enabling her to acquire new capabilities without the traditional learning curve. The journey evolved through WindSurf and Claude Code, each tool expanding her understanding of what's possible when developers collaborate with AI.
Understanding Vibecoding vs. AI-Assisted Development
"AI assisted coding requires judgment, and it's never been as important to exercise judgment as now."
Lada introduces the concept of "vibecoding" as one extreme on a new dimension in software development—the spectrum from careful, test-driven development to quick, AI-driven experimentation. The key insight isn't that one approach is superior, but that developers must exercise judgment about which approach fits their context. She warns against careless AI coding for production systems: "You just talk to a computer, you say, do this, do that. You don't really care about code... For some systems, that's fine. When the problem arises is when you put the stuff to production and you really care about your customers. Please, please don't do that." This wisdom highlights that with great power comes great responsibility—AI accelerates both good and bad practices.
The Answer Injection Anti-Pattern When Working With AI
"You're limiting yourself without knowing, you're limiting yourself just by how you formulate your questions. And it's so hard to detect."
One of Lada's most important discoveries is the "answer injection" anti-pattern—when developers unconsciously constrain AI's responses by how they frame their questions. She experienced this firsthand when she asked an AI about implementing a feature using a specific approach, only to realize later that she had prevented the AI from suggesting better alternatives. The solution? Learning to ask questions more openly and reformulating problems to avoid self-imposed limitations. As she puts it, "Learn to ask the right way. This is one of the powers this year that's been kind of super cool." This skill of question formulation has become as critical as any technical capability.
Answer injection happens when we ask a leading question that, sometimes unknowingly, embeds a possible answer. It's an anti-pattern because LLMs have access to far more information than we do, and a constrained question cuts off answers we never considered. Lada's advice: "just ask for anything you need" — the LLM might have an answer you didn't anticipate.
Never Trust a Single LLM: Multi-Agent Collaboration
"Never trust the output of a single LLM. When you ask it to develop a feature, and then you ask the same thing to look at that feature, understand the code, find the issues with it—it suddenly finds improvements."
Lada shares her experiments with swarm programming — using multiple AI instances that collaborate and cross-check each other's work. She created specialized agents (architect, developer, tester) and even built orchestration systems using AppleScript.