Head of Claude Code: What happens after coding is solved | Boris Cherny
Digest
This episode explores the transformative impact of AI, particularly Claude Code, on software development: AI now generates a significant portion of code, has dramatically increased engineer productivity, and may make traditional coding roles obsolete. The discussion covers the genesis and rapid growth of Claude Code, its surprising ability to use tools, and the product lessons learned along the way. The concept of "latent demand" is presented as a key driver of AI product success, exemplified by Claude Code's evolution into tools like Cowork.

The conversation also takes in the accelerating pace of change in software engineering, the importance of exponential thinking, and the future of coding, drawing parallels to historical innovations like the printing press. It examines AI's broader societal implications, including its potential to empower non-technical users, the evolving role of software engineers, and the critical need for AI safety. Principles for building AI products, such as equipping models with tools and betting on general models, are discussed alongside practical advice for users of AI coding tools and a vision of a future where everyone can program. The episode closes with reflections on job enjoyment in the AI era, the blurring of traditional roles, and Anthropic's philosophy of balancing innovation with safety.
Outlines

The AI Revolution in Software Development
The podcast begins by exploring the profound impact of AI, specifically Claude Code, on software development. The guest shares personal experiences of AI-generated code, highlighting increased productivity and the potential shift in traditional software engineering roles. This section also introduces sponsors DX and Sentry, showcasing their roles in developer intelligence and error tracking, respectively.

Boris Cherny's Journey and Anthropic's Mission
Boris Cherny discusses his career path, including a brief stint at Cursor and his return to Anthropic, emphasizing the company's core mission of AI safety as a primary motivator.

Claude Code's Genesis, Growth, and Impact
The discussion focuses on Claude Code's significant impact on software development, with a substantial percentage of GitHub commits now AI-authored. Boris reflects on its unexpected growth from a "hack" to a transformative tool, detailing its evolution from coding to tool and computer use, with a strong emphasis on safety. Early prototyping, including the "Claude CLI," and its surprising tool-using capabilities are also covered, alongside the product decision to maintain a terminal-based interface.

Viral Growth and Latent Demand
Claude Code saw rapid internal adoption, followed by a deliberately slower but ultimately significant external rollout. The principle of "latent demand" is explained as a key factor in its success: bringing tools into users' existing workflows can surface unforeseen use cases and drive product adoption.

The Accelerating Pace of Change and Exponential Thinking
The podcast emphasizes the rapid transformation of software engineering within a year, with AI-generated code becoming commonplace. Anthropic's culture of exponential thinking, rooted in scaling laws, is discussed, revisiting past predictions about the need for IDEs and highlighting the swift advancement of AI capabilities.
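The gap between linear intuition and exponential reality can be made concrete with a toy projection (illustrative numbers only, not figures from the episode):

```python
# Illustrative only: compare a linear forecast (constant absolute gain per
# month) with an exponential one (doubling per month) from the same start.

start, months = 1.0, 12

# Linear: add one "start" unit each month.
linear = start + months * start

# Exponential: double each month.
exponential = start * 2 ** months

print(f"after {months} months: linear={linear:.0f}x, exponential={exponential:.0f}x")
# A linear forecast reaches 13x; the doubling process reaches 4096x.
```

The three-orders-of-magnitude gap after one year is why linear extrapolation consistently underestimates processes governed by scaling laws.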

Innovation Through Play and Personal AI Adoption
The importance of experimentation and allowing space for innovation is highlighted. Boris shares his personal journey of relying entirely on Claude Code for his programming needs since November, illustrating the gradual but significant increase in AI's contribution to his work.

The Next Frontier: AI Ideation and Beyond Coding
The conversation shifts to the future evolution of AI, with Claude Code now capable of generating ideas for bug fixes and features. The speaker posits that coding is becoming a solved problem, and AI's focus is expanding to broader ideation and problem-solving tasks, even assisting product managers.

Unprecedented Productivity Gains and Adapting to Change
AI's growing capability in code review and its impact on overall engineering productivity are discussed, with Anthropic reporting a 200% increase in productivity. The challenges of keeping up with rapid AI model updates are also addressed, alongside key principles for managing AI teams, such as encouraging AI usage and prioritizing speed.

Empowering Engineers and Token Usage
Advice is given to provide engineers with ample tokens for experimentation rather than focusing on cost-cutting initially. The trend of engineers consuming significant AI tokens is noted, suggesting a shift in cost structures for AI-powered development. Boris expresses increased enjoyment in coding due to AI handling minutiae.

The Practicality and Evolution of Programming
Boris shares his early, practical motivations for coding and reiterates his view of programming as a practical tool. Concerns about skill atrophy are addressed by viewing programming as an evolving continuum, drawing parallels to historical changes like the introduction of punch cards.

Historical Parallels and Democratizing Skills
The printing press is used as an analogy for AI's current revolution in coding, predicting a future where traditional coding skills become less critical. The printing press's democratizing effect on information and skills is detailed, highlighting its role in increasing literacy and societal transformation.

AI Empowering Non-Technical Users and Evolving Roles
AI tools are enabling individuals without technical experience to build software, mirroring the accessibility brought by the printing press. The ease with which AI helps unblock users, especially former engineers, is highlighted. The software engineer's role is evolving towards describing desired outcomes to AI, a rapid shift occurring over the past year.

AI's Broad Impact and Understanding Agents
The discussion explores which roles beyond engineering will be most impacted by AI, suggesting product managers, designers, and data scientists are next. The concept of an "agent" in AI is clarified as an LLM capable of using tools and acting within systems. The profound societal and industrial implications of AI advancement are discussed, with Anthropic prioritizing ethical development.

Jobs, Productivity, and the Unpredictable Future
Fears of job displacement are contrasted with the Jevons Paradox. The guest shares that AI has made their work more enjoyable and productive. While acknowledging current excitement, the speaker admits uncertainty about the long-term impact of AI on jobs, drawing parallels to the printing press's unforeseen consequences.

Universal Programming and Navigating Disruption
The speaker envisions a future where everyone can program, leading to unprecedented innovation. Key advice for succeeding in the AI era includes experimenting with AI tools, embracing a generalist mindset, and crossing disciplinary boundaries to foster adaptability.

Blurring Lines and Sponsor: MetaView
Traditional roles are becoming blurred as AI tools enable cross-functional capabilities, with predictions of new titles like "builder" emerging. This section also includes a sponsor message for MetaView, an AI-powered hiring platform.

AI's Impact on Job Enjoyment and Designer Adoption
A survey indicates a high percentage of engineers and PMs enjoy their jobs more with AI tools. At Anthropic, designers are increasingly coding with AI assistance, enhancing their ability to unblock themselves and learn new skills, contributing to their job satisfaction.

User-Centric Design and Latent Demand in Practice
Concepts like "multi-Clauding" (running multiple Claude Code sessions in parallel) and bringing AI products into users' existing workflows are discussed. Latent demand is further illustrated through examples like Facebook Marketplace and the emergence of Cowork from observing users' unconventional use of Claude Code.

The Power of Early Release and Iteration
Releasing products like Cowork early, even in a rough state, is crucial for gathering feedback, understanding user needs, and iterating based on real-world usage and latent demand.

Anthropic's Multi-Layered Approach to AI Safety
AI safety is approached through mechanistic interpretability, controlled testing (evals), and real-world observation. This layered approach is crucial for responsible AI development, balancing intense competition with a deep commitment to ensuring AI benefits humanity.

Mechanistic Interpretability and the "Race to the Top"
Mechanistic interpretability, pioneered by Chris Olah, involves studying AI models at a neuron level to understand their internal workings. Anthropic's "race to the top" philosophy encourages open-sourcing safety tools to promote responsible AI development across the industry.
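As a rough intuition for what neuron-level study means — this toy sketch is mine, not Anthropic's actual interpretability tooling — one can probe a tiny fixed network with one-hot inputs and record which hidden units respond to which input features:

```python
import numpy as np

# Toy illustration of activation probing (a sketch, not Anthropic's method):
# feed one-hot inputs into a tiny fixed network and record which hidden
# "neurons" fire for each input feature.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # 4 input features -> 3 hidden neurons

def hidden_activations(x: np.ndarray) -> np.ndarray:
    """ReLU activations of the single hidden layer."""
    return np.maximum(x @ W, 0.0)

for i in range(4):
    probe = np.zeros(4)
    probe[i] = 1.0  # one-hot probe: isolate feature i
    acts = hidden_activations(probe)
    print(f"feature {i}: active neurons {np.flatnonzero(acts).tolist()}")
```

Real interpretability work operates on models with billions of parameters and far subtler techniques, but the basic move — relating internal activations to identifiable features — is the same.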

Agent Anxiety and the Evolution of "Coding"
The anxiety of idle agents is discussed, with the guest maintaining multiple agents for continuous operation. The definition of "coding" is evolving to encompass describing desired outcomes to AI, a shift that parallels historical changes in programming methodologies. A personal connection to programming history is shared through an anecdote about a Soviet programmer grandfather.

Key Principles for Building AI Products
Advice for building AI products includes not boxing in models, betting on general models, and embracing the "bitter lesson" of AI's continuous improvement. The principle of giving AI models tools and goals, rather than imposing strict workflows, leads to better results.
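A minimal way to picture "tools and goals, not workflows" is an agent loop in which the model decides which tool to call next. The sketch below is illustrative: `fake_llm` stands in for a real model call, and every name in it is hypothetical.

```python
# Minimal agent loop (illustrative sketch): instead of hardcoding a workflow,
# give the model a goal plus a tool registry and let it choose each action.
# `fake_llm` stands in for a real model call; all names are hypothetical.

def add(a: float, b: float) -> float:
    """Example tool: addition."""
    return a + b

TOOLS = {"add": add}

def fake_llm(messages: list) -> dict:
    """Stand-in model: requests the `add` tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The sum is {messages[-1]['content']}"}

def run_agent(goal: str, llm=fake_llm, max_steps: int = 5) -> str:
    """Loop: ask the model for an action; run tools until it answers."""
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = llm(messages)
        if "answer" in action:  # model declared it is done
            return action["answer"]
        result = TOOLS[action["tool"]](**action["args"])  # run chosen tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish within max_steps")

print(run_agent("What is 2 + 3?"))  # prints "The sum is 5"
```

The point of this shape is that adding a capability means registering another tool, not rewriting the control flow — the model, not the product, owns the plan.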

The Bitter Lesson and Building for the Future
The "bitter lesson" posits that more general AI models consistently outperform specialized ones. The strategy of building AI products for future model capabilities, not just current ones, is discussed as a driver of exponential growth, anticipating advancements in AI's coding and tool-use abilities.

Anticipating AI Advancements and Using AI Tools
The discussion focuses on predicting how AI models will improve, particularly in tool usage and sustained operation. Tips for users of AI coding tools like Claude Code include using the most capable model, leveraging "plan mode," and exploring different interfaces.

Competition, User Focus, and Post-AGI Aspirations
The speaker addresses competition in the AI coding agent space by emphasizing a focus on solving user problems and user feedback. A personal aspiration for life after AGI is shared: a slower pace, possibly in rural Japan, focusing on making miso.

Anthropic's Philosophy and Recommended Reading
Anthropic's core philosophy of progressing from coding to tool and computer use is reiterated, with the rapid growth of products like Claude Code validating this vision. Recommended reading includes technical and sci-fi books that explore future possibilities and the pace of change.

AI Leaders' Habits and Product Discoveries
AI leaders often have limited time for traditional media. The speaker highlights Cowork as a life-changing product for automating tedious tasks and recommends the "Acquired" podcast for business history insights.

Practical Applications of Cowork and Common Sense
Detailed advice is provided on using Cowork, including starting with tool usage and connecting it to other applications. The speaker's life motto, "use common sense," is emphasized as crucial for avoiding failures in work and product development.

Engaging with Users and Conclusion
The speaker explains their increased activity on Twitter/X as a way to engage directly with users, gather feedback, and address bugs quickly. This direct interaction is vital for product improvement, concluding the podcast's exploration of AI's impact on software development and beyond.
Keywords
Claude Code
An AI-powered coding assistant developed by Anthropic that generates code, assists with debugging, and automates various software development tasks, significantly boosting engineer productivity.
AI Agents
AI systems designed to perform tasks autonomously, interact with environments, and use tools to achieve specific goals. In software development, they can write code, review pull requests, and even suggest new features.
Latent Demand
The principle that unmet needs or desires exist within a market that are not immediately apparent. Identifying and addressing latent demand can lead to the creation of highly successful products and services.
Mechanistic Interpretability
A field of AI research focused on understanding the internal workings of neural networks by studying individual neurons and their connections, aiming to decipher how AI models process information and make decisions.
Exponential Thinking
A mindset that anticipates rapid, accelerating growth and change, often applied in technology and business strategy. It contrasts with linear thinking and is crucial for navigating fast-evolving fields like AI.
Developer Productivity
The efficiency and output of software developers. AI tools like Claude Code aim to dramatically increase developer productivity by automating tasks, reducing boilerplate code, and streamlining workflows.
AI Safety
The research and practice of ensuring that artificial intelligence systems are developed and deployed in a way that is beneficial to humanity and avoids unintended negative consequences. This includes alignment, interpretability, and robust testing.
Generative AI
A type of artificial intelligence capable of creating new content, such as text, images, music, and code. Generative AI models are transforming industries by automating creative and technical tasks.
Opus 4.6
A highly capable AI model developed by Anthropic, known for its advanced reasoning and coding abilities. It represents a significant leap in AI performance, enabling more complex tasks and better tool utilization.
Q&A
How has AI, specifically Claude Code, changed the role of a software engineer?
AI tools like Claude Code are automating significant portions of software development, including code generation and review. This allows engineers to focus on higher-level tasks like problem-solving, ideation, and system design, rather than the minutiae of writing code. Many engineers now report 100% of their code being AI-generated.
What is "latent demand" and how has it influenced product development at Anthropic?
Latent demand refers to unmet needs that users express through unconventional product usage. Anthropic observed users employing Claude Code for non-coding tasks, leading to the development of Cowork, a product designed to address this demand for broader AI assistance beyond coding.
What are the key principles for building successful AI products, according to the guest?
Key principles include not restricting AI models with rigid workflows but providing them with tools and goals to figure out solutions themselves. It's also crucial to bet on more general AI models, as they tend to outperform specialized ones over time, and to embrace rapid iteration and user feedback.
How does Anthropic approach AI safety, and why is it important?
Anthropic employs a multi-layered approach to AI safety, including mechanistic interpretability to understand model internals, rigorous evaluations in controlled settings, and real-world observation. This focus is critical to ensure AI development is aligned with human values and benefits society.
What is the future outlook for coding skills and the software engineering profession in the age of AI?
The speaker predicts that traditional coding skills may become less critical within a few years as AI handles most of the coding process. The profession is likely to evolve towards roles like "builder," where individuals describe desired outcomes to AI, democratizing software creation.
What is the core strategy behind building AI products at Anthropic?
Anthropic's strategy is to build for the future capabilities of AI models, anticipating advancements in areas like coding and tool use, rather than focusing solely on current limitations. This forward-looking approach has driven exponential growth.
How has AI model performance evolved in terms of task execution?
Early AI models could only perform tasks for short durations (seconds to minutes) and required constant supervision. Newer models, like Opus 4.6, can run unattended for much longer periods (minutes to hours, even days), significantly reducing the need for human babysitting.
What are the key recommendations for users of AI coding tools like Claude Code?
Users should utilize the most capable models available, enable "maximum effort" settings, and leverage "plan mode" for better control over AI-generated code. Experimenting with different interfaces (terminal, desktop app, web) is also encouraged.
How does Anthropic view competition in the AI coding agent market?
While aware of competitors, Anthropic prioritizes solving user problems and improving their product based on direct feedback. They believe competition ultimately benefits users by driving innovation and forcing all players to improve.
What is the significance of "common sense" in professional and product development?
"Common sense" is crucial for avoiding failures. It involves thinking from first principles, questioning processes, and recognizing when an idea or product direction "smells weird," leading to better decision-making and outcomes.
Why did the speaker become more active on Twitter/X?
Increased activity on Twitter/X stemmed from a desire to engage directly with users, identify bugs, and gather feedback. The ability to quickly address issues using AI tools like Claude Code made this interaction surprisingly effective and rewarding.
What are some recommended books for technical and broader thinking?
Recommended books include "Functional Programming in Scala" for its technical elegance, "Accelerando" by Charles Stross for its capture of rapid change, and "The Wandering Earth" by Liu Cixin for its grand concepts and a window into Chinese sci-fi.
How does Cowork automate tasks and what are its key features?
Cowork uses AI to perform actions like managing subscriptions, filling out forms, and handling project management tasks. Its Chrome integration lets it interact with web pages, and it can connect to other tools for more complex workflows, significantly reducing manual effort.
Show Notes
Boris Cherny is the creator and head of Claude Code at Anthropic. What began as a simple terminal-based prototype just a year ago has transformed the role of software engineering and is increasingly transforming all professional work.
We discuss:
1. How Claude Code grew from a quick hack to 4% of public GitHub commits, with daily active users doubling last month
2. The counterintuitive product principles that drove Claude Code’s success
3. Why Boris believes coding is “solved”
4. The latent demand that shaped Claude Code and Cowork
5. Practical tips for getting the most out of Claude Code and Cowork
6. How underfunding teams and giving them unlimited tokens leads to better AI products
7. Why Boris briefly left Anthropic for Cursor, then returned after just two weeks
8. Three principles Boris shares with every new team member
—
Brought to you by:
DX—The developer intelligence platform designed by leading researchers: https://getdx.com/lenny
Sentry—Code breaks, fix it faster: https://sentry.io/lenny
Metaview—The AI platform for recruiting: https://metaview.ai/lenny
—
Episode transcript: https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens
—
Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0
—
Where to find Boris Cherny:
• LinkedIn: https://www.linkedin.com/in/bcherny
• Website: https://borischerny.com
—
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
—
In this episode, we cover:
(00:00) Introduction to Boris and Claude Code
(03:45) Why Boris briefly left Anthropic for Cursor (and what brought him back)
(05:35) One year of Claude Code
(08:41) The origin story of Claude Code
(13:29) How fast AI is transforming software development
(15:01) The importance of experimentation in AI innovation
(16:17) Boris’s current coding workflow (100% AI-written)
(17:32) The next frontier
(22:24) The downside of rapid innovation
(24:02) Principles for the Claude Code team
(26:48) Why you should give engineers unlimited tokens
(27:55) Will coding skills still matter in the future?
(32:15) The printing press analogy for AI’s impact
(36:01) Which roles will AI transform next?
(40:41) Tips for succeeding in the AI era
(44:37) Poll: Which roles are enjoying their jobs more with AI
(46:32) The principle of latent demand in product development
(51:53) How Cowork was built in just 10 days
(54:04) The three layers of AI safety at Anthropic
(59:35) Anxiety when AI agents aren’t working
(01:02:25) Boris’s Ukrainian roots
(01:03:21) Advice for building AI products
(01:08:38) Pro tips for using Claude Code effectively
(01:11:16) Thoughts on Codex
(01:12:13) Boris’s post-AGI plans
(01:14:02) Lightning round and final thoughts
—
References: https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
—
Lenny may be an investor in the companies discussed.
To hear more, visit www.lennysnewsletter.com