The Pragmatic Engineer

Author: Gergely Orosz

Subscribed: 907 · Played: 15,277

Description

Software engineering at Big Tech and startups, from the inside. Deepdives with experienced engineers and tech professionals who share their hard-earned lessons, interesting stories, and advice on building software.

Especially relevant for software engineers and engineering leaders, and useful for anyone working in tech.

newsletter.pragmaticengineer.com
55 Episodes
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Sonar – The makers of SonarQube, the industry standard for automated code review.
• WorkOS – Everything you need to make your app enterprise ready.
—
Steve Yegge has spent decades writing software and thinking about how the craft evolves. From his early years at Amazon and Google to his influential blog posts, he has often been early to spot shifts in how software gets built. In this episode of The Pragmatic Engineer, I talk with Steve about how AI is changing engineering work, why he believes coding by hand may gradually disappear, and what developers should focus on instead. We discuss his latest book, Vibe Coding, and the open-source AI agent orchestrator he built called Gas Town, which he said most devs should avoid using.

Steve shares his framework for levels of AI adoption by engineers, ranging from avoiding AI tools entirely to running multiple agents in parallel. We discuss why he believes the knowledge engineers need keeps changing, and why understanding how systems evolve may matter more than mastering any particular tool.

We also explore broader implications. Steve argues that AI’s role is not primarily to replace engineers, but to amplify them. At the same time, he warns that the pace of change will create new kinds of technical debt, new productivity pressures, and fresh challenges for how teams operate.
—
Timestamps
(00:00) Intro
(01:43) Steve’s latest projects
(02:27) Important blog posts
(04:48) Shifts in what engineers need to know
(10:46) Steve’s current AI stance
(13:23) Steve’s book Vibe Coding
(18:25) Layoffs and disruption in tech
(31:13) Gas Town
(40:10) New ways of working
(51:08) The problem of too many people
(54:45) Why AI results lag in business
(59:57) Gamification and product stickiness
(1:04:54) The ‘Bitter Lesson’ explained
(1:07:14) The future of software development
(1:23:06) Where languages stand
(1:24:47) Adapting to change
(1:27:32) Steve’s predictions
—
The Pragmatic Engineer deepdives relevant for this episode:
• Vibe coding as a software engineer
• The full circle of developer productivity with Steve Yegge
• AI Tooling for Software Engineers in 2026
• The AI Engineering Stack
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Sonar – The makers of SonarQube, the industry standard for automated code review.
• WorkOS – Everything you need to make your app enterprise ready.
—
Boris Cherny is the creator and Head of Claude Code at Anthropic. He previously spent five years at Meta as a Principal Engineer and is the author of the book Programming TypeScript.

In this episode of The Pragmatic Engineer, we went through how Claude Code was built and what it means when engineers no longer write most of the code themselves.

We discuss how Claude Code evolved from a side project into a core internal tool at Anthropic and how Boris uses it day-to-day. We go deep into workflow details, including parallel agents, PR structure, deterministic review patterns, and how the system retrieves context from large codebases. We also get into how Claude Cowork was built.

As coding becomes more accessible, the role of engineers shifts rather than shrinks. We examine what that shift means in practice, which skills become more important, and why the lines between product, engineering, and design are blurring.
—
Timestamps
(00:00) Intro
(11:15) Lessons from Meta
(19:46) Joining Anthropic
(23:08) The origins of Claude Code
(32:55) Boris's Claude Code workflow
(36:27) Parallel agents
(40:25) Code reviews
(47:18) Claude Code's architecture
(52:38) Permissions and sandboxing
(55:05) Engineering culture at Anthropic
(1:05:15) Claude Cowork
(1:12:48) Observability and privacy
(1:14:45) Agent swarms
(1:21:16) LLMs and the printing press analogy
(1:30:16) Standout engineer archetypes
(1:32:12) What skills still matter for engineers
(1:35:24) Book recommendations
—
The Pragmatic Engineer deepdives relevant for this episode:
• How Claude Code is built
• How Anthropic built Artifacts
• How Codex is built
• Real-world engineering challenges: building Cursor
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Sonar – The makers of SonarQube, the industry standard for automated code review.
• WorkOS – Everything you need to make your app enterprise ready.
—
How has the day-to-day workflow of Mitchell Hashimoto changed, thanks to AI tools?

Mitchell Hashimoto is one of the most influential infrastructure engineers of our time, and is one of the most pragmatic builders I’ve met. He is the co-founder of HashiCorp and creator of Ghostty. In this episode, we talk about how he got into software engineering, the history of HashiCorp, and the challenges of turning widely used open-source tools into a durable business. We also go into what it’s really like to work with AWS, Azure and GCP as a startup.

Mitchell shares how he uses AI these days, and how agents have completely changed how he works. We touch on Ghostty, open source, and what’s changing for software engineers and founders in an AI-native era.
—
Timestamps
(00:00) Intro
(02:03) Mitchell’s path into software engineering
(07:19) The origins of HashiCorp
(15:52) Early cloud computing
(18:22) The 2010s startup scene in SF
(23:11) Funding HashiCorp
(25:23) The Hashi stack
(32:33) Why HashiCorp’s business lagged behind its technology
(35:28) An early failure in commercialization
(38:28) The open-core pivot and path to enterprise profitability
(48:08) Taking HashiCorp public
(51:58) The near VMware acquisition
(59:10) Mitchell’s take on all the cloud providers
(1:06:02) AI’s impact on open source
(1:07:00) Why Mitchell built Ghostty
(1:09:11) Why Mitchell used Zig
(1:10:38) How terminals work and Ghostty’s approach
(1:17:31) AI’s impact on terminals and libghostty
(1:19:13) How Mitchell uses AI
(1:22:02) Ghostty’s evolving AI use policy
(1:28:36) Why open source must change
(1:31:46) The problem of Git in monorepos
(1:36:22) What needs to change to work effectively with AI
(1:39:57) Mitchell’s hiring practices
(1:47:52) Mitchell’s AI adoption journey
(1:50:41) Advice to would-be founders
(1:52:21) Mitchell’s advising work
(1:53:20) What’s changing for software engineers
(1:55:03) How Mitchell recharges
(1:55:50) Book recommendation
—
The Pragmatic Engineer deepdives relevant for this episode:
• AI Engineering in the real world
• The AI Engineering stack
• Pressure on commercial open source to make more money – and HashiCorp changing its license
• How Linux is built with Greg Kroah-Hartman
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Sonar – The makers of SonarQube, the industry standard for automated code review.
• WorkOS – Everything you need to make your app enterprise ready.
—
Andrey Breslav is the creator of Kotlin and the founder of CodeSpeak, a new programming language that aims to reduce boilerplate by replacing trivial code with concise, plain-English descriptions. He led Kotlin’s design at JetBrains through its early releases, shaping both the language and its compiler as Kotlin grew into a core part of the Android ecosystem.

In this episode, we talk about what it takes to design and evolve a programming language in production. We discuss the influences behind Kotlin, the tradeoffs that shaped it, and why interoperability with Java became so central to its success. Andrey also explains why he is building CodeSpeak as a response to growing code complexity in an era of LLM agents, and why he believes keeping humans in control of the software development lifecycle will matter even more as AI becomes more capable.
—
Timestamps
(00:00) Intro
(01:02) Why Kotlin was created
(06:26) Dynamic vs. static languages
(09:27) Andrey joins the Kotlin project
(14:26) Designing a new language
(19:40) Frontend vs. backend in language design
(21:05) Why is it named Kotlin?
(24:37) Kotlin vs. Java tradeoffs
(28:32) Null safety
(31:24) Kotlin’s influences
(39:12) Smartcasts
(40:42) Features Kotlin left out
(44:54) Bidirectional Java interoperability
(55:01) The Kotlin timeline
(58:00) Kotlin’s development process
(1:07:20) From Java to Android developers
(1:12:12) How Android became Kotlin-first
(1:18:20) CodeSpeak: a language for LLMs
(1:24:07) LLMs and new languages
(1:28:20) How software engineering is changing with AI
(1:36:12) Developer tools of the future
(1:39:00) Andrey’s advice for junior engineers and students
(1:42:32) Rapid fire round
—
The Pragmatic Engineer deepdives relevant for this episode:
• Cross-platform mobile development
• How Swift was built – with Chris Lattner, the creator of the language
• Building Reddit’s iOS and Android app
• Notion: going native on iOS and Android
• Is there a drop in native iOS and Android hiring at startups?
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Sonar – The makers of SonarQube, the industry standard for automated code review.
• WorkOS – Everything you need to make your app enterprise ready.
—
Every few decades, software engineering is declared “dead” or on the verge of being automated away. We’ve heard versions of this story before. But what if this is just the start of a new “golden age” of a different type of software engineering, as has happened many times before?

In this episode of The Pragmatic Engineer, I’m joined once again by Grady Booch, one of the most influential figures in the history of software engineering, to put today’s claims about AI and automation into historical context.

Grady is the co-creator of the Unified Modeling Language, author of several books and papers that have shaped modern software development, and Chief Scientist for Software Engineering at IBM, where he focuses on embodied cognition.

Grady shares his perspective on three golden ages of computing since the 1940s, and how each emerged in response to the constraints of its time. He explains how technical limits and human factors have always shaped the systems we build, and why periods of rapid change tend to produce both real progress and inflated expectations.

He also responds to current claims that software engineering will soon be fully automated, explaining why systems thinking, human judgment, and responsibility remain central to the work, even as tools continue to evolve.
—
Timestamps
(00:00) Intro
(01:04) The first golden age of software engineering
(18:05) The software crisis
(32:07) The second golden age of software engineering
(41:27) Y2K and the Dotcom crash
(44:53) Early AI
(46:40) The third golden age of software engineering
(50:54) Why software engineers will very much be needed
(57:52) Grady responds to Dario Amodei
(1:06:00) New skills engineers will need to succeed
(1:09:10) Resources for studying complex systems
(1:13:39) How to thrive during periods of change
—
The Pragmatic Engineer deepdives relevant for this episode:
• When AI writes almost all code, what happens to software engineering?
• Inside a five-year-old startup’s rapid AI makeover
• Software architecture with Grady Booch
• What is old is new again
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Sonar – The makers of SonarQube, the industry standard for automated code review.
• WorkOS – Everything you need to make your app enterprise ready.
—
Peter Steinberger ships more code than I’ve seen from any other single person: in January alone, he was at more than 6,600 commits. As he puts it: “From the commits, it might appear like it’s a company. But it’s not. This is one dude sitting at home having fun.”

How does he do it?

Peter Steinberger is the creator of Clawdbot (as of yesterday: renamed to Moltbot) and founder of PSPDFKit. Moltbot – a work-in-progress AI agent that shows what the future of Siri could be like – is currently the hottest AI project in the tech industry, with more searches on Google than Claude Code or Codex. I sat down with Peter in London to talk about what building software looks like when you go all-in with AI tools like Claude and Codex.

Peter’s background is fascinating. He built and scaled PSPDFKit into a global developer tools business. Then, after a three-year break, he returned to building. This time, LLMs and AI agents sit at the center of his workflow. We discuss what changes when one person can operate like a team, and why closing the loop between code, tests, and feedback becomes a prerequisite for working effectively with AI.

We also go into how engineering judgment shifts with AI, how testing and planning evolve when agents are involved, and which skills and habits are needed to work effectively. This is a grounded conversation about real workflows and real tradeoffs, and about designing systems that can test and improve themselves.
—
Timestamps
(00:00) Intro
(01:07) How Peter got into tech
(08:27) PSPDFKit
(19:14) PSPDFKit’s tech stack and culture
(22:33) Enterprise pricing
(29:42) Burnout
(34:54) Peter finding his spark again
(43:02) Peter’s workflow
(49:10) Managing agents
(54:08) Agentic engineering
(59:01) Testing and debugging
(1:03:49) Why devs struggle with LLM coding
(1:07:20) How PSPDFKit would look if built today
(1:11:10) How planning has changed with AI
(1:21:14) Building Clawdbot (now: Moltbot)
(1:34:22) AI’s impact on large companies
(1:38:38) “I don’t care about CI”
(1:40:01) Peter’s process for new features
(1:44:48) Advice for new grads
(1:50:18) Rapid fire round
—
The Pragmatic Engineer deepdives relevant for this episode:
• Inside a five-year-old startup’s rapid AI makeover
• When AI writes almost all code, what happens to software engineering?
• Why it’s so dramatic that “writing code by hand is dead”
• AI Engineering in the real world
• The AI Engineering stack
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
How AWS S3 is built

2026-01-21 · 01:18:14

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Sonar – The makers of SonarQube, the industry standard for automated code review.
• WorkOS – Everything you need to make your app enterprise ready.
—
Amazon S3 is one of the largest distributed systems ever built, storing and serving data for a significant portion of the internet. Behind its simple interfaces hides an enormous amount of engineering work, careful tradeoffs, and long-term thinking.

In this episode, I sit down with Mai-Lan Tomsen Bukovec, VP of Data and Analytics at AWS, who has been running Amazon S3 for more than a decade. Mai-Lan shares how S3 operates at extreme scale, what it takes to design for durability and availability across millions of servers, and why building for failure is a core principle.

We also go deep into how AWS approaches correctness using formal methods, how storage tiers and limits shape system design, and why simplicity remains one of the hardest and most important goals at S3’s scale.
—
Timestamps
(00:00) Intro
(01:03) S3’s scale
(03:58) How S3 started
(07:25) Parquet, Iceberg, and S3 tables
(09:46) S3 for developers
(13:37) Why AWS keeps S3 prices low
(17:10) AWS pricing tiers
(19:38) Availability and durability
(26:21) The cost of S3’s consistency
(31:22) Automated reasoning and proof of correctness
(35:14) Durability at AWS scale
(39:58) Correlated failure and crash consistency
(43:22) Failure allowances
(46:04) Two opposing principles in S3 design
(49:09) S3’s evolution
(52:21) S3 Vectors
(1:01:16) The 50 TB limit on AWS
(1:07:54) The simplicity principle
(1:10:10) Types of engineers working on S3
(1:14:15) Closing recommendations
—
The Pragmatic Engineer deepdives relevant for this episode:
• Inside Amazon’s engineering culture
• How AWS deals with a major outage
• A Day in the Life of a Senior Manager at Amazon
• What is a Principal Engineer at Amazon? – with Steve Huynh
• Working at Amazon as a software engineer – with Dave Anderson

Amazon papers recommended by Mai-Lan:
• Using lightweight formal methods to validate a key-value storage node in Amazon S3
• Formally verified cloud-scale authorization
• Analyzing metastable failures
• Amazon’s engineering tenets
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Linear — The system for modern product development.
—
How have servers and the cloud evolved in the last 30 years, and what might be next? Bryan Cantrill was a distinguished engineer at Sun Microsystems during both the Dotcom Boom and the Dotcom Bust. Today, he is the co-founder and CTO of Oxide Computer, where he works on modern server infrastructure.

In this episode of The Pragmatic Engineer, Bryan joins me to break down how modern computing infrastructure evolved. We discuss why the Dotcom Bust produced deeper innovation than the Boom, how constraints shape better systems, and what the rise of the cloud changed and did not change about building reliable infrastructure.

Our conversation covers early web infrastructure at Sun, the emergence of AWS, Kubernetes and cloud neutrality, and the tradeoffs between renting cloud space and building your own. We also touch on the complexity of server-side software updates, experimenting with AI, the limits of large language models, and how engineering organizations scale without losing their values.

If you want a systems-level perspective on computing that connects past cycles to today’s engineering decisions, this episode offers a rare long-range view.
—
Timestamps
(00:00) Intro
(01:26) Computer science in the 1990s
(03:01) Sun and Cisco’s web dominance
(05:41) The Dotcom Boom
(10:26) From Boom to Bust
(15:32) The innovations of the Bust
(17:50) The open source shift
(22:00) Oracle moves into Sun’s orbit
(24:54) AWS dominance (2010–2014)
(28:15) Kubernetes and cloud neutrality
(30:58) Custom infrastructure
(36:10) Renting the cloud vs. buying hardware
(45:28) Designing a computer from first principles
(50:02) Why everyone is paid the same salary at Oxide
(54:14) Oxide’s software stack
(58:33) The evolution of software updates
(1:02:55) How Oxide uses AI
(1:06:05) The limitations of LLMs
(1:11:44) AI use and experimentation at Oxide
(1:17:45) Oxide’s diverse teams
(1:22:44) Remote work at Oxide
(1:24:11) Scaling company values
(1:27:36) AI’s impact on the future of engineering
(1:31:04) Bryan’s advice for junior engineers
(1:34:01) Book recommendations
—
The Pragmatic Engineer deepdives relevant for this episode:
• Startups on hard mode: Oxide. Part 1: Hardware
• Startups on hard mode: Oxide, Part 2: Software & Culture
• Three cloud providers, three outages: three different responses
• Inside Uber’s move to the Cloud
• Inside Agoda’s private Cloud
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Linear — The system for modern product development.
—
Michelle Lim joined Warp as engineer number one and is now building her own startup, Flint. She brings a strong product-first mindset shaped by her time at Facebook, Slack, Robinhood, and Warp. Michelle shares why she chose Warp over safer offers, how she evaluates early-stage opportunities, and what she believes distinguishes great founding engineers.

Together, we cover how product-first engineers create value, why negotiating equity at early-stage startups requires a different approach, and why asking founders for references is a smart move. Michelle also shares lessons from building consumer and infrastructure products, how she thinks about tech stack choices, and how engineers can increase their impact by taking on work outside their job descriptions.

If you want to understand what founders look for in early engineers, or how to grow into a founding-engineer role, this episode is full of practical advice backed by real examples.
—
Timestamps
(00:00) Intro
(01:32) How Michelle got into software engineering
(03:30) Michelle’s internships
(06:19) Learnings from Slack
(08:48) Product learnings at Robinhood
(12:47) Joining Warp as engineer #1
(22:01) Negotiating equity
(26:04) Asking founders for references
(27:36) The top reference questions to ask
(32:53) The evolution of Warp’s tech stack
(35:38) Product-first engineering vs. code-first
(38:27) Hiring product-first engineers
(41:49) Different types of founding engineers
(44:42) How Flint uses AI tools
(45:31) Avoiding getting burned in founder exits
(49:26) Hiring top talent
(50:15) An overview of Flint
(56:08) Advice for aspiring founding engineers
(1:01:05) Rapid fire round
—
The Pragmatic Engineer deepdives relevant for this episode:
• Thriving as a founding engineer: lessons from the trenches
• From software engineer to AI engineer
• AI Engineering in the real world
• The AI Engineering stack
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Statsig are helping make the first-ever Pragmatic Summit a reality. Join me and 400 other top engineers and leaders on 11 February in San Francisco for a special one-day event. Reserve your spot here.
• Linear — The system for modern product development. Engineering teams today move much faster, thanks to AI. Because of this, coordination increasingly becomes a problem. This is where Linear helps fast-moving teams stay focused. Check out Linear.
—
As software engineers, what should we know about writing secure code?

Johannes Dahse is the VP of Code Security at Sonar and a security expert with 20 years of industry experience. In today’s episode of The Pragmatic Engineer, he joins me to talk about what security teams actually do, what developers should own, and where real-world risk enters modern codebases.

We cover dependency risk, software composition analysis, CVEs, dynamic testing, and how everyday development practices affect security outcomes. Johannes also explains where AI meaningfully helps, where it introduces new failure modes, and why understanding the code you write and ship remains the most reliable defense.

If you build and ship software, this episode is a practical guide to thinking about code security under real-world engineering constraints.
—
Timestamps
(00:00) Intro
(02:31) What is penetration testing?
(06:23) Who owns code security: devs or security teams?
(14:42) What is code security?
(17:10) Code security basics for devs
(21:35) Advanced security challenges
(24:36) SCA testing
(25:26) The CVE Program
(29:39) The State of Code Security report
(32:02) Code quality vs. security
(35:20) Dev machines as a security vulnerability
(37:29) Common security tools
(42:50) Dynamic security tools
(45:01) AI security reviews: what are the limits?
(47:51) AI-generated code risks
(49:21) More code: more vulnerabilities
(51:44) AI’s impact on code security
(58:32) Common misconceptions of the security industry
(1:03:05) When is security “good enough”?
(1:05:40) Johannes’s favorite programming language
—
The Pragmatic Engineer deepdives relevant for this episode:
• What is Security Engineering?
• Mishandled security vulnerability in Next.js
• Okta Schooled on Its Security Practices
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. AI-accelerated development isn’t just about shipping faster: it’s about measuring whether what you ship actually delivers value. This is where modern experimentation with Statsig comes in. Check it out.
• Linear — The system for modern product development. I had a jaw-dropping experience when I dropped in for the weekly “Quality Wednesdays” meeting at Linear. Every week, every dev fixes at least one quality issue, large or small. Even if it’s one pixel misalignment, like this one. I’ve yet to see a team obsess this much about quality. Read more about how Linear does Quality Wednesdays – it’s fascinating!
—
Martin Fowler is one of the most influential people in software architecture and the broader tech industry. He is the Chief Scientist at Thoughtworks and the author of Refactoring, Patterns of Enterprise Application Architecture, and several other books. He has spent decades shaping how engineers think about design, architecture, and process, and regularly publishes on his blog, MartinFowler.com.

In this episode, we discuss how AI is changing software development: the shift from deterministic to non-deterministic coding; where generative models help with legacy code; and the narrow but useful cases for vibe coding. Martin explains why LLM output must be tested rigorously, why refactoring is more important than ever, and how combining AI tools with deterministic techniques may be what engineering teams need.

We also revisit the origins of the Agile Manifesto and talk about why, despite rapid changes in tooling and workflows, the skills that make a great engineer remain largely unchanged.
—
Timestamps
(00:00) Intro
(01:50) How Martin got into software engineering
(07:48) Joining Thoughtworks
(10:07) The Thoughtworks Technology Radar
(16:45) From Assembly to high-level languages
(25:08) Non-determinism
(33:38) Vibe coding
(39:22) StackOverflow vs. coding with AI
(43:25) Importance of testing with LLMs
(50:45) LLMs for enterprise software
(56:38) Why Martin wrote Refactoring
(1:02:15) Why refactoring is so relevant today
(1:06:10) Using LLMs with deterministic tools
(1:07:36) Patterns of Enterprise Application Architecture
(1:18:26) The Agile Manifesto
(1:28:35) How Martin learns about AI
(1:34:58) Advice for junior engineers
(1:37:44) The state of the tech industry today
(1:42:40) Rapid fire round
—
The Pragmatic Engineer deepdives relevant for this episode:
• Vibe coding as a software engineer
• The AI Engineering stack
• AI Engineering in the real world
• What changed in 50 years of computing
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Statsig enables two cultures at once: continuous shipping and experimentation. Companies like Notion went from single-digit experiments per quarter to over 300 experiments with Statsig. Start using Statsig with a generous free tier, and a $50K startup program.
• Linear — The system for modern product development. When most companies hit real scale, they start to slow down, and are faced with “process debt.” This often hits software engineers the most. Companies switch to Linear to hit a hard reset on this process debt – companies like Scale cut their bug resolution time in half after the switch. Check out Linear’s migration guide for details.
—
What’s it like to work as a software engineer inside one of the world’s biggest streaming companies?

In this special episode recorded at Netflix’s headquarters in Los Gatos, I sit down with Elizabeth Stone, Netflix’s Chief Technology Officer. Before becoming CTO, Elizabeth led data and insights at Netflix and was VP of Science at Lyft. She brings a rare mix of technical depth, product thinking, and people leadership.

We discuss what it means to be “unusually responsible” at Netflix, how engineers make decisions without layers of approval, and how the company balances autonomy with guardrails for high-stakes projects like Netflix Live. Elizabeth shares how teams self-reflect and learn from outages and failures, why Netflix doesn’t do formal performance reviews, and what new grads bring to a company known for hiring experienced engineers.

This episode offers a rare inside look at how Netflix engineers build, learn, and lead at a global scale.
—
Timestamps
(00:00) Intro
(01:44) The scale of Netflix
(03:31) Production software stack
(05:20) Engineering challenges in production
(06:38) How the Open Connect delivery network works
(08:30) From pitch to play
(11:31) How Netflix enables engineers to make decisions
(13:26) Building Netflix Live for global sports
(16:25) Learnings from Paul vs. Tyson for NFL Live
(17:47) Inside the control room
(20:35) What being unusually responsible looks like
(24:15) Balancing team autonomy with guardrails for Live
(30:55) The high talent bar and introduction of levels at Netflix
(36:01) The Keeper Test
(41:27) Why engineers leave or stay
(44:27) How AI tools are used at Netflix
(47:54) AI’s highest-impact use cases
(50:20) What new grads add and why senior talent still matters
(53:25) Open source at Netflix
(57:07) Elizabeth’s parting advice for new engineers to succeed at Netflix
—
The Pragmatic Engineer deepdives relevant for this episode:
• The end of the senior-only level at Netflix
• Netflix revamps its compensation philosophy
• Live streaming at world-record scale with Ashutosh Agrawal
• Shipping to production
• What is good software architecture?
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Companies like Graphite, Notion, and Brex rely on Statsig to measure the impact of the pace they ship. Get a 30-day enterprise trial here.
• Linear — The system for modern product development. Linear is a heavy user of Swift: they just redesigned their native iOS app using their own take on Apple’s Liquid Glass design language. The new app is about speed and performance – just like Linear is. Check it out.
—
Chris Lattner is one of the most influential engineers of the past two decades. He created the LLVM compiler infrastructure and the Swift programming language – and Swift opened iOS development to a broader group of engineers. With Mojo, he’s now aiming to do the same for AI, by lowering the barrier to programming AI applications.

I sat down with Chris in San Francisco to talk language design, lessons from designing Swift and Mojo, and – of course! – compilers. It’s hard to find someone who is as enthusiastic and knowledgeable about compilers as Chris is!

We also discussed why experts often resist change even when current tools slow them down, what he learned about AI and hardware from his time across both large and small engineering teams, and why compiler engineering remains one of the best ways to understand how software really works.
—
Timestamps
(00:00) Intro
(02:35) Compilers in the early 2000s
(04:48) Why Chris built LLVM
(08:24) GCC vs. LLVM
(09:47) LLVM at Apple
(19:25) How Chris got support to go open source at Apple
(20:28) The story of Swift
(24:32) The process for designing a language
(31:00) Learnings from launching Swift
(35:48) Swift Playgrounds: making coding accessible
(40:23) What Swift solved and the technical debt it created
(47:28) AI learnings from Google and Tesla
(51:23) SiFive: learning about hardware engineering
(52:24) Mojo’s origin story
(57:15) Modular’s bet on a two-level stack
(1:01:49) Compiler shortcomings
(1:09:11) Getting started with Mojo
(1:15:44) How big is Modular, as a company?
(1:19:00) AI coding tools the Modular team uses
(1:22:59) What kind of software engineers Modular hires
(1:25:22) A programming language for LLMs? No thanks
(1:29:06) Why you should study and understand compilers
—
The Pragmatic Engineer deepdives relevant for this episode:
• AI Engineering in the real world
• The AI Engineering stack
• Uber's crazy YOLO app rewrite, from the front seat
• Python, Go, Rust, TypeScript and AI with Armin Ronacher
• Microsoft’s developer tools roots
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:

• Statsig — The unified platform for flags, analytics, experiments, and more.
• Linear – The system for modern product development.

—

Addy Osmani is Head of Chrome Developer Experience at Google, where he leads teams focused on improving performance, tooling, and the overall developer experience for building on the web. If you've ever opened Chrome's Developer Tools, you've definitely used features Addy has built. He's also the author of several books, including his latest, Beyond Vibe Coding, which explores how AI is changing software development.

In this episode of The Pragmatic Engineer, I sit down with Addy to discuss how AI is reshaping software engineering workflows, the tradeoffs between speed and quality, and why understanding generated code remains critical. We dive into his article The 70% Problem, which explains why AI tools accelerate development but struggle with the final 30% of software quality – and why this last 30% is best tackled by software engineers who understand how the system actually works.

—

Timestamps

(00:00) Intro
(02:17) Vibe coding vs. AI-assisted engineering
(06:07) How Addy uses AI tools
(13:10) Addy's learnings about applying AI for development
(18:47) Addy's favorite tools
(22:15) The 70% Problem
(28:15) Tactics for efficient LLM usage
(32:58) How AI tools evolved
(34:29) The case for keeping expectations low and control high
(38:05) Autonomous agents and working with them
(42:49) How the EM and PM role changes with AI
(47:14) The rise of new roles and shifts in developer education
(48:11) The importance of critical thinking when working with AI
(54:08) LLMs as a tool for learning
(1:03:50) Rapid questions

—

The Pragmatic Engineer deepdives relevant for this episode:

• Vibe Coding as a software engineer
• How AI-assisted coding will change software engineering: hard truths
• AI Engineering in the real world
• The AI Engineering stack
• How Claude Code is built

—

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:

• Statsig — The unified platform for flags, analytics, experiments, and more. Something interesting is happening with the latest generation of tech giants. Rather than building advanced experimentation tools themselves, companies like Anthropic, Figma, Notion and a bunch of others… are just using Statsig. Statsig has rebuilt this entire suite of data tools that was available at maybe 10 or 15 giants until now. Check out Statsig.
• Linear – The system for modern product development. Linear is just so fast to use – and it enables velocity in product workflows. Companies like Perplexity and OpenAI have already switched over, because simplicity scales. Go ahead and check out Linear and see why it feels like a breeze to use.

—

What is it really like to be an engineer at Google?

In this special deep dive episode, we unpack how engineering at Google actually works. We spent months researching the engineering culture of the search giant, and talked with 20+ current and former Googlers to bring you this deepdive with Elin Nilsson, tech industry researcher for The Pragmatic Engineer and a former Google intern.

Google has always been an engineering-driven organization. We talk about its custom stack and tools, the design-doc culture, and the performance and promotion systems that define career growth. We also explore the culture that feels built for engineers: generous perks, a surprisingly light on-call setup often considered the best in the industry, and a deep focus on solving technical problems at scale.

If you are thinking about applying to Google or are curious about how the company's engineering culture has evolved, this episode takes a clear look at what it was like to work at Google in the past versus today, and who is a good fit for today's Google.

Jump to interesting parts:

(13:50) Tech stack
(1:05:08) Performance reviews (GRAD)
(2:07:03) The culture of continuously rewriting things

—

Timestamps

(00:00) Intro
(01:44) Stats about Google
(11:41) The shared culture across Google
(13:50) Tech stack
(34:33) Internal developer tools and monorepo
(43:17) The downsides of having so many internal tools at Google
(45:29) Perks
(55:37) Engineering roles
(1:02:32) Levels at Google
(1:05:08) Performance reviews (GRAD)
(1:13:05) Readability
(1:16:18) Promotions
(1:25:46) Design docs
(1:32:30) OKRs
(1:44:43) Googlers, Nooglers, ReGooglers
(1:57:27) Google Cloud
(2:03:49) Internal transfers
(2:07:03) Rewrites
(2:10:19) Open source
(2:14:57) Culture shift
(2:31:10) Making the most of Google, as an engineer
(2:39:25) Landing a job at Google

—

The Pragmatic Engineer deepdives relevant for this episode:

• Inside Google's engineering culture
• Oncall at Google
• Performance calibrations at tech companies
• Promotions and tooling at Google
• How Kubernetes is built
• The man behind the Big Tech comics: Google cartoonist Manu Cornet

—

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:

• Statsig — The unified platform for flags, analytics, experiments, and more. Most teams end up in this situation: ship a feature to 10% of users, wait a week, check three different tools, try to correlate the data, and you're still unsure if it worked. The problem is that each tool has its own user identification and segmentation logic. Statsig solved this problem by building everything within a unified platform. Check out Statsig.
• Linear – The system for modern product development. In the episode, Armin talks about how he uses an army of "AI interns" at his startup. With Linear, you can easily do the same: Linear's Cursor integration lets you add Cursor as an agent to your workspace. This agent then works alongside you and your team to make code changes or answer questions. You've got to try it out: give Linear a spin and see how it integrates with Cursor.

—

Armin Ronacher is the creator of the Flask framework for Python, was one of the first engineers hired at Sentry, and is now the co-founder of a new startup. He has spent his career thinking deeply about how tools shape the way we build software.

In this episode of The Pragmatic Engineer Podcast, he joins me to talk about how programming languages compare, why Rust may not be ideal for early-stage startups, and how AI tools are transforming the way engineers work. Armin shares his view on what continues to make certain languages worth learning, and how agentic coding is driving people to work more, sometimes to their own detriment.

We also discuss:

• Why the Python 2 to 3 migration was more challenging than expected
• How Python, Go, Rust, and TypeScript stack up for different kinds of work
• How AI tools are changing the need for unified codebases
• What Armin learned about error handling from his time at Sentry
• And much more

Jump to interesting parts:

• (06:53) How Python, Go, and Rust stack up and when to use each one
• (30:08) Why Armin has changed his mind about AI tools
• (50:32) How important are language choices from an error-handling perspective?

—

Timestamps

(00:00) Intro
(01:34) Why the Python 2 to 3 migration created so many challenges
(06:53) How Python, Go, and Rust stack up and when to use each one
(08:35) The friction points that make Rust a bad fit for startups
(12:28) How Armin thinks about choosing a language for building a startup
(22:33) How AI is impacting the need for unified codebases
(24:19) The use cases where AI coding tools excel
(30:08) Why Armin has changed his mind about AI tools
(38:04) Why different programming languages still matter but may not in an AI-driven future
(42:13) Why agentic coding is driving people to work more and why that's not always good
(47:41) Armin's error-handling takeaways from working at Sentry
(50:32) How important is language choice from an error-handling perspective
(56:02) Why the current SDLC still doesn't prioritize error handling
(1:04:18) The challenges language designers face
(1:05:40) What Armin learned from working in startups and who thrives in that environment
(1:11:39) Rapid fire round

—

The Pragmatic Engineer deepdives relevant for this episode:

—

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:

• Statsig — The unified platform for flags, analytics, experiments, and more. Statsig built a complete set of data tools that allow engineering teams to measure the impact of their work. This toolkit is so valuable to so many teams that OpenAI (a huge user of Statsig) decided to acquire the company, with the news announced last week. Talk about validation! Check out Statsig.
• Linear – The system for modern product development. Here's an interesting story: OpenAI switched to Linear as a way to establish a shared vocabulary between teams. Every project now follows the same lifecycle, uses the same labels, and moves through the same states. Try Linear for yourself.

—

What does it take to do well at a hyper-growth company? In this episode of The Pragmatic Engineer, I sit down with Charles-Axel Dein, one of the first engineers at Uber, who later hired me there. Since then, he's gone on to work at CloudKitchens. He's also been maintaining the popular Professional programming reading list GitHub repo for 15 years, where he collects articles that made him a better programmer.

In our conversation, we dig into what it's really like to work inside companies that grow rapidly in scale and headcount. Charles shares what he's learned about personal productivity, project management, incidents, and interviewing, plus how to build flexible skills that hold up in fast-moving environments.

Jump to interesting parts:

• 10:41 – the reality of working inside a hyperscale company
• 41:10 – the traits of high-performing engineers
• 1:03:31 – Charles's advice for getting hired in today's job market

We also discuss:

• How to spot the signs of hypergrowth (and when it's slowing down)
• What sets high-performing engineers apart beyond shipping
• Charles's personal productivity tips, favorite reads, and how he uses reading to uplevel his skills
• Strategic tips for building your resume and interviewing
• How imposter syndrome is normal, and how leaning into it helps you grow
• And much more!

If you're at a fast-growing company, considering joining one, or looking to land your next role, you won't want to miss this practical advice on hiring, interviewing, productivity, leadership, and career growth.

—

Timestamps

(00:00) Intro
(04:04) Early days at Uber as engineer #20
(08:12) CloudKitchens' similarities with Uber
(10:41) The reality of working at a hyperscale company
(19:05) Tenancies and how Uber deployed new features
(22:14) How CloudKitchens handles incidents
(26:57) Hiring during fast-growth
(34:09) Avoiding burnout
(38:55) The popular Professional programming reading list repo
(41:10) The traits of high-performing engineers
(53:22) Project management tactics
(1:03:31) How to get hired as a software engineer
(1:12:26) How AI is changing hiring
(1:19:26) Unexpected ways to thrive in fast-paced environments
(1:20:45) Dealing with imposter syndrome
(1:22:48) Book recommendations
(1:27:26) The problem with survival bias
(1:32:44) AI's impact on software development
(1:42:28) Rapid fire round

—

The Pragmatic Engineer deepdives relevant for this episode:

• Software engineers leading projects
• The Platform and Program split at Uber
• Inside Uber's move to the Cloud
• How Uber built its observability platform
• From Software Engineer to AI Engineer – with Janvi Kalra

—

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:

• Statsig — The unified platform for flags, analytics, experiments, and more. Statsig built a complete set of data tools that allow engineering teams to measure the impact of their work. This toolkit is so valuable to so many teams that OpenAI (a huge user of Statsig) decided to acquire the company, with the news announced last week. Talk about validation! Check out Statsig.
• Linear – The system for modern product development. Here's an interesting story: OpenAI switched to Linear as a way to establish a shared vocabulary between teams. Every project now follows the same lifecycle, uses the same labels, and moves through the same states. Try Linear for yourself.

—

The Pragmatic Engineer Podcast is back with the Fall 2025 season. Looking ahead, expect new episodes to be published on most Wednesdays.

Code Complete is one of the most enduring books on software engineering. Steve McConnell wrote the 900-page handbook just five years into his career, capturing what he wished he'd known when starting out. Decades later, the lessons remain relevant, and Code Complete remains a best-seller.

In this episode, we talk about what has aged well, what needed updating in the second edition, and the broader career principles Steve has developed along the way. From his "career pyramid" model to his critique of "lily pad hopping," and why periods of working in fast-paced, all-in environments can be so rewarding, the emphasis throughout is on taking ownership of your career and making deliberate choices.

We also discuss:

• Top-down vs. bottom-up design and why most engineers default to one approach
• Why rewriting code multiple times makes it better
• How taking a year off to write Code Complete crystallized key lessons
• The 3 areas software designers need to understand, and why focusing only on technology may be the most limiting
• And much more!

Steve rarely gives interviews, so I hope you enjoy this conversation, which we recorded in Seattle.

—

Timestamps

(00:00) Intro
(01:31) How and why Steve wrote Code Complete
(08:08) What code construction is and how it differs from software development
(11:12) Top-down vs. bottom-up design approach
(14:46) Why design documents frustrate some engineers
(16:50) The case for rewriting everything three times
(20:15) Steve's career before and after Code Complete
(27:47) Steve's career advice
(44:38) Three areas software designers need to understand
(48:07) Advice when becoming a manager, as a developer
(53:02) The importance of managing your energy
(57:07) Early Microsoft and why startups are a culture of intense focus
(1:04:14) What changed in the second edition of Code Complete
(1:10:50) AI's impact on software development: Steve's take
(1:17:45) Code reviews and GenAI
(1:19:58) Why engineers are becoming more full-stack
(1:21:40) Could AI be the exception to "no silver bullets?"
(1:26:31) Steve's advice for engineers on building a meaningful career

—

The Pragmatic Engineer deepdives relevant for this episode:

• What changed in 50 years of computing
• The past and future of modern backend practices
• The Philosophy of Software Design – with John Ousterhout
• AI tools for software engineers, but without the hype – with Simon Willison (co-creator of Django)
• TDD, AI agents and coding – with Kent Beck

—

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:

• WorkOS — The modern identity platform for B2B SaaS.
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Sonar — Code quality and code security for ALL code.

—

In this episode of The Pragmatic Engineer, I sit down with Peter Walker, Head of Insights at Carta, to break down how venture capital and startups themselves are changing.

We go deep on the numbers: why fewer companies are getting funded despite record VC investment levels, how hiring has shifted dramatically since 2021, and why solo founders are on the rise even though most VCs still prefer teams. We also unpack the growing emphasis on ARR per FTE, what actually happens in bridge and down rounds, and why the time between fundraising rounds has stretched far beyond the old 18-month cycle.

We cover what all this means for engineers: what to ask before joining a startup, how to interpret valuation trends, and what kind of advisor roles startups are actually looking for.

If you work at a startup, are considering joining one, or just want a clearer picture of how venture-backed companies operate today, this episode is for you.

—

Timestamps

(00:00) Intro
(01:21) How venture capital works and the goal of VC-backed startups
(03:10) Venture vs. non-venture backed businesses
(05:59) Why venture-backed companies prioritize growth over profitability
(09:46) A look at the current health of venture capital
(13:19) The hiring slowdown at startups
(16:00) ARR per FTE: The new metric VCs care about
(21:50) Priced seed rounds vs. SAFEs
(24:48) Why some founders are incentivized to raise at high valuations
(29:31) What a bridge round is and why it can signal trouble
(33:15) Down rounds and how optics can make or break startups
(36:47) Why working at startups offers more ownership and learning
(37:47) What the data shows about raising money in the summer
(41:45) The length of time it takes to close a VC deal
(44:29) How AI is reshaping startup formation, team size, and funding trends
(48:11) Why VCs don't like solo founders
(50:06) How employee equity (ESOPs) work
(53:50) Why acquisition payouts are often smaller than employees expect
(55:06) Deep tech vs. software startups
(57:25) Startup advisors: What they do, how much equity they get
(1:02:08) Why time between rounds is increasing and what that means
(1:03:57) Why it's getting harder to get from Seed to Series A
(1:06:47) A case for quitting (sometimes)
(1:11:40) How to evaluate a startup before joining as an engineer
(1:13:22) The skills engineers need to thrive in a startup environment
(1:16:04) Rapid fire round

—

The Pragmatic Engineer deepdives relevant for this episode:

—

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

—

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Supported by Our Partners

• Statsig — The unified platform for flags, analytics, experiments, and more.
• Graphite — The AI developer productivity platform.

—

There's no shortage of bold claims about AI and developer productivity, but how do you separate signal from noise?

In this episode of The Pragmatic Engineer, I'm joined by Laura Tacho, CTO at DX, to cut through the hype and share how well (or not) AI tools are actually working inside engineering orgs. Laura shares insights from DX's research across 180+ companies, including surprising findings about where developers save the most time, why some devs don't use AI at all, and what kinds of rollouts lead to meaningful impact.

We also discuss:

• The problem with oversimplified AI headlines and how to think more critically about them
• An overview of the DX AI Measurement framework
• Learnings from Booking.com's AI tool rollout
• Common reasons developers aren't using AI tools
• Why using AI tools sometimes decreases developer satisfaction
• Surprising results from DX's 180+ company study
• How AI-generated documentation differs from human-written docs
• Why measuring developer experience before rolling out AI is essential
• Why Laura thinks roadmaps are on their way out
• And much more!

—

Timestamps

(00:00) Intro
(01:23) Laura's take on overhyped AI headlines
(10:46) Common questions Laura gets about AI implementation
(11:49) How to measure AI's impact
(15:12) Why acceptance rate and lines of code are not sufficient measures of productivity
(18:03) The Booking.com case study
(20:37) Why some employees are not using AI
(24:20) What developers are actually saving time on
(29:14) What happens with the time savings
(31:10) The surprising results from the DORA report on AI in engineering
(33:44) A hypothesis around AI and flow state and the importance of talking to developers
(35:59) What's working in AI architecture
(42:22) Learnings from WorkHuman's adoption of Copilot
(47:00) Consumption-based pricing, and the difficulty of allocating resources to AI
(52:01) What DX Core 4 measures
(55:32) The best outcomes of implementing AI
(58:56) Why highly regulated industries are having the best results with AI rollout
(1:00:30) Indeed's structured AI rollout
(1:04:22) Why migrations might be a good use case for AI (and a tip for doing it!)
(1:07:30) Advice for engineering leads looking to get better at AI tooling and implementation
(1:08:49) Rapid fire round

—

The Pragmatic Engineer deepdives relevant for this episode:

• AI Engineering in the real world
• Measuring software engineering productivity
• The AI Engineering stack
• A new way to measure developer productivity – from the creators of DORA and SPACE

—

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

—

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe