The Pragmatic Engineer
Author: Gergely Orosz
© Gergely Orosz
Description
Software engineering at Big Tech and startups, from the inside. Deepdives with experienced engineers and tech professionals who share their hard-earned lessons, interesting stories, and advice on building software.
Especially relevant for software engineers and engineering leaders: useful for those working in tech.
newsletter.pragmaticengineer.com
46 Episodes
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Statsig is helping make the first-ever Pragmatic Summit a reality. Join me and 400 other top engineers and leaders on 11 February in San Francisco for a special one-day event. Reserve your spot here.
• Linear — The system for modern product development. Engineering teams today move much faster, thanks to AI. Because of this, coordination increasingly becomes a problem. This is where Linear helps fast-moving teams stay focused. Check out Linear.
—
As software engineers, what should we know about writing secure code?
Johannes Dahse is the VP of Code Security at Sonar and a security expert with 20 years of industry experience. In today’s episode of The Pragmatic Engineer, he joins me to talk about what security teams actually do, what developers should own, and where real-world risk enters modern codebases.
We cover dependency risk, software composition analysis, CVEs, dynamic testing, and how everyday development practices affect security outcomes. Johannes also explains where AI meaningfully helps, where it introduces new failure modes, and why understanding the code you write and ship remains the most reliable defense.
If you build and ship software, this episode is a practical guide to thinking about code security under real-world engineering constraints.
—
Timestamps
(00:00) Intro
(02:31) What is penetration testing?
(06:23) Who owns code security: devs or security teams?
(14:42) What is code security?
(17:10) Code security basics for devs
(21:35) Advanced security challenges
(24:36) SCA testing
(25:26) The CVE Program
(29:39) The State of Code Security report
(32:02) Code quality vs security
(35:20) Dev machines as a security vulnerability
(37:29) Common security tools
(42:50) Dynamic security tools
(45:01) AI security reviews: what are the limits?
(47:51) AI-generated code risks
(49:21) More code: more vulnerabilities
(51:44) AI’s impact on code security
(58:32) Common misconceptions of the security industry
(1:03:05) When is security “good enough?”
(1:05:40) Johannes’s favorite programming language
—
The Pragmatic Engineer deepdives relevant for this episode:
• What is Security Engineering?
• Mishandled security vulnerability in Next.js
• Okta Schooled on Its Security Practices
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. AI-accelerated development isn’t just about shipping faster: it’s about measuring whether what you ship actually delivers value. This is where modern experimentation with Statsig comes in. Check it out.
• Linear — The system for modern product development. I had a jaw-dropping experience when I dropped in for the weekly “Quality Wednesdays” meeting at Linear. Every week, every dev fixes at least one quality issue, large or small. Even if it’s a one-pixel misalignment, like this one. I’ve yet to see a team obsess this much about quality. Read more about how Linear does Quality Wednesdays – it’s fascinating!
—
Martin Fowler is one of the most influential people in software architecture and the broader tech industry. He is the Chief Scientist at Thoughtworks and the author of Refactoring, Patterns of Enterprise Application Architecture, and several other books. He has spent decades shaping how engineers think about design, architecture, and process, and regularly publishes on his blog, MartinFowler.com.
In this episode, we discuss how AI is changing software development: the shift from deterministic to non-deterministic coding; where generative models help with legacy code; and the narrow but useful cases for vibe coding. Martin explains why LLM output must be tested rigorously, why refactoring is more important than ever, and how combining AI tools with deterministic techniques may be what engineering teams need.
We also revisit the origins of the Agile Manifesto and talk about why, despite rapid changes in tooling and workflows, the skills that make a great engineer remain largely unchanged.
—
Timestamps
(00:00) Intro
(01:50) How Martin got into software engineering
(07:48) Joining Thoughtworks
(10:07) The Thoughtworks Technology Radar
(16:45) From Assembly to high-level languages
(25:08) Non-determinism
(33:38) Vibe coding
(39:22) StackOverflow vs. coding with AI
(43:25) Importance of testing with LLMs
(50:45) LLMs for enterprise software
(56:38) Why Martin wrote Refactoring
(1:02:15) Why refactoring is so relevant today
(1:06:10) Using LLMs with deterministic tools
(1:07:36) Patterns of Enterprise Application Architecture
(1:18:26) The Agile Manifesto
(1:28:35) How Martin learns about AI
(1:34:58) Advice for junior engineers
(1:37:44) The state of the tech industry today
(1:42:40) Rapid fire round
—
The Pragmatic Engineer deepdives relevant for this episode:
• Vibe coding as a software engineer
• The AI Engineering stack
• AI Engineering in the real world
• What changed in 50 years of computing
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Statsig enables two cultures at once: continuous shipping and experimentation. Companies like Notion went from single-digit experiments per quarter to over 300 experiments with Statsig. Start using Statsig with a generous free tier and a $50K startup program.
• Linear — The system for modern product development. When most companies hit real scale, they start to slow down and are faced with “process debt.” This often hits software engineers the most. Companies switch to Linear to hit a hard reset on this process debt – companies like Scale cut their bug resolution times in half after the switch. Check out Linear’s migration guide for details.
—
What’s it like to work as a software engineer inside one of the world’s biggest streaming companies?
In this special episode recorded at Netflix’s headquarters in Los Gatos, I sit down with Elizabeth Stone, Netflix’s Chief Technology Officer. Before becoming CTO, Elizabeth led data and insights at Netflix and was VP of Science at Lyft. She brings a rare mix of technical depth, product thinking, and people leadership.
We discuss what it means to be “unusually responsible” at Netflix, how engineers make decisions without layers of approval, and how the company balances autonomy with guardrails for high-stakes projects like Netflix Live. Elizabeth shares how teams self-reflect and learn from outages and failures, why Netflix doesn’t do formal performance reviews, and what new grads bring to a company known for hiring experienced engineers.
This episode offers a rare inside look at how Netflix engineers build, learn, and lead at a global scale.
—
Timestamps
(00:00) Intro
(01:44) The scale of Netflix
(03:31) Production software stack
(05:20) Engineering challenges in production
(06:38) How the Open Connect delivery network works
(08:30) From pitch to play
(11:31) How Netflix enables engineers to make decisions
(13:26) Building Netflix Live for global sports
(16:25) Learnings from Paul vs. Tyson for NFL Live
(17:47) Inside the control room
(20:35) What being unusually responsible looks like
(24:15) Balancing team autonomy with guardrails for Live
(30:55) The high talent bar and introduction of levels at Netflix
(36:01) The Keeper Test
(41:27) Why engineers leave or stay
(44:27) How AI tools are used at Netflix
(47:54) AI’s highest-impact use cases
(50:20) What new grads add and why senior talent still matters
(53:25) Open source at Netflix
(57:07) Elizabeth’s parting advice for new engineers to succeed at Netflix
—
The Pragmatic Engineer deepdives relevant for this episode:
• The end of the senior-only level at Netflix
• Netflix revamps its compensation philosophy
• Live streaming at world-record scale with Ashutosh Agrawal
• Shipping to production
• What is good software architecture?
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Companies like Graphite, Notion, and Brex rely on Statsig to measure the impact of the pace they ship. Get a 30-day enterprise trial here.
• Linear – The system for modern product development. Linear is a heavy user of Swift: they just redesigned their native iOS app using their own take on Apple’s Liquid Glass design language. The new app is about speed and performance – just like Linear is. Check it out.
—
Chris Lattner is one of the most influential engineers of the past two decades. He created the LLVM compiler infrastructure and the Swift programming language – and Swift opened iOS development to a broader group of engineers. With Mojo, he’s now aiming to do the same for AI, by lowering the barrier to programming AI applications.
I sat down with Chris in San Francisco to talk language design, lessons from designing Swift and Mojo, and – of course! – compilers. It’s hard to find someone who is as enthusiastic and knowledgeable about compilers as Chris is!
We also discussed why experts often resist change even when current tools slow them down, what he learned about AI and hardware from his time across both large and small engineering teams, and why compiler engineering remains one of the best ways to understand how software really works.
—
Timestamps
(00:00) Intro
(02:35) Compilers in the early 2000s
(04:48) Why Chris built LLVM
(08:24) GCC vs. LLVM
(09:47) LLVM at Apple
(19:25) How Chris got support to go open source at Apple
(20:28) The story of Swift
(24:32) The process for designing a language
(31:00) Learnings from launching Swift
(35:48) Swift Playgrounds: making coding accessible
(40:23) What Swift solved and the technical debt it created
(47:28) AI learnings from Google and Tesla
(51:23) SiFive: learning about hardware engineering
(52:24) Mojo’s origin story
(57:15) Modular’s bet on a two-level stack
(1:01:49) Compiler shortcomings
(1:09:11) Getting started with Mojo
(1:15:44) How big is Modular, as a company?
(1:19:00) AI coding tools the Modular team uses
(1:22:59) What kind of software engineers Modular hires
(1:25:22) A programming language for LLMs? No thanks
(1:29:06) Why you should study and understand compilers
—
The Pragmatic Engineer deepdives relevant for this episode:
• AI Engineering in the real world
• The AI Engineering stack
• Uber's crazy YOLO app rewrite, from the front seat
• Python, Go, Rust, TypeScript and AI with Armin Ronacher
• Microsoft’s developer tools roots
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Linear – The system for modern product development.
—
Addy Osmani is Head of Chrome Developer Experience at Google, where he leads teams focused on improving performance, tooling, and the overall developer experience for building on the web. If you’ve ever opened Chrome’s Developer Tools, you’ve definitely used features Addy has built. He’s also the author of several books, including his latest, Beyond Vibe Coding, which explores how AI is changing software development.
In this episode of The Pragmatic Engineer, I sit down with Addy to discuss how AI is reshaping software engineering workflows, the tradeoffs between speed and quality, and why understanding generated code remains critical. We dive into his article The 70% Problem, which explains why AI tools accelerate development but struggle with the final 30% of software quality—and why this last 30% is tackled easily by software engineers who understand how the system actually works.
—
Timestamps
(00:00) Intro
(02:17) Vibe coding vs. AI-assisted engineering
(06:07) How Addy uses AI tools
(13:10) Addy’s learnings about applying AI for development
(18:47) Addy’s favorite tools
(22:15) The 70% Problem
(28:15) Tactics for efficient LLM usage
(32:58) How AI tools evolved
(34:29) The case for keeping expectations low and control high
(38:05) Autonomous agents and working with them
(42:49) How the EM and PM role changes with AI
(47:14) The rise of new roles and shifts in developer education
(48:11) The importance of critical thinking when working with AI
(54:08) LLMs as a tool for learning
(1:03:50) Rapid questions
—
The Pragmatic Engineer deepdives relevant for this episode:
• Vibe Coding as a software engineer
• How AI-assisted coding will change software engineering: hard truths
• AI Engineering in the real world
• The AI Engineering stack
• How Claude Code is built
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Something interesting is happening with the latest generation of tech giants. Rather than building advanced experimentation tools themselves, companies like Anthropic, Figma, Notion and a bunch of others… are just using Statsig. Statsig has rebuilt this entire suite of data tools that was available at maybe 10 or 15 giants until now. Check out Statsig.
• Linear – The system for modern product development. Linear is just so fast to use – and it enables velocity in product workflows. Companies like Perplexity and OpenAI have already switched over, because simplicity scales. Go ahead and check out Linear and see why it feels like a breeze to use.
—
What is it really like to be an engineer at Google?
In this special deep dive episode, we unpack how engineering at Google actually works. We spent months researching the engineering culture of the search giant, and talked with 20+ current and former Googlers to bring you this deepdive with Elin Nilsson, tech industry researcher for The Pragmatic Engineer and a former Google intern.
Google has always been an engineering-driven organization. We talk about its custom stack and tools, the design-doc culture, and the performance and promotion systems that define career growth. We also explore the culture that feels built for engineers: generous perks, a surprisingly light on-call setup often considered the best in the industry, and a deep focus on solving technical problems at scale.
If you are thinking about applying to Google or are curious about how the company’s engineering culture has evolved, this episode takes a clear look at what it was like to work at Google in the past versus today, and who is a good fit for today’s Google.
Jump to interesting parts:
(13:50) Tech stack
(1:05:08) Performance reviews (GRAD)
(2:07:03) The culture of continuously rewriting things
—
Timestamps
(00:00) Intro
(01:44) Stats about Google
(11:41) The shared culture across Google
(13:50) Tech stack
(34:33) Internal developer tools and monorepo
(43:17) The downsides of having so many internal tools at Google
(45:29) Perks
(55:37) Engineering roles
(1:02:32) Levels at Google
(1:05:08) Performance reviews (GRAD)
(1:13:05) Readability
(1:16:18) Promotions
(1:25:46) Design docs
(1:32:30) OKRs
(1:44:43) Googlers, Nooglers, ReGooglers
(1:57:27) Google Cloud
(2:03:49) Internal transfers
(2:07:03) Rewrites
(2:10:19) Open source
(2:14:57) Culture shift
(2:31:10) Making the most of Google, as an engineer
(2:39:25) Landing a job at Google
—
The Pragmatic Engineer deepdives relevant for this episode:
• Inside Google’s engineering culture
• Oncall at Google
• Performance calibrations at tech companies
• Promotions and tooling at Google
• How Kubernetes is built
• The man behind the Big Tech comics: Google cartoonist Manu Cornet
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Most teams end up in this situation: ship a feature to 10% of users, wait a week, check three different tools, try to correlate the data, and you’re still unsure if it worked. The problem is that each tool has its own user identification and segmentation logic. Statsig solved this problem by building everything within a unified platform. Check out Statsig.
• Linear – The system for modern product development. In the episode, Armin talks about how he uses an army of “AI interns” at his startup. With Linear, you can easily do the same: Linear’s Cursor integration lets you add Cursor as an agent to your workspace. This agent then works alongside you and your team to make code changes or answer questions. You’ve got to try it out: give Linear a spin and see how it integrates with Cursor.
—
Armin Ronacher is the creator of the Flask framework for Python, was one of the first engineers hired at Sentry, and is now the co-founder of a new startup. He has spent his career thinking deeply about how tools shape the way we build software.
In this episode of The Pragmatic Engineer Podcast, he joins me to talk about how programming languages compare, why Rust may not be ideal for early-stage startups, and how AI tools are transforming the way engineers work. Armin shares his view on what continues to make certain languages worth learning, and how agentic coding is driving people to work more, sometimes to their own detriment.
We also discuss:
• Why the Python 2 to 3 migration was more challenging than expected
• How Python, Go, Rust, and TypeScript stack up for different kinds of work
• How AI tools are changing the need for unified codebases
• What Armin learned about error handling from his time at Sentry
• And much more
Jump to interesting parts:
• (06:53) How Python, Go, and Rust stack up and when to use each one
• (30:08) Why Armin has changed his mind about AI tools
• (50:32) How important are language choices from an error-handling perspective?
—
Timestamps
(00:00) Intro
(01:34) Why the Python 2 to 3 migration created so many challenges
(06:53) How Python, Go, and Rust stack up and when to use each one
(08:35) The friction points that make Rust a bad fit for startups
(12:28) How Armin thinks about choosing a language for building a startup
(22:33) How AI is impacting the need for unified code bases
(24:19) The use cases where AI coding tools excel
(30:08) Why Armin has changed his mind about AI tools
(38:04) Why different programming languages still matter but may not in an AI-driven future
(42:13) Why agentic coding is driving people to work more and why that’s not always good
(47:41) Armin’s error-handling takeaways from working at Sentry
(50:32) How important is language choice from an error-handling perspective
(56:02) Why the current SDLC still doesn’t prioritize error handling
(1:04:18) The challenges language designers face
(1:05:40) What Armin learned from working in startups and who thrives in that environment
(1:11:39) Rapid fire round
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Statsig built a complete set of data tools that allow engineering teams to measure the impact of their work. This toolkit is SO valuable to so many teams that OpenAI – itself a huge user of Statsig – decided to acquire the company, announced just last week. Talk about validation! Check out Statsig.
• Linear – The system for modern product development. Here’s an interesting story: OpenAI switched to Linear as a way to establish a shared vocabulary between teams. Every project now follows the same lifecycle, uses the same labels, and moves through the same states. Try Linear for yourself.
—
What does it take to do well at a hyper-growth company? In this episode of The Pragmatic Engineer, I sit down with Charles-Axel Dein, one of the first engineers at Uber, who later hired me there. Since then, he’s gone on to work at CloudKitchens. He’s also been maintaining the popular Professional programming reading list GitHub repo for 15 years, where he collects articles that made him a better programmer.
In our conversation, we dig into what it’s really like to work inside companies that grow rapidly in scale and headcount. Charles shares what he’s learned about personal productivity, project management, incidents, interviewing, plus how to build flexible skills that hold up in fast-moving environments.
Jump to interesting parts:
• 10:41 – the reality of working inside a hyperscale company
• 41:10 – the traits of high-performing engineers
• 1:03:31 – Charles’s advice for getting hired in today’s job market
We also discuss:
• How to spot the signs of hypergrowth (and when it’s slowing down)
• What sets high-performing engineers apart beyond shipping
• Charles’s personal productivity tips, favorite reads, and how he uses reading to uplevel his skills
• Strategic tips for building your resume and interviewing
• How imposter syndrome is normal, and how leaning into it helps you grow
• And much more!
If you’re at a fast-growing company, considering joining one, or looking to land your next role, you won’t want to miss this practical advice on hiring, interviewing, productivity, leadership, and career growth.
—
Timestamps
(00:00) Intro
(04:04) Early days at Uber as engineer #20
(08:12) CloudKitchens’ similarities with Uber
(10:41) The reality of working at a hyperscale company
(19:05) Tenancies and how Uber deployed new features
(22:14) How CloudKitchens handles incidents
(26:57) Hiring during fast-growth
(34:09) Avoiding burnout
(38:55) The popular Professional programming reading list repo
(41:10) The traits of high-performing engineers
(53:22) Project management tactics
(1:03:31) How to get hired as a software engineer
(1:12:26) How AI is changing hiring
(1:19:26) Unexpected ways to thrive in fast-paced environments
(1:20:45) Dealing with imposter syndrome
(1:22:48) Book recommendations
(1:27:26) The problem with survival bias
(1:32:44) AI’s impact on software development
(1:42:28) Rapid fire round
—
The Pragmatic Engineer deepdives relevant for this episode:
• Software engineers leading projects
• The Platform and Program split at Uber
• Inside Uber’s move to the Cloud
• How Uber built its observability platform
• From Software Engineer to AI Engineer – with Janvi Kalra
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Statsig built a complete set of data tools that allow engineering teams to measure the impact of their work. This toolkit is SO valuable to so many teams that OpenAI – itself a huge user of Statsig – decided to acquire the company, announced just last week. Talk about validation! Check out Statsig.
• Linear – The system for modern product development. Here’s an interesting story: OpenAI switched to Linear as a way to establish a shared vocabulary between teams. Every project now follows the same lifecycle, uses the same labels, and moves through the same states. Try Linear for yourself.
—
The Pragmatic Engineer Podcast is back with the Fall 2025 season. Looking ahead, expect new episodes to be published on most Wednesdays.
Code Complete is one of the most enduring books on software engineering. Steve McConnell wrote the 900-page handbook just five years into his career, capturing what he wished he’d known when starting out. Decades later, the lessons remain relevant, and Code Complete remains a best-seller.
In this episode, we talk about what has aged well, what needed updating in the second edition, and the broader career principles Steve has developed along the way. From his “career pyramid” model to his critique of “lily pad hopping,” and why periods of working in fast-paced, all-in environments can be so rewarding, the emphasis throughout is on taking ownership of your career and making deliberate choices.
We also discuss:
• Top-down vs. bottom-up design and why most engineers default to one approach
• Why rewriting code multiple times makes it better
• How taking a year off to write Code Complete crystallized key lessons
• The 3 areas software designers need to understand, and why focusing only on technology may be the most limiting
• And much more!
Steve rarely gives interviews, so I hope you enjoy this conversation, which we recorded in Seattle.
—
Timestamps
(00:00) Intro
(01:31) How and why Steve wrote Code Complete
(08:08) What code construction is and how it differs from software development
(11:12) Top-down vs. bottom-up design approach
(14:46) Why design documents frustrate some engineers
(16:50) The case for rewriting everything three times
(20:15) Steve’s career before and after Code Complete
(27:47) Steve’s career advice
(44:38) Three areas software designers need to understand
(48:07) Advice when becoming a manager, as a developer
(53:02) The importance of managing your energy
(57:07) Early Microsoft and why startups are a culture of intense focus
(1:04:14) What changed in the second edition of Code Complete
(1:10:50) AI’s impact on software development: Steve’s take
(1:17:45) Code reviews and GenAI
(1:19:58) Why engineers are becoming more full-stack
(1:21:40) Could AI be the exception to “no silver bullets?”
(1:26:31) Steve’s advice for engineers on building a meaningful career
—
The Pragmatic Engineer deepdives relevant for this episode:
• What changed in 50 years of computing
• The past and future of modern backend practices
• The Philosophy of Software Design – with John Ousterhout
• AI tools for software engineers, but without the hype – with Simon Willison (co-creator of Django)
• TDD, AI agents and coding – with Kent Beck
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Brought to You By:
• WorkOS — The modern identity platform for B2B SaaS.
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Sonar — Code quality and code security for ALL code.
—
In this episode of The Pragmatic Engineer, I sit down with Peter Walker, Head of Insights at Carta, to break down how venture capital and startups themselves are changing.
We go deep on the numbers: why fewer companies are getting funded despite record VC investment levels, how hiring has shifted dramatically since 2021, and why solo founders are on the rise even though most VCs still prefer teams. We also unpack the growing emphasis on ARR per FTE, what actually happens in bridge and down rounds, and why the time between fundraising rounds has stretched far beyond the old 18-month cycle.
We cover what all this means for engineers: what to ask before joining a startup, how to interpret valuation trends, and what kind of advisor roles startups are actually looking for.
If you work at a startup, are considering joining one, or just want a clearer picture of how venture-backed companies operate today, this episode is for you.
—
Timestamps
(00:00) Intro
(01:21) How venture capital works and the goal of VC-backed startups
(03:10) Venture vs. non-venture backed businesses
(05:59) Why venture-backed companies prioritize growth over profitability
(09:46) A look at the current health of venture capital
(13:19) The hiring slowdown at startups
(16:00) ARR per FTE: The new metric VCs care about
(21:50) Priced seed rounds vs. SAFEs
(24:48) Why some founders are incentivized to raise at high valuations
(29:31) What a bridge round is and why they can signal trouble
(33:15) Down rounds and how optics can make or break startups
(36:47) Why working at startups offers more ownership and learning
(37:47) What the data shows about raising money in the summer
(41:45) The length of time it takes to close a VC deal
(44:29) How AI is reshaping startup formation, team size, and funding trends
(48:11) Why VCs don’t like solo founders
(50:06) How employee equity (ESOPs) work
(53:50) Why acquisition payouts are often smaller than employees expect
(55:06) Deep tech vs. software startups
(57:25) Startup advisors: What they do, how much equity they get
(1:02:08) Why time between rounds is increasing and what that means
(1:03:57) Why it’s getting harder to get from Seed to Series A
(1:06:47) A case for quitting (sometimes)
(1:11:40) How to evaluate a startup before joining as an engineer
(1:13:22) The skills engineers need to thrive in a startup environment
(1:16:04) Rapid fire round
—
See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Supported by Our Partners
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Graphite — The AI developer productivity platform.
—
There’s no shortage of bold claims about AI and developer productivity, but how do you separate signal from noise?
In this episode of The Pragmatic Engineer, I’m joined by Laura Tacho, CTO at DX, to cut through the hype and share how well (or not) AI tools are actually working inside engineering orgs. Laura shares insights from DX’s research across 180+ companies, including surprising findings about where developers save the most time, why some devs don’t use AI at all, and what kinds of rollouts lead to meaningful impact.
We also discuss:
• The problem with oversimplified AI headlines and how to think more critically about them
• An overview of the DX AI Measurement framework
• Learnings from Booking.com’s AI tool rollout
• Common reasons developers aren’t using AI tools
• Why using AI tools sometimes decreases developer satisfaction
• Surprising results from DX’s 180+ company study
• How AI-generated documentation differs from human-written docs
• Why measuring developer experience before rolling out AI is essential
• Why Laura thinks roadmaps are on their way out
• And much more!
—
Timestamps
(00:00) Intro
(01:23) Laura’s take on overhyped AI headlines
(10:46) Common questions Laura gets about AI implementation
(11:49) How to measure AI’s impact
(15:12) Why acceptance rate and lines of code are not sufficient measures of productivity
(18:03) The Booking.com case study
(20:37) Why some employees are not using AI
(24:20) What developers are actually saving time on
(29:14) What happens with the time savings
(31:10) The surprising results from the DORA report on AI in engineering
(33:44) A hypothesis around AI and flow state and the importance of talking to developers
(35:59) What’s working in AI architecture
(42:22) Learnings from WorkHuman’s adoption of Copilot
(47:00) Consumption-based pricing, and the difficulty of allocating resources to AI
(52:01) What DX Core 4 measures
(55:32) The best outcomes of implementing AI
(58:56) Why highly regulated industries are having the best results with AI rollout
(1:00:30) Indeed’s structured AI rollout
(1:04:22) Why migrations might be a good use case for AI (and a tip for doing it!)
(1:07:30) Advice for engineering leads looking to get better at AI tooling and implementation
(1:08:49) Rapid fire round
—
The Pragmatic Engineer deepdives relevant for this episode:
• AI Engineering in the real world
• Measuring software engineering productivity
• The AI Engineering stack
• A new way to measure developer productivity – from the creators of DORA and SPACE
—
See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Supported by Our Partners• WorkOS — The modern identity platform for B2B SaaS.• Statsig — The unified platform for flags, analytics, experiments, and more.• Sonar — Code quality and code security for ALL code.—Steve Yegge is known for his writing and “rants”, including the famous “Google Platforms Rant” and the evergreen “Get that job at Google” post. He spent 7 years at Amazon and 13 at Google, as well as some time at Grab before briefly retiring from tech. Now out of retirement, he’s building AI developer tools at Sourcegraph—drawn back by the excitement of working with LLMs. He’s currently writing the book Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond.In this episode of The Pragmatic Engineer, I sat down with Steve in Seattle to talk about why Google consistently failed at building platforms, why AI coding feels easy but is hard to master, and why a new role, the AI Fixer, is emerging. We also dig into why he’s so energized by today’s AI tools, and how they’re changing the way software gets built.We also discuss: • The “interview anti-loop” at Google and the problems with interviews• An inside look at how Amazon operated in the early days before microservices • What Steve liked about working at Grab• Reflecting on the Google platforms rant and why Steve thinks Google is still terrible at building platforms• Why Steve came out of retirement• The emerging role of the “AI Fixer” in engineering teams• How AI-assisted coding is deceptively simple, but extremely difficult to steer• Steve’s advice for using AI coding tools and overcoming common challenges• Predictions about the future of developer productivity• A case for AI creating a real meritocracy • And much more!—Timestamps(00:00) Intro(04:55) An explanation of the interview anti-loop at Google and the shortcomings of interviews(07:44) Work trials and why entry-level jobs aren’t posted for big tech companies(09:50) An overview of the difficult process of landing a job as a 
software engineer(15:48) Steve’s thoughts on Grab and why he loved it(20:22) Insights from the Google platforms rant that was picked up by TechCrunch(27:44) The impact of the Google platforms rant(29:40) What Steve discovered about print ads not working for Google (31:48) What went wrong with Google+ and Wave(35:04) How Amazon has changed and what Google is doing wrong(42:50) Why Steve came out of retirement (45:16) Insights from “the death of the junior developer” and the impact of AI(53:20) The new role Steve predicts will emerge (54:52) Changing business cycles(56:08) Steve’s new book about vibe coding and Gergely’s experience (59:24) Reasons people struggle with AI tools(1:02:36) What will developer productivity look like in the future(1:05:10) The cost of using coding agents (1:07:08) Steve’s advice for vibe coding(1:09:42) How Steve used AI tools to work on his game Wyvern (1:15:00) Why Steve thinks there will actually be more jobs for developers (1:18:29) A comparison between game engines and AI tools(1:21:13) Why you need to learn AI now(1:30:08) Rapid fire round—The Pragmatic Engineer deepdives relevant for this episode:• The full circle of developer productivity with Steve Yegge• Inside Amazon’s engineering culture• Vibe coding as a software engineer• AI engineering in the real world• The AI Engineering stack• Inside Sourcegraph’s engineering culture—See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Supported by Our Partners• Statsig — The unified platform for flags, analytics, experiments, and more.• Graphite — The AI developer productivity platform. • Augment Code — AI coding assistant that pro engineering teams love.—Steve Huynh spent 17 years at Amazon, including four as a Principal Engineer. In this episode of The Pragmatic Engineer, I join Steve in his studio for a deep dive into what the Principal role actually involves, why the path from Senior to Principal is so tough, and how even strong engineers can get stuck. Not because they’re unqualified, but because the bar is exceptionally high.We discuss what’s expected at the Principal level, the kind of work that matters most, and the trade-offs that come with the title. Steve also shares how Amazon’s internal policies shaped his trajectory, and what made the Principal Engineer community one of the most rewarding parts of his time at the company.We also go into: • Why being promoted from Senior to Principal is one of the hardest jumps in tech• How Amazon’s freedom of movement policy helped Steve work across multiple teams, from Kindle to Prime Video• The scale of Amazon: handling 10k–100k+ requests per second and what that means for engineering• Why latency became a company-wide obsession—and the research that tied it directly to revenue• Why companies should start with a monolith, and what led Amazon to adopt microservices• What makes the Principal Engineering community so special • Amazon’s culture of learning from its mistakes, including COEs (correction of errors) • The pros and cons of the Principal Engineer role• What Steve loves about the leadership principles at Amazon• Amazon’s intense writing culture and 6-pager format • Why Amazon patents software and what that process looks like• And much more!—Timestamps(00:00) Intro(01:11) What Steve worked on at Amazon, including Kindle, Prime Video, and payments(04:38) How Steve was able to work on so many teams at Amazon (09:12) An overview of the scale of 
Amazon and the dependency chain(16:40) Amazon’s focus on latency and the tradeoffs they make to keep latency low at scale(26:00) Why companies should start with a monolith (26:44) The structure of engineering at Amazon and why Amazon’s Principal is so hard to reach(30:44) The Principal Engineering community at Amazon(36:06) The learning benefits of working for a tech giant (38:44) Five challenges of being a Principal Engineer at Amazon(49:50) The types of managing work you have to do as a Principal Engineer (51:47) The pros and cons of the Principal Engineer role (54:59) What Steve loves about Amazon’s leadership principles(59:15) Amazon’s intense focus on writing (1:01:11) Patents at Amazon (1:07:58) Rapid fire round—The Pragmatic Engineer deepdives relevant for this episode:• Inside Amazon’s engineering culture—See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Supported by Our Partners• WorkOS — The modern identity platform for B2B SaaS.• Statsig — The unified platform for flags, analytics, experiments, and more.• Sonar — Code quality and code security for ALL code. —What happens when a company goes all in on AI?At Shopify, engineers are expected to use AI tools, and they’ve been doing so for longer than most. Thanks to early access to models from GitHub Copilot, OpenAI, and Anthropic, the company has had a head start in figuring out what works.In this live episode from LDX3 in London, I spoke with Farhan Thawar, VP of Engineering, about how Shopify is building with AI across the entire stack. We cover the company’s internal LLM proxy, its policy of unlimited token usage, and how interns help push the boundaries of what’s possible.In this episode, we cover:• How Shopify works closely with AI labs• The story behind Shopify’s recent Code Red• How non-engineering teams are using Cursor for vibe coding• Tobi Lütke’s viral memo and Shopify’s expectations around AI• A look inside Shopify’s LLM proxy—used for privacy, token tracking, and more• Why Shopify places no limit on AI token spending • Why AI-first isn’t about reducing headcount—and why Shopify is hiring 1,000 interns• How Shopify’s engineering department operates and what’s changed since adopting AI tooling• Farhan’s advice for integrating AI into your workflow• And much more!—Timestamps(00:00) Intro(02:07) Shopify’s philosophy: “hire smart people and pair with them on problems”(06:22) How Shopify works with top AI labs (08:50) The recent Code Red at Shopify(10:47) How Shopify became early users of GitHub Copilot and their pivot to trying multiple tools(12:49) The surprising ways non-engineering teams at Shopify are using Cursor(14:53) Why you have to understand code to submit a PR at Shopify(16:42) AI tools' impact on SaaS (19:50) Tobi Lütke’s AI memo(21:46) Shopify’s LLM proxy and how they protect their privacy(23:00) How Shopify uses MCPs(26:59) Why AI tools 
aren’t the place to pinch pennies(30:02) Farhan’s projects and favorite AI tools(32:50) Why AI-first isn’t about freezing headcount and the value of hiring interns(36:20) How Shopify’s engineering department operates, including internal tools(40:31) Why Shopify added coding interviews for director-level and above hires(43:40) What has changed since Shopify added AI tooling (44:40) Farhan’s advice for implementing AI tools—The Pragmatic Engineer deepdives relevant for this episode:• How Shopify built its Live Globe for Black Friday• Inside Shopify's leveling split• Real-world engineering challenges: building Cursor• How Anthropic built Artifacts—See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Supported by Our Partners• Statsig — The unified platform for flags, analytics, experiments, and more.• Graphite — The AI developer productivity platform. • Augment Code — AI coding assistant that pro engineering teams love.—GitHub recently turned 17 years old—but how did it start, how has it evolved, and what does the future look like as AI reshapes developer workflows?In this episode of The Pragmatic Engineer, I’m joined by Thomas Dohmke, CEO of GitHub. Thomas has been a GitHub user for 16 years and an employee for 7. We talk about GitHub’s early architecture, its remote-first operating model, and how the company is navigating AI—from Copilot to agents. We also discuss why GitHub hires junior engineers, how the company handled product-market fit early on, and why being a beloved tool can make shipping harder at times.Other topics we discuss include:• How GitHub’s architecture evolved beyond its original Rails monolith• How GitHub runs as a remote-first company—and why they rarely use email • GitHub’s rigorous approach to security• Why GitHub hires junior engineers• GitHub’s acquisition by Microsoft• The launch of Copilot and how it’s reshaping software development• Why GitHub sees AI agents as tools, not a replacement for engineers• And much more!—Timestamps(00:00) Intro(02:25) GitHub’s modern tech stack(08:11) From cloud-first to hybrid: How GitHub handles infrastructure(13:08) How GitHub’s remote-first culture shapes its operations(18:00) Former and current internal tools including Haystack(21:12) GitHub’s approach to security (24:30) The current size of GitHub, including security and engineering teams(25:03) GitHub’s intern program, and why they are hiring junior engineers(28:27) Why AI isn’t a replacement for junior engineers (34:40) A mini-history of GitHub (39:10) Why GitHub hit product market fit so quickly (43:44) The invention of pull requests(44:50) How GitHub enables offline work(46:21) How monetization has changed at GitHub since the acquisition 
(48:00) 2014 desktop application releases (52:10) The Microsoft acquisition (1:01:57) Behind the scenes of GitHub’s quiet period (1:06:42) The release of Copilot and its impact(1:14:14) Why GitHub decided to open-source Copilot extensions(1:20:01) AI agents and the myth of disappearing engineering jobs(1:26:36) Closing—The Pragmatic Engineer deepdives relevant for this episode:• AI Engineering in the real world• The AI Engineering stack• How Linux is built with Greg Kroah-Hartman• Stacked Diffs (and why you should know about them)• 50 Years of Microsoft and developer tools—See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Supported by Our Partners• Sonar — Code quality and code security for ALL code. • Statsig — The unified platform for flags, analytics, experiments, and more.• Augment Code — AI coding assistant that pro engineering teams love.—Kent Beck is one of the most influential figures in modern software development. Creator of Extreme Programming (XP), co-author of The Agile Manifesto, and a pioneer of Test-Driven Development (TDD), he’s shaped how teams write, test, and think about code.Now, with over five decades of programming experience, Kent is still pushing boundaries—this time with AI coding tools. In this episode of The Pragmatic Engineer, I sit down with him to talk about what’s changed, what hasn’t, and why he’s more excited than ever to code.In our conversation, we cover:• Why Kent calls AI tools an “unpredictable genie”—and how he’s using them• Why Kent no longer has an emotional attachment to any specific programming language• The backstory of The Agile Manifesto—and why Kent resisted the word “agile”• An overview of XP (Extreme Programming) and how Grady Booch played a role in the name • Tape-to-tape experiments in Kent’s childhood that laid the groundwork for TDD• Kent’s time at Facebook and how he adapted to its culture and use of feature flags• And much more!—Timestamps(00:00) Intro(02:27) What Kent has been up to since writing Tidy First(06:05) Why AI tools are making coding more fun for Kent and why he compares it to a genie(13:41) Why Kent says languages don’t matter anymore(16:56) Kent’s current project building a Smalltalk server(17:51) How Kent got involved with The Agile Manifesto(23:46) Gergely’s time at JP Morgan, and why Kent didn’t like the word ‘agile’(26:25) An overview of “extreme programming” (XP) (35:41) Kent’s childhood tape-to-tape experiments that inspired TDD(42:11) Kent’s response to Ousterhout’s criticism of TDD(50:05) Why Kent still uses TDD with his AI stack (54:26) How Facebook operated in 2011(1:04:10) Facebook in 2011 vs. 
2017(1:12:24) Rapid fire round—The Pragmatic Engineer deepdives relevant for this episode:—See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Supported by Our Partners• Statsig — The unified platform for flags, analytics, experiments, and more.• Sinch — Connect with customers at every step of their journey.• Modal — The cloud platform for building AI applications.—How has Microsoft changed since its founding in 1975, especially in how it builds tools for developers?In this episode of The Pragmatic Engineer, I sit down with Scott Guthrie, Executive Vice President of Cloud and AI at Microsoft. Scott has been with the company for 28 years. He built the first prototype of ASP.NET, led the Windows Phone team, led Azure, and helped shape many of Microsoft’s most important developer platforms.We talk about Microsoft’s journey from building early dev tools to becoming a top cloud provider—and how it actively worked to win back and grow its developer base.In this episode, we cover:• Microsoft’s early years building developer tools • Why Visual Basic faced resistance from devs back in the day, even though it simplified development at the time• How .NET helped bring a new generation of server-side developers into Microsoft’s ecosystem• Why Windows Phone didn’t succeed • The 90s Microsoft dev stack: docs, debuggers, and more• How Microsoft Azure went from being the #7 cloud provider to the #2 spot today• Why Microsoft created VS Code• How VS Code and open source led to the acquisition of GitHub• What Scott’s excited about in the future of developer tools and AI• And much more!—Timestamps(00:00) Intro(02:25) Microsoft’s early years building developer tools(06:15) How Microsoft’s developer tools helped Windows succeed(08:00) Microsoft’s first tools were built to allow less technically savvy people to build things(11:00) A case for embracing the technology that’s coming(14:11) Why Microsoft built Visual Studio and .NET(19:54) Steve Ballmer’s speech about .NET(22:04) The origins of C# and Anders Hejlsberg’s impact on Microsoft (25:29) The 90s Microsoft stack, including documentation, debuggers, and more(30:17) How 
productivity has changed over the past 10 years (32:50) Why Gergely was a fan of Windows Phone—and Scott’s thoughts on why it didn’t last(36:43) Lessons from working on (and fixing) Azure under Satya Nadella (42:50) CodePlex and the acquisition of GitHub(48:52) 2014: Three bold projects to win the hearts of developers(55:40) What Scott’s excited about in new developer tools and cloud computing (59:50) Why Scott thinks AI will enhance productivity but create more engineering jobs—The Pragmatic Engineer deepdives relevant for this episode:• Microsoft is dogfooding AI dev tools’ future• Microsoft’s developer tools roots• Why are Cloud Development Environments spiking in popularity, now?• Engineering career paths at Big Tech and scaleups• How Linux is built with Greg Kroah-Hartman—See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Supported by Our Partners• Statsig — The unified platform for flags, analytics, experiments, and more.• Sinch — Connect with customers at every step of their journey.• Cortex — Your Portal to Engineering Excellence.—What does it take to land a job as an AI Engineer—and thrive in the role?In this episode of The Pragmatic Engineer, I’m joined by Janvi Kalra, currently an AI Engineer at OpenAI. Janvi shares how she broke into tech with internships at top companies, landed a full-time software engineering role at Coda, and later taught herself the skills to move into AI Engineering by building projects in her free time, joining hackathons, and ultimately proving herself and earning a spot on Coda’s first AI Engineering team.In our conversation, we dive into the world of AI Engineering and discuss three types of AI companies, how to assess them based on profitability and growth, and practical advice for landing your dream job in the field.We also discuss the following: • How Janvi landed internships at Google and Microsoft, and her tips for interview prepping• A framework for evaluating AI startups• An overview of what an AI Engineer does• A mini curriculum for self-learning AI: practical tools that worked for Janvi• The Coda project that impressed CEO Shishir Mehrotra and sparked Coda Brain• Janvi’s role at OpenAI and how the safety team shapes responsible AI• How OpenAI blends startup speed with big tech scale• Why AI Engineers must be ready to scrap their work and start over• Why today’s engineers need to be product-minded, design-aware, full-stack, and focused on driving business outcomes• And much more!—Timestamps(00:00) Intro(02:31) How Janvi got her internships at Google and Microsoft(03:35) How Janvi prepared for her coding interviews (07:11) Janvi’s experience interning at Google(08:59) What Janvi worked on at Microsoft (11:35) Why Janvi chose to work for a startup after college(15:00) How Janvi picked Coda (16:58) Janvi’s criteria for picking a 
startup now (18:20) How Janvi evaluates ‘customer obsession’ (19:12) Fast—an example of the downside of not doing due diligence(21:38) How Janvi made the jump to Coda’s AI team(25:48) What an AI Engineer does (27:30) How Janvi developed her AI Engineering skills through hackathons(30:34) Janvi’s favorite AI project at Coda: Workspace Q&A (37:40) Learnings from interviewing at 46 companies(40:44) Why Janvi decided to get experience working for a model company (43:17) Questions Janvi asks to determine growth and profitability(45:28) How Janvi got an offer at OpenAI, and an overview of the interview process(49:08) What Janvi does at OpenAI (51:01) What makes OpenAI unique (52:30) The shipping process at OpenAI(55:41) Surprising learnings from AI Engineering (57:50) How AI might impact new graduates (1:02:19) The impact of AI tools on coding—what is changing, and what remains the same(1:07:51) Rapid fire round—The Pragmatic Engineer deepdives relevant for this episode:• AI Engineering in the real world• The AI Engineering stack• Building, launching, and scaling ChatGPT Images—See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Supported by Our Partners• WorkOS — The modern identity platform for B2B SaaS.• Modal — The cloud platform for building AI applications.• Cortex — Your Portal to Engineering Excellence.—Kubernetes is the second-largest open-source project in the world. What does it actually do—and why is it so widely adopted?In this episode of The Pragmatic Engineer, I’m joined by Kat Cosgrove, who has led several Kubernetes releases. Kat has been contributing to Kubernetes for several years, and originally got involved with the project through K3s (the lightweight Kubernetes distribution).In our conversation, we discuss how Kubernetes is structured, how it scales, and how the project is managed to avoid contributor burnout.We also go deep into: • An overview of what Kubernetes is used for• A breakdown of Kubernetes architecture: components, pods, and kubelets• Why Google built Borg, and how it evolved into Kubernetes• The benefits of large-scale open source projects—for companies, contributors, and the broader ecosystem• The size and complexity of Kubernetes—and how it’s managed• How the project protects contributors with anti-burnout policies• The size and structure of the release team• What KEPs are and how they shape Kubernetes features• Kat’s views on GenAI, and why Kubernetes blocks using AI, at least for documentation• Where Kat would like to see AI tools improve developer workflows• Getting started as a contributor to Kubernetes—and the career and networking benefits that come with it• And much more!—Timestamps(00:00) Intro(02:02) An overview of Kubernetes and who it’s for (04:27) A quick glimpse at the architecture: Kubernetes components, pods, and kubelets(07:00) Containers vs. 
virtual machines (10:02) The origins of Kubernetes (12:30) Why Google built Borg, and why they made it an open source project(15:51) The benefits of open source projects (17:25) The size of Kubernetes(20:55) Cluster management solutions, including different Kubernetes services(21:48) Why people contribute to Kubernetes (25:47) The anti-burnout policies Kubernetes has in place (29:07) Why Kubernetes is so popular(33:34) Why documentation is a good place to get started contributing to an open-source project(35:15) The structure of the Kubernetes release team (40:55) How responsibilities shift as engineers grow into senior positions(44:37) Using a KEP to propose a new feature—and what’s next(48:20) Feature flags in Kubernetes (52:04) Why Kat thinks most GenAI tools are scams—and why Kubernetes blocks their use(55:04) The use cases Kat would like to have AI tools for(58:20) When to use Kubernetes (1:01:25) Getting started with Kubernetes (1:04:24) How contributing to an open source project is a good way to build your network(1:05:51) Rapid fire round—The Pragmatic Engineer deepdives relevant for this episode:• Backstage: an open source developer portal• How Linux is built with Greg Kroah-Hartman• Software engineers leading projects• What TPMs do and what software engineers can learn from them• Engineering career paths at Big Tech and scaleups—See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Supported by Our Partners• Modal — The cloud platform for building AI applications.• CodeRabbit — Cut code review time and bugs in half. Use the code PRAGMATIC to get one month free.—What happens when LLMs meet real-world codebases? In this episode of The Pragmatic Engineer, I am joined by Varun Mohan, CEO and Co-Founder of Windsurf. Varun talks me through the technical challenges of building an AI-native IDE (Windsurf)—and how these tools are changing the way software gets built. We discuss: • What building self-driving cars taught the Windsurf team about evaluating LLMs• How LLMs built for text are missing coding capabilities like “fill in the middle”• How Windsurf optimizes for latency• Windsurf’s culture of taking bets and learning from failure• Breakthroughs that led to Cascade (agentic capabilities)• Why the Windsurf team builds its own LLMs• How non-dev employees at Windsurf build custom SaaS apps – with Windsurf!• How Windsurf empowers engineers to focus on more interesting problems• The skills that will remain valuable as AI takes over more of the codebase• And much more!—Timestamps(00:00) Intro(01:37) How Windsurf tests new models(08:25) Windsurf’s origin story (13:03) The current size and scope of Windsurf(16:04) The missing capabilities Windsurf uncovered in LLMs when used for coding(20:40) Windsurf’s work with fine-tuning inside companies (24:00) Challenges developers face with Windsurf and similar tools as codebases scale(27:06) Windsurf’s stack and an explanation of FedRAMP compliance(29:22) How Windsurf protects latency and the problems with local data that remain unsolved(33:40) Windsurf’s processes for indexing code (37:50) How Windsurf manages data (40:00) The pros and cons of embedding databases (42:15) “The split brain situation”—how Windsurf balances present and long-term (44:10) Why Windsurf embraces failure and the learnings that come from it(46:30) Breakthroughs that fueled Cascade(48:43) The insider’s developer mode that allows Windsurf to 
dogfood easily (50:00) Windsurf’s non-developer power user who routinely builds apps in Windsurf(52:40) Which SaaS products won’t likely be replaced(56:20) How engineering processes have changed at Windsurf (1:00:01) The fatigue that goes along with being a software engineer, and how AI tools can help(1:02:58) Why Windsurf chose to fork VS Code and built a plugin for JetBrains (1:07:15) Windsurf’s language server (1:08:30) The current use of MCP and its shortcomings (1:12:50) How coding used to work in C#, and how MCP may evolve (1:14:05) Varun’s thoughts on vibe coding and the problems non-developers encounter(1:19:10) The types of engineers who will remain in demand (1:21:10) How AI will impact the future of software development jobs and the software industry(1:24:52) Rapid fire round—The Pragmatic Engineer deepdives relevant for this episode:• IDEs with GenAI features that Software Engineers love• AI tooling for Software Engineers in 2024: reality check• How AI-assisted coding will change software engineering: hard truths• AI tools for software engineers, but without the hype—See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe