Training Data

Author: Sequoia Capital

Subscribed: 203 · Played: 3,144

Description

Join us as we train our neural nets on the theme of the century: AI. Sonya Huang, Pat Grady and more Sequoia Capital partners host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies—and their implications for technology, business and society.


The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.

64 Episodes
Thomas Wolf, co-founder and Chief Science Officer of Hugging Face, explains how his company is applying the same community-driven approach that made transformers accessible to everyone to the emerging field of robotics. Thomas discusses LeRobot, Hugging Face's ambitious project to democratize robotics through open-source tools, datasets, and affordable hardware. He shares his vision for turning millions of software developers into roboticists, the challenges of data scarcity in robotics versus language models, and why he believes we're at the same inflection point for physical AI that we were for LLMs just a few years ago. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
Ryan Daniels and John Sarihan are reimagining legal services by building Crosby, an AI-powered law firm that focuses on contract negotiations to start. Rather than building legal software, they've structured their company as an actual law firm with lawyers and AI engineers working side by side to automate human negotiations. They've eliminated billable hours in favor of per-document pricing, achieving contract turnaround times under an hour. Ryan and John explain why the law firm structure enables faster innovation cycles, how they're using AI to predict negotiation outcomes, and their vision for agents that can simulate entire contract negotiations between parties. Hosted by Josephine Chen, Sequoia Capital

Mentioned in this episode:
Data processing agreement (DPA): GDPR-mandated contract between controllers and processors. Crosby handles DPAs as part of B2B contracting.
Credence good: Economic term for services whose quality is hard to judge even after consumption. Used to explain why legal buyers value lawyers in the loop and malpractice coverage.
When the AI wave hit, n8n founder Jan Oberhauser faced a critical choice: become irrelevant or become indispensable. He chose the latter, transforming n8n from a simple workflow tool into a comprehensive AI automation platform that lets users connect any LLM to any application. The result? Four times more revenue growth in eight months than in the previous six years combined. Jan explains how n8n’s “connect everything to anything” philosophy, combined with a thriving open-source community, positioned the company to ride the AI automation wave while avoiding the vendor lock-in that plagues enterprise software. Hosted by George Robson and Pat Grady, Sequoia Capital

Mentioned in this episode:
Model Context Protocol (MCP): Open protocol that lets AI models safely use external tools and data; n8n uses it extensively for orchestration.
Vector database: A database optimized for storing and searching embeddings. These “vector stores” can pair with LLMs for retrieval-augmented workflows.
Granola: AI productivity tool mentioned by Jan as a recent favorite.
Her: A film that Jan says, “a few years ago, it was sci-fi, and it’s now suddenly this thing that is just around the corner.”
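To make the vector-store idea above concrete, here is a minimal sketch (the data and function name are invented for illustration, not n8n's implementation): documents live in the index as embedding vectors, and retrieval returns the nearest ones by cosine similarity.

```python
import numpy as np

def cosine_top_k(query: np.ndarray, index: np.ndarray, k: int = 2) -> list[int]:
    """Return indices of the k rows of `index` most similar to `query`."""
    q = query / np.linalg.norm(query)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    scores = m @ q                               # cosine similarity per document
    return [int(i) for i in np.argsort(scores)[::-1][:k]]

# Toy 3-dimensional "embeddings" for four documents.
index = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
query = np.array([1.0, 0.05, 0.0])
print(cosine_top_k(query, index))  # → [0, 1]
```

In a retrieval-augmented workflow, the texts behind the returned indices would be stuffed into the LLM prompt as context.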
Before ChatGPT made AI mainstream, Jon Noronha was building Gamma with a simple insight: everyone hates making slides but needs visual communication for high-stakes ideas. His background at Optimizely proved crucial as Gamma became a testing laboratory for AI models, running hundreds of experiments to discover that Claude excels at creative taste, Gemini wins on cost efficiency, and reasoning models actually hurt creativity. Jon explains how solving their own blank page problem inadvertently solved it for millions of users, turning a near-failing startup into a cash-flow-positive platform with 250 million presentations created. He discusses competing with PowerPoint's 500 million users while expanding beyond slides into documents, websites and visual storytelling. Hosted by Sonya Huang, Sequoia Capital
Dara Ladjevardian, founder and CEO of Delphi, is creating digital minds that allow people to scale their thoughts and availability without replacing human connection. Inspired by Ray Kurzweil’s theory of mind as a hierarchy of pattern recognizers, Dara built an adaptive temporal knowledge graph that captures how people think and reason. From helping CEOs train new hires to enabling coaches to monetize their expertise 24/7, Delphi represents a new form of conversational media. Dara explains why authentic human representation matters, how digital minds actually increase desire for real human connection, and why he believes 2026 will be the tipping point for adoption of digital minds. Hosted by Sonya Huang and Jess Lee, Sequoia Capital

Mentioned in this episode:
How to Create a Mind: 2012 book by Ray Kurzweil that inspired Dara.
The Memoirs of Akbar Ladjevardian: 2008 book about Dara’s grandfather, an Iranian industrialist, that led him to create his first “digital mind.”
Build: 2022 book by Tony Fadell that refers to itself as “a mentor in a box”; another inspiration for Dara.
The 2 Sigma Problem: 1984 paper by Benjamin Bloom showing that students who receive one-on-one tutoring perform two standard deviations better than students educated in a classroom environment.
Vercel CEO Guillermo Rauch has spent years obsessing over reducing the friction between having an idea and getting it online. Now with AI, he's achieving something even more ambitious: making software creation accessible to anyone with a keyboard. Guillermo explains how v0 has grown to 3 million users by focusing on reliability and quality, why ChatGPT has become their fastest-growing customer acquisition channel, and how AI is enabling “virtual coworkers” across design, development, and marketing. He shares his contrarian view that the future belongs to ephemeral, generated-on-demand applications rather than traditional installed software, and why he believes we're on the cusp of the biggest transformation to the web in its history. Hosted by Sonya Huang and Pat Grady, Sequoia Capital
In just two months, a scrappy three-person team at OpenAI sprinted to achieve what the entire AI field has been chasing for years: gold-level performance on International Mathematical Olympiad problems. Alex Wei, Sheryl Hsu and Noam Brown discuss their unique approach using general-purpose reinforcement learning techniques on hard-to-verify tasks rather than formal verification tools. The model showed surprising self-awareness by admitting it couldn’t solve problem six, and revealed the humbling gap between solving competition problems and genuine mathematical research breakthroughs. Hosted by Sonya Huang, Sequoia Capital
Isa Fulford, Casey Chu, and Edward Sun from OpenAI's ChatGPT agent team reveal how they combined Deep Research and Operator into a single, powerful AI agent that can perform complex, multi-step tasks lasting up to an hour. By giving the model access to a virtual computer with text browsing, visual browsing, terminal access, and API integrations—all with shared state—they've created what may be the first truly embodied AI assistant. The team discusses their reinforcement learning approach, safety mitigations for real-world actions, and how small teams can build transformative AI products through close research-applied collaboration. Hosted by Sonya Huang and Lauren Reeder, Sequoia Capital
Pushmeet Kohli leads AI for Science at DeepMind, where his team has created AlphaEvolve, an AI system that discovers entirely new algorithms and proves mathematical results that have eluded researchers for decades. From improving 50-year-old matrix multiplication algorithms to generating interpretable code for complex problems like data center scheduling, AlphaEvolve represents a new paradigm where LLMs coupled with evolutionary search can outperform human experts. Pushmeet explains the technical architecture behind these breakthroughs and shares insights from collaborations with mathematicians like Terence Tao, while discussing how AI is accelerating scientific discovery across domains from chip design to materials science. Hosted by Sonya Huang and Pat Grady, Sequoia Capital
Eric Ho is building Goodfire to solve one of AI’s most critical challenges: understanding what’s actually happening inside neural networks. His team is developing techniques to understand, audit and edit neural networks at the feature level. Eric discusses breakthrough results in resolving superposition through sparse autoencoders, successful model-editing demonstrations and real-world applications in genomics with Arc Institute’s DNA foundation models. He argues that interpretability will be critical as AI systems become more powerful and take on mission-critical roles in society. Hosted by Sonya Huang and Roelof Botha, Sequoia Capital

Mentioned in this episode:
Mech interp: Mechanistic interpretability; list of important papers here.
Phineas Gage: 19th-century railway worker who lost most of his brain’s left frontal lobe in an accident and became a famous case study in neuroscience.
Human Genome Project: Effort from 1990 to 2003 to generate the first sequence of the human genome, which accelerated the study of human biology.
Emergent Misalignment: Paper showing that narrow finetuning can produce broadly misaligned LLMs.
Zoom In: An Introduction to Circuits: First important mechanistic interpretability paper, from OpenAI in 2020.
Superposition: Concept borrowed from physics; in interpretability, it allows neural networks to simulate larger networks (e.g. represent more concepts than they have neurons).
Apollo Research: AI safety company that designs AI model evaluations and conducts interpretability research.
Towards Monosemanticity: Decomposing Language Models With Dictionary Learning: 2023 Anthropic paper that uses a sparse autoencoder to extract interpretable features; followed by Scaling Monosemanticity.
Under the Hood of a Reasoning Model: 2025 Goodfire paper that interprets DeepSeek’s reasoning model R1.
Auto-interpretability: The ability to use LLMs to automatically write explanations for the behavior of neurons in LLMs.
Interpreting Evo 2: Arc Institute’s next-generation genomic foundation model (see episode with Arc co-founder Patrick Hsu).
Paint with Ember: Canvas interface from Goodfire that lets you steer an LLM’s visual output in real time (paper here).
Model diffing: Interpreting how a model differs from checkpoint to checkpoint during finetuning.
Feature steering: The ability to change the style of LLM output by up- or down-weighting features (e.g. talking like a pirate vs. factual information about the Andromeda Galaxy).
Weight-based interpretability: Method for directly decomposing neural network parameters into mechanistic components, instead of using features.
The Urgency of Interpretability: Essay by Anthropic founder Dario Amodei.
On the Biology of a Large Language Model: Goodfire collaboration with Anthropic.
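As a rough illustration of the sparse-autoencoder idea mentioned above (all sizes and weights here are invented; real SAEs are trained on a model's activations), an activation vector is encoded into a larger, mostly zero feature vector through a ReLU, so each activation is explained by a handful of active dictionary features:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 4, 16                       # activation dim, number of dictionary features

W_enc = rng.normal(size=(m, d))    # encoder weights (would be learned)
b_enc = -0.5 * np.ones(m)          # negative bias pushes most features to zero
W_dec = rng.normal(size=(d, m))    # decoder dictionary (would be learned)

def encode(x):
    # ReLU keeps only features whose pre-activation clears the bias: sparse
    return np.maximum(0.0, W_enc @ x + b_enc)

def decode(f):
    # Reconstruction is a sparse linear combination of dictionary directions
    return W_dec @ f

x = rng.normal(size=d)             # stand-in for an activation from some layer
f = encode(x)
print("active features:", int((f > 0).sum()), "of", m)
print("reconstruction:", decode(f))
```

Training minimizes reconstruction error plus a sparsity penalty on `f`; the hope is that each surviving feature corresponds to one human-interpretable concept, undoing superposition.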
Mati Staniszewski, co-founder and CEO of ElevenLabs, explains how staying laser-focused on audio innovation has allowed his company to thrive despite the push into multimodality from foundation models. From a high school friendship in Poland to building one of the fastest-growing AI companies, Mati shares how ElevenLabs transformed text-to-speech with contextual understanding and emotional delivery. He discusses the company's viral moments (from Harry Potter by Balenciaga to powering Darth Vader in Fortnite), and explains how ElevenLabs is creating the infrastructure for voice agents and real-time translation that could eliminate language barriers worldwide. Hosted by: Pat Grady, Sequoia Capital

Mentioned in this episode:
Attention Is All You Need: The original Transformers paper.
Tortoise-tts: Open-source text-to-speech model that was a starting point for ElevenLabs (which now maintains a v2).
Harry Potter by Balenciaga: ElevenLabs’ first big viral moment, from 2023.
The first AI that can laugh: 2022 blog post backing up ElevenLabs’ claim of laughter (it got better in v3).
Darth Vader’s voice in Fortnite: ElevenLabs used actual voice clips provided by James Earl Jones before his death.
Lex Fridman interviews Prime Minister Modi: ElevenLabs enabled Fridman to speak in Hindi and Modi to speak in English.
Time Person of the Year 2024: ElevenLabs-powered experiment with “conversational journalism.”
Iconic Voices: Richard Feynman, Deepak Chopra, Maya Angelou and more, available in the ElevenLabs reader app.
SIP trunking: A method of delivering voice, video, and other unified communications over the internet using the Session Initiation Protocol (SIP).
Genesys: Leading enterprise CX platform for agentic AI.
Hitchhiker’s Guide to the Galaxy: Comedy/science-fiction series by Douglas Adams that contains the concept of the Babel Fish instantaneous translator, cited by Mati.
FYI: Communication and productivity app for creatives that Mati uses, founded by will.i.am.
Lovable: Prototyping app that Mati loves.
Anish Agarwal and Raj Agrawal, co-founders of Traversal, are transforming how enterprises handle critical system failures. Their AI agents can perform root cause analysis in 2-4 minutes instead of the hours typically spent by teams of engineers scrambling in Slack channels. Drawing from their academic research in causal inference and gene regulatory networks, they’ve built agents that systematically traverse complex dependency maps to identify the smoking-gun logs and problematic code changes. As AI-generated code becomes more prevalent, Traversal addresses a growing challenge: debugging systems where humans didn’t write the original code, making AI-powered troubleshooting essential for maintaining reliable software at scale. Hosted by Sonya Huang and Bogomil Balkansky, Sequoia Capital

Mentioned in this episode:
SRE: Site reliability engineering. The function within engineering teams that monitors and improves the availability and performance of software systems and services.
Golden signals: Four key metrics used by site reliability engineers to monitor the health and performance of IT systems: latency, traffic, errors and saturation.
MELT data: Metrics, events, logs, and traces. A framework for observability.
The Bitter Lesson: Another mention of Turing Award winner Rich Sutton’s influential post.
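As a toy illustration of the four golden signals defined above (the field names and numbers are invented for illustration, not Traversal's data model), each signal can be computed from a batch of request records plus a host metric for saturation:

```python
# Hypothetical batch of request records from one service over a 60s window.
requests = [
    {"latency_ms": 120, "status": 200},
    {"latency_ms": 95,  "status": 200},
    {"latency_ms": 430, "status": 500},
    {"latency_ms": 88,  "status": 200},
]
window_s = 60           # observation window
cpu_utilization = 0.72  # saturation proxy, e.g. from a host-level metric

traffic = len(requests) / window_s                       # requests per second
errors = sum(r["status"] >= 500 for r in requests) / len(requests)
p50 = sorted(r["latency_ms"] for r in requests)[len(requests) // 2]

print(f"traffic: {traffic:.2f} req/s")        # 0.07 req/s
print(f"error rate: {errors:.0%}")            # 25%
print(f"p50 latency: {p50} ms")               # 120 ms
print(f"saturation (CPU): {cpu_utilization:.0%}")
```

Real SRE stacks derive these continuously from metrics and tracing pipelines rather than from an in-memory list, but the four quantities are the same ones an agent doing root cause analysis would inspect first.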
As OpenAI's former Head of Research, Bob McGrew witnessed the company's evolution from GPT-3’s breakthrough to today's reasoning models. He argues that there are three legs of the stool for AGI—Transformers, scaled pre-training, and reasoning—and that the fundamentals that will shape the next decade-plus are already in place. He thinks 2025 will be defined by reasoning while pre-training hits diminishing returns. Bob discusses why the agent economy will price services at compute costs due to near-infinite supply, fundamentally disrupting industries like law and medicine, and how his children use ChatGPT to spark curiosity and agency. From robotics breakthroughs to managing brilliant researchers, Bob offers a unique perspective on AI’s trajectory and where startups can still find defensible opportunities. Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital

Mentioned in this episode:
Solving Rubik’s Cube with a robot hand: OpenAI’s original robotics research.
Computer Use and Operator: Anthropic and OpenAI reasoning breakthroughs that originated with OpenAI researchers.
Skild and Physical Intelligence: Robotics-oriented companies Bob sees as well-positioned now.
Distyl: AI company founded by ex-Palantir alums to create enterprise workflows driven by proprietary data.
Member of the technical staff: Title at OpenAI designed to break down barriers between AI researchers and engineers.
Howie.ai: Scheduling app that Bob uses.
Hanson Wang and Alexander Embiricos from OpenAI's Codex team discuss their latest AI coding agent that works independently in its own environment for up to 30 minutes, generating full pull requests from simple task descriptions. They explain how they trained the model beyond competitive programming to match real-world software engineering needs, the shift from pairing with AI to delegating to autonomous agents, and their vision for a future where the majority of code is written by agents working on their own computers. The conversation covers the technical challenges of long-running inference, the importance of creating realistic training environments, and how developers are already using Codex to fix bugs and implement features at OpenAI. Hosted by Sonya Huang and Lauren Reeder, Sequoia Capital

Mentioned in this episode:
The Culture: Sci-fi series by Iain Banks portraying an optimistic view of AI.
The Bitter Lesson: Influential essay by Rich Sutton on the importance of scale as a strategic unlock for AI.
Fresh off impressive releases at Google’s I/O event, three Google Labs leaders explain how they’re reimagining creative tools and productivity workflows. Thomas Iljic details how video generation is merging filmmaking with gaming through generative AI cameras and world-building interfaces in Whisk and Veo. Jaclyn Konzelmann demonstrates how Project Mariner evolved from a disruptive browser takeover to an intelligent background assistant that remembers context across multiple tasks. Simon Tokumine reveals NotebookLM’s expansion beyond viral audio overviews into a comprehensive platform for transforming information into personalized formats. The conversation explores the shift from prompting to showing and telling, the economics of AI-powered e-commerce, and why being “too early” has become Google Labs’ biggest challenge and advantage. Hosted by Sonya Huang, Sequoia Capital

00:00 Introduction
02:12 Google's AI models and public perception
04:18 Google's history in image and video generation
06:45 Where Whisk and Flow fit
10:30 How close are we to having the ideal tool for the craft?
13:05 Where do the movie and game worlds start to merge?
16:25 Introduction to Project Mariner
17:15 How Mariner works
22:34 Mariner user behaviors
27:07 Temporary tattoos and URL memory
27:53 Project Mariner's future
29:26 Agent capabilities and use cases
31:09 E-commerce and agent interaction
35:03 NotebookLM evolution
48:26 Predictions and future of AI

Mentioned in this episode:
Whisk: Image and video generation app for consumers.
Flow: AI-powered filmmaking with the new Veo 3 model.
Project Mariner: Research prototype exploring the future of human-agent interaction, starting with browsers.
NotebookLM: Tool for understanding and engaging with complex information, including Audio Overviews and now a mobile app.
Shop with AI Mode: Shopping app with a virtual try-on tool based on your own photos.
Stitch: New prompt-based interface to design UI for mobile and web applications.
ControlNet paper: Outlined an architecture for adding conditional control to direct the outputs of image generation with diffusion models.
Former Airbus CTO Paul Eremenko shares his vision for bringing AI to physical engineering, starting with Archie—an AI agent that works alongside human engineers. P-1 AI is tackling the challenge of generating synthetic training data to teach AI systems about complex physical systems, from data center cooling to aircraft design and beyond. Eremenko explains how Archie breaks down engineering tasks into primitive operations and uses a federated approach combining multiple AI models. The goal is to progress from entry-level engineering capabilities to eventually achieving engineering AGI that can design things humans cannot. Hosted by Sonya Huang and Pat Grady, Sequoia Capital
Gong CEO Amit Bendov shares how the company evolved from a meeting transcription tool to an AI-powered revenue platform that's increasing sales capacity by up to 60%. He explains why task-specific AI agents are the key to enterprise adoption, and why human accountability will remain crucial even as AI takes over routine sales tasks. Amit also reveals how Gong survived recent market headwinds by expanding their product suite while maintaining their customer-first approach. Hosted by Sonya Huang and Pat Grady, Sequoia Capital

Mentioned in this episode:
“New paradigm of AI architectures”: Yann LeCun’s talk at Davos, where he discusses the world beyond transformers and LLMs in the next 3-5 years.
The Beginning of Infinity: Book by David Deutsch that Amit says is “mind-changing.”
At AI Ascent 2025, Jeff Dean makes bold predictions. Discover how the pioneer behind Google's TPUs and foundational AI research sees the technology evolving, from specialized hardware to more organic systems, and future engineering capabilities.
Recorded live at Sequoia’s AI Ascent 2025: LangChain CEO Harrison Chase introduces the concept of ambient agents, AI systems that operate continuously in the background responding to events rather than direct human prompts. Learn how these agents differ from traditional chatbots, why human oversight remains essential and how this approach could dramatically scale our ability to leverage AI.
Recorded live at Sequoia’s AI Ascent 2025: Sierra co-founder Bret Taylor discusses why AI is driving a fundamental shift from subscription-based pricing to outcomes-based models. Learn why this transition is harder for incumbents than startups, why applied AI and vertical specialization represent the biggest opportunities for entrepreneurs and how to position your AI company for success in this new paradigm.