Introducing "Training Data," a new podcast from Sequoia about the future of AI
Description
Crucible Moments will be back shortly with season 2. You’ll hear from the founders of YouTube, DoorDash, Reddit, and more. In the meantime, we’d love to introduce you to a new original podcast, Training Data, where Sequoia partners learn from the builders, researchers, and founders who are defining the technology wave of the future: AI. The following conversation with Harrison Chase of LangChain is all about the future of AI agents: why they’re suddenly seeing a step change in performance, and why they’re key to the promise of AI.
Follow Training Data wherever you listen to podcasts, and keep an eye out for Season 2 of Crucible Moments, coming soon.
LangChain’s Harrison Chase on Building the Orchestration Layer for AI Agents
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
Mentioned:
ReAct: Synergizing Reasoning and Acting in Language Models, the first cognitive architecture for agents
SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering, a small-model, open-source software engineering agent from researchers at Princeton
Devin, an autonomous software engineering agent from Cognition
v0, a generative UI agent from Vercel
GPT Researcher, a research agent
Language Model Cascades, a 2022 paper by David Dohan (then at Google Brain, now at OpenAI) that was influential for Harrison in developing LangChain
Transcript: https://www.sequoiacap.com/podcast/training-data-harrison-chase/