High Agency: The Podcast for AI Builders
Author: Raza Habib
© 2025 Raza Habib
Description
High Agency is the podcast for AI builders. If you're trying to understand how to successfully build AI products with Large Language Models and Generative AI, then this podcast is made for you. Each week we interview leaders at companies building on the frontier who have already succeeded with AI in production. We share their stories, lessons and playbooks so you can build more quickly and with confidence.
AI is moving incredibly fast and no one is truly an expert yet. High Agency is for people who are learning by doing and who will share knowledge with the community.
Where to find us: https://hubs.ly/Q02z2HR40
31 Episodes
In this episode of High Agency, Patrick Leung from Faro Health explains how they're using AI to revolutionize clinical trial design by both generating regulatory documents and extracting insights from thousands of existing trials. Patrick emphasises the essential collaboration between clinical experts and AI engineers when building reliable systems in healthcare's high-stakes environment.
Chapters:
00:00 - Introduction
04:26 - Clinical trials before: Microsoft Word Documents
08:17 - Document generation using AI
12:26 - What makes clinical trials so expensive
16:26 - Parsing and processing clinical trial data
18:04 - Challenges with traditional evaluation metrics
21:28 - Importance of domain experts in the evaluation process
24:35 - Collaboration between domain experts and engineering
31:26 - Building a graph-based knowledge system
34:27 - Roles and skillsets required
38:06 - Lessons learned building LLM products
40:56 - Discussion on AI capabilities and limitations
46:07 - Is AI overhyped or underhyped
----------------------------------------
Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com
In this episode, Raza is joined by Shahriar Tajbakhsh, the co-founder of Metaview. They discuss how Metaview's AI scribe automates interview note-taking, how AI agents can surface top candidates from thousands of resumes, and why hiring managers should think of AI as a co-worker, not just a tool. Raza's recommended reading: Creating a LLM-as-a-Judge That Drives Business Results.
Chapters:
00:00 - Introduction
03:32 - How AI Co-Workers Are Transforming Recruiting
06:21 - Inside Metaview: AI Scribe and Workflow Automation
09:11 - Unlocking Hiring Insights with AI-Driven Conversations
11:30 - Balancing AI Innovation and User Adoption
14:05 - Metaview's Tech Stack and the Role of LLMs
18:29 - How Metaview Generates Superhuman Interview Notes
23:18 - The Challenges of Building Reliable AI Hiring Agents
32:40 - The Future of AI in Hiring: Automating Job Descriptions
40:26 - AI Co-Workers That Work While You Sleep
47:08 - Why Vertical AI Will Win Over General AI Agents
50:24 - The Underrated Power of Graph-Based AI
----------------------------------------
Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com
In this episode, Raza Habib chats with Zach Lloyd, CEO and founder of Warp, about how AI is transforming the developer experience. They explore how Warp is reimagining the command line, the power of AI-driven automation, and what the future holds for coding workflows.
Chapters:
00:00 - Introduction
04:06 - Why the terminal needed reinvention
07:11 - AI's role in Warp's evolution
08:55 - Key AI features in Warp
12:49 - Balancing safety, reliability, and usability
19:43 - Challenges in AI-powered development
22:33 - Changing developer behavior with AI
27:24 - Prompt engineering and context optimization
31:05 - Lessons for building AI products
37:50 - The future of AI in software development
46:42 - Underappreciated AI innovations
----------------------------------------
Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com
Will Bryk, CEO of Exa, sits down with Raza Habib to reveal why traditional search engines are becoming obsolete and how his startup is building an AI-powered search engine for the future. From constructing a massive GPU cluster to predicting AI will surpass human mathematicians by 2026, Will shares fascinating insights about the technological breakthroughs that will reshape society in the coming months.
Chapters:
00:00 - Introduction
05:13 - Exa as a Tool for LLMs and Neural Search
06:19 - Introducing "Websets" and Its Use Cases
10:16 - Building a Compute Cluster: Why Own vs. Rent?
12:00 - The Bitter Lesson and Scalability in AI
17:11 - Interesting Use Cases for Exa
19:44 - People Search and CRM Opportunities
21:10 - Predictions for AI Progress and Test-Time Compute
27:10 - Implications of AI on Creative Tasks and Society
29:15 - Automation, Jobs, and the Knowledge Economy
33:57 - What Could Stop AI Progress?
36:22 - Advice for AI Builders and Entrepreneurs
----------------------------------------
Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com
In this episode, Jesse Zhang joins Raza to discuss building cutting-edge AI agents for customer support. They explore how his early passion for LLMs led to creating a company that's transforming the way businesses like Rippling, Duolingo, and Webflow interact with customers. Jesse breaks down the challenges of scaling AI systems, the importance of customer feedback, and his predictions for the future of AI.
Chapters:
00:00 - Introduction and Jesse Zhang's Background
01:17 - First Exposure to LLMs and Building Early Projects
04:32 - Decagon's Rapid Growth and Differentiation in AI
06:37 - Understanding Decagon's AI Customer Support Product
10:21 - Challenges in Building High-Performance AI Systems
13:14 - Evolution from Simple RAG to Agent Architectures
16:54 - Measuring Accuracy with Evals and Customer Feedback
19:05 - Balancing Customization and Reusability Across Clients
22:35 - Handling Customer Data and Incremental Deployment
25:21 - Restructuring Support Teams for AI Integration
27:03 - Team Composition and the Role of Domain Expertise
29:19 - Advice for New AI Builders: Customer-Driven Development
32:21 - Key Insights on AI Agents and Enterprise Adoption
36:34 - Predictions for AI Advancements in 2025
39:41 - Is AI Overhyped or Underhyped?
41:07 - Closing Remarks and Final Thoughts
----------------------------------------
Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com
On this week's episode, former GitHub Copilot lead Ryan Salva breaks down how AI coding tools became ubiquitous almost overnight. He and Raza discuss the critical differences between what novice and expert developers expect from AI, why starting with predictive text was both a blessing and a curse, and how the rapid adoption of AI assistance is reshaping the future of software development.
Chapters:
00:00 - Introduction
01:09 - The Creation of GitHub Copilot
05:39 - From Prototype to Product: Challenges in Scaling
07:37 - How GitHub Copilot Works Behind the Scenes
11:18 - Metrics That Matter: Evaluating AI Success
14:43 - Building Momentum: What It Feels Like to Launch a Hit
17:51 - The Evolution of AI Tools for Developers
21:13 - Evaluations and Testing in AI Development
26:00 - The Role of Automation and the Future of Coding
30:53 - Will Engineers Still Write Code in the Future?
33:16 - Advice for Aspiring AI Builders
36:51 - Is AI Overhyped or Underhyped?
38:17 - Closing Reflections
----------------------------------------
Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com
In this week's episode, Raza speaks with James Theuerkauf, CEO of Syrup Tech, and Sara Ittelson, Partner at Accel, to explore the challenges and opportunities for entrepreneurs in this transformative era. They discuss building AI-first companies and the lessons learned from scaling in a rapidly evolving space. With practical tips on leveraging data, creating competitive advantages, and sustaining passion for the long haul, this episode offers invaluable guidance for founders in AI.
Chapters:
00:00 - Introduction and Guest Backgrounds
01:27 - Syrup Tech's Approach to AI in Retail
03:29 - The Role of AI in Demand Forecasting
08:49 - Building Effective AI Systems and Teams
15:30 - How Generative AI is Shaping Businesses
19:18 - Advice for Founders in the AI Era
28:15 - Building an AI-First Company
33:26 - Innovations and Trends in AI
38:47 - Is AI Overhyped or Underhyped?
42:46 - Closing Thoughts and Reflections
----------------------------------------
Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com
In this episode, Noam Rubin, a Software Developer at Vanta, reveals how his team uses data-driven strategies to design, test, and improve cutting-edge AI features. Learn how customer insights, rapid prototyping, and iterative development transform raw ideas into tools that make compliance and security easier for businesses everywhere.
Chapters:
00:00 - Introduction
02:47 - The process of building AI products at Vanta
04:51 - The role of customer feedback in product development
06:59 - Integrating AI into security and compliance workflows
08:06 - Using data specifications to guide product development
10:10 - Collaborating with subject matter experts to refine AI models
12:14 - Iterative testing and refining AI features
14:10 - Quality control and ensuring AI accuracy
16:00 - The importance of dogfooding and internal feedback loops
18:23 - Scaling AI features and rolling them out to wider audiences
20:50 - Educating engineers and democratizing AI at Vanta
22:20 - Key lessons learned from building AI products
24:12 - Maintaining AI quality through continuous feedback
26:00 - The future of AI in business and product development
In this episode of High Agency, former OpenAI researcher Stan Polu shares his journey from AI research to founding Dust, an enterprise AI platform. Stan offers a contrarian view on the future of AI, suggesting we may be hitting a plateau in model capabilities since GPT-4. He discusses why startups should focus on product-market fit before investing in GPUs, shares practical lessons for building AI products, and predicts increased competition between AI labs and API developers.
Chapters:
00:00 - Introducing Dust: an enterprise AI platform
06:07 - From Stripe to OpenAI: Stan's journey
10:29 - Why research wasn't enough: building Dust
15:10 - Best practices for building an AI product
20:50 - Is prompt engineering here to stay
23:40 - Understanding language models and their limitations
32:56 - Predictions for AI in 2025
39:53 - Measuring progress toward AGI
42:26 - The true value of AI technology
----------------------------------------
Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com
In this episode, we explore how Replicate is breaking down barriers in AI development through its open-source platform. CEO Ben Firshman shares how Replicate enables developers without machine learning expertise to run AI models in the cloud.
Chapters:
00:00 - Introduction
00:29 - Overview of Replicate
03:13 - Replicate's user base
05:45 - Enterprise use cases and lowering the AI barrier
07:45 - The complexity of traditional AI deployment
10:24 - Simplifying AI with Replicate's API
13:50 - ControlNets and the challenges of image models
19:42 - Fragmentation in AI models: images vs. language
25:05 - Customization and multi-model pipelines in production
26:33 - Learning by doing: skills for AI engineers
28:44 - Applying AI in governments
31:12 - Iterative development and co-evolution of AI specs
33:13 - Final reflections on AI hype
35:18 - Conclusion
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
How do you build AI tools that actually meet users' needs? In this episode of High Agency, Raza speaks with Lorilyn McCue, the driving force behind Superhuman's AI-powered features. Lorilyn lays out the principles that guide her team's work, from continuous learning to prioritizing user feedback. Learn how Superhuman's "learning-first" approach allows them to fine-tune features like Ask AI and AI-driven summaries, creating practical solutions for today's professionals.
Chapters:
00:00 - Introduction
04:20 - Overview of Superhuman
06:50 - Instant Reply and Ask AI
10:00 - Building On-Demand vs. Always-On AI Features
13:45 - Prompt Engineering for Effective Summarization
22:35 - The Importance of Seamless AI Integration in User Workflows
25:10 - Developing Advanced Email Search with Contextual Reasoning
29:45 - Leveraging User Feedback
32:15 - Balancing Customization and Scalability in AI-Generated Emails
36:05 - Approach to Prioritization
39:30 - Real-World Use Cases: The Versatility of Current AI Capabilities
43:15 - Learning and Staying Updated in the Rapidly Evolving AI Field
46:00 - Is AI Overhyped or Underhyped?
49:20 - Final Thoughts and Closing Remarks
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
This week on High Agency, Raza Habib is joined by Chroma founder Jeff Huber. They cover the evolution of vector databases in AI engineering and challenge common assumptions about RAG. Jeff shares lessons from Chroma's journey, including the team's focus on developer experience and their observations about real-world usage patterns. They also get into whether we can expect a super AI any time soon and what is over- and under-hyped in the industry today.
Chapters:
00:00 - Introduction
02:30 - Why vector databases matter for AI
06:00 - Understanding embeddings and similarity search
12:00 - Chroma early days
15:45 - Problems with existing vector database solutions
19:30 - Workload patterns in AI applications
23:40 - Real-world use cases and search applications
27:15 - The problem with RAG terminology
31:45 - Dynamic retrieval and model interactions
35:30 - Email processing and instruction management
39:15 - Context windows vs vector databases
42:30 - Enterprise adoption and production systems
45:45 - The journey from GPT-3 to production AI
48:15 - Internal vs customer-facing applications
51:00 - Advice for AI engineers
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
In this episode of the High Agency podcast, Peter Gostev shares his experiences implementing LLMs at NatWest and Moonpig. He discusses creating an AI strategy, talks about challenges in deploying LLMs in large organizations, and shares thoughts on underappreciated AI developments.
Chapters:
00:00 - Introduction
00:44 - OpenAI dev day reactions
03:47 - Using AI to automate customer service
10:43 - Impact of AI products
13:41 - Who are the users of LLMs
14:47 - Challenges building with AI in a large enterprise
21:22 - AI use cases at Moonpig
24:34 - How to create an AI strategy
28:10 - Underappreciated AI developments
----------------------------------------
Humanloop is an LLM evals platform for enterprises. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
In this episode of High Agency, we're joined by Surojit Chatterjee, former CPO of Coinbase and now CEO of Ema. Surojit unveils his audacious plan to create universal AI employees and revolutionize the Fortune 1000 workforce. Drawing from his career at tech giants like Google and Coinbase, he shares how these experiences fueled his vision for Ema. Surojit dives into the challenges of building AI agents, explores the concept of artificial humans, and predicts how this technology could transform the future of SaaS.
Chapters:
(00:00:00) Introduction and Surojit's background
(00:03:00) Founding story of Ema (Universal AI Employee)
(00:04:53) How the Universal AI Employee works
(00:08:39) Ema's data integration and security
(00:12:57) AI employee use cases in enterprises
(00:15:02) Challenges with building AI agents
(00:16:45) Evaluations, hallucinations, customizing models
(00:19:52) Artificial human metaphor
(00:25:42) AI employee vs humans
(00:31:25) Advice for AI builders
(00:37:14) Is AI overhyped or underhyped?
(00:39:28) How the business model of SaaS will change
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
Hamel Husain is a seasoned AI consultant and engineer with experience at companies like GitHub, DataRobot, and Airbnb. He is a trailblazer in AI development, known for his innovative work in literate programming and AI-assisted development tools. Shawn Wang (aka Swyx) is the host of the Latent Space podcast, the author of the essay 'Rise of the AI Engineer,' and the founder of the AI Engineer World Fair. In this episode, Hamel and Swyx share their unique insights on building effective AI products, the critical importance of evaluations, and their vision for the future of AI engineering.
Chapters:
00:00 - Introduction and recent AI advancements
06:14 - The critical role of evals in AI product development
15:33 - Common pitfalls in AI product development
26:33 - Literate programming: A new paradigm for AI development
39:58 - Answer AI and innovative approaches to software development
51:56 - Integrating AI with literate programming environments
58:47 - The importance of understanding AI prompts
01:00:37 - Assessing the current state of AI adoption
01:07:10 - Challenges in evaluating AI models
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
Raz Nussbaum is a Senior Product Manager in AI at Gong, the leading AI platform for revenue teams. He is an absolute legend when it comes to building and scaling AI products that genuinely deliver value. In this episode, he opens up about what it takes to build successful AI products in an era where things change at lightning speed.
Chapters:
00:00 - Introduction
01:16 - How LLMs Changed Product Development at Gong AI
08:32 - Including Product Managers in Development Process
13:05 - Testing and Monitoring Pre vs Post-deployment
17:53 - New Challenges in the Face of Generative AI
19:39 - Shipping Fast and Interacting with the Market
23:25 - What's Next For Gong AI
25:13 - The Psychology of Trusting AI
28:19 - Is AI Overhyped or Underhyped?
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
In this episode, we dive deep into the world of AI-assisted creative writing with James Yu, founder of Sudowrite. James shares the journey of building an AI assistant for novelists, helping writers develop ideas, manage complex storylines, and avoid clichés. James gets into the backlash the company faced when they first released Story Engine and how they're working to build a community of users.
Chapters:
00:00 - Introduction and Background of Sudowrite
02:26 - The Early Days: Concept, Skepticism, and User Adoption
05:20 - Sudowrite's Interface, Features, and User Base
10:23 - Developing and Iterating Features in Sudowrite
17:29 - The Evolution of Story Bible and Writing Assistance
24:27 - Challenges in Maintaining Coherence and AI-Assisted Writing
29:12 - Evaluating AI Features and the Role of Prompt Engineering
33:35 - Handling Tropes, Clichés, and Fine-Tuning for Author Voice
40:43 - The Controversy and Future of AI in Creative Work
51:37 - Predictions for AI in the Next Five Years
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
In this episode, LiveKit CEO Russ d'Sa explores the critical role of real-time communication infrastructure in the AI revolution. From building voice demos to powering OpenAI's ChatGPT, he shares insights on technical challenges around building multimodal AI on the web and what new possibilities are opening up.
Chapters:
00:00 - Introduction and Background
01:34 - The Evolution of AI and Lessons for Founders
05:20 - Timelines and Technological Progress
10:32 - Overview of LiveKit and Its Impact on AI Development
13:39 - Why LiveKit Matters for AI Developers
19:08 - Partnership with OpenAI
21:25 - Challenges in Streaming and Real-Time Data Transmission
30:07 - Building a global network for AI communication
37:21 - Real-world applications of LiveKit in AI systems
40:55 - Future of AI and the Concept of Abundance
43:38 - The Irony of Wealth in an Age of AI
I hope you enjoy the conversation and if you do, please subscribe!
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
This week we're talking to Lin Qiao, former PyTorch lead at Meta and current CEO of Fireworks AI. We discuss the evolution of AI frameworks, the challenges of optimizing inference for generative AI, the future of AI hardware, and open-source models. Lin shares insights on PyTorch design philosophy, how to achieve low latency, and the potential for AI to become as ubiquitous as electricity in our daily lives.
Chapters:
00:00 - Introduction and PyTorch Background
04:28 - PyTorch's Success and Design Philosophy
08:20 - Lessons from PyTorch and Transition to Fireworks AI
14:52 - Challenges in Gen AI Application Development
22:03 - Fireworks AI's Approach
24:24 - Technical Deep Dive: How to Achieve Low Latency
29:32 - Hardware Competition and Future Outlook
31:21 - Open Source vs. Proprietary Models
37:54 - Future of AI and Conclusion
I hope you enjoy the conversation and if you do, please subscribe!
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
In this episode of High Agency, we are speaking to Paras Jain, the CEO of AI video generation startup Genmo. Paras shares insights from his experience working on autonomous vehicles, why he chose academia over an offer from Tesla, and the research-minded approach that has led to Genmo's rapid success.
Chapters:
(00:00) Introduction
(01:52) Lessons from selling an AI company to Tesla
(07:01) Working within GPU constraints and transformer architecture
(11:18) Moving from research to startup success
(14:36) Leading the video generation industry
(16:05) Training diffusion models for videos
(19:36) Evaluating AI video generation
(24:06) Scaling laws and data architecture
(28:34) Issues with scaling diffusion models
(33:09) Business use cases for video generation models
(36:43) Potential and limitations of video generation
(40:59) Ethical training of video models
In this week's episode of the High Agency podcast, Humanloop co-founder and CEO Raza Habib sat down with Eddie Kim, co-founder and Head of Technology at Gusto, and guest host Ali Rowghani to discuss how Gusto has applied AI to revolutionize ops-heavy processes like payroll and HR admin. Eddie also explains why Gusto is choosing to build, rather than buy, the majority of its GenAI tech stack.
Chapters:
00:00 - Introduction and Background
02:15 - Overview of Gusto's Business
05:59 - Operational Complexity and AI Opportunities
08:51 - Build vs. Buy: Internal vs. External AI Tools
10:07 - Prioritizing AI Use Cases
13:53 - Human-in-the-Loop Approach
19:39 - Centralized AI Team and Approach
22:53 - Measuring ROI from AI Initiatives
32:25 - AI-Powered Reporting Feature
38:46 - Code Generation and Developer Tools
42:52 - Impact of AI on Companies and Society
47:22 - AI Safety and Risks
49:54 - Closing Thoughts
I hope you enjoy the conversation and if you do, please subscribe!
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com/podcast
In this episode, we sit down with Michael Royzen, CEO and co-founder of Phind. Michael shares insights from his journey in building the first LLM-based search engine for developers, the challenges of creating reliable AI models, and his vision for how AI will transform the work of developers in the near future. Tune in to discover the groundbreaking advancements and practical implications of AI technology in coding and beyond.
I hope you enjoy the conversation and if you do, please subscribe!
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
Jason Liu is a true Renaissance man in the world of AI. He began his career working on traditional ML recommender systems at tech giants like Meta and Stitch Fix, and quickly pivoted into LLM app development when ChatGPT opened its API in 2022. As the creator of Instructor, a Python library that structures LLM outputs for RAG applications, Jason has made significant contributions to the AI community. Today, Jason is a sought-after speaker, course creator, and Fortune 500 advisor. In this episode, we cut through the AI hype to explore effective strategies for building valuable AI products and discuss the future of AI across industries.
Chapters:
00:00 - Introduction and Background
08:55 - The Role of Iterative Development and Metrics
10:43 - The Importance of Hyperparameters and Experimentation
18:22 - Introducing Instructor: Ensuring Structured Outputs
20:26 - Use Cases for Instructor: Reports, Memos, and More
28:13 - Automating Research, Due Diligence, and Decision-Making
31:12 - Challenges and Limitations of Language Models
32:50 - Aligning Evaluation Metrics with Business Outcomes
35:09 - Improving Recommendation Systems and Search Algorithms
46:05 - The Future of AI and the Role of Engineers and Product Leaders
51:45 - The Raptor Paper: Organizing and Summarizing Text Chunks
I hope you enjoy the conversation and if you do, please subscribe!
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
If you need to understand the future trajectory of AI, Logan Kilpatrick will help you do just that, having seen the frontier at both OpenAI and Google. Logan led developer relations at OpenAI before leading product on Google AI Studio. He's been closer than anyone to developers building with LLMs and has seen behind the curtain at two frontier labs.
Logan and I talked about:
🔸 What it was like joining OpenAI the day ChatGPT hit 1 million users
🔸 What you might expect from GPT-5
🔸 Google's latest innovations and the battle with OpenAI
🔸 How you can stay ahead and achieve real ROI
🔸 Logan's insights into the form factor of AI and what will replace chatbots
Chapters:
00:00 - Introduction
01:50 - OpenAI and the Release of ChatGPT
07:43 - Characteristics of Successful AI Products and Teams
10:00 - The Rate of Change in AI
12:22 - The Future of AI and the Role of Systems
13:47 - ROI in AI and Challenges with Cost
18:07 - Advice for Builders and the Potential of Fine-Tuning
20:52 - The Role of Prompt Engineering in AI Development
25:27 - The Current State of Gemini
34:07 - Future Form Factors of AI
39:34 - Challenges and Opportunities in Building AI Startups
I hope you enjoy the conversation and if you do, please subscribe!
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com
I'm excited to share this conversation with Max Rumpf, the founder of Sid.AI. I wanted to speak to Max because Retrieval Augmented Generation (RAG) has become core to building AI applications, and he knows more about RAG than anyone I know. We get deep into the challenges of building RAG systems, and the episode is full of technical detail and practical insights.
We cover:
00:00 - Introduction to Max Rumpf and Sid.AI
03:39 - How Sid.AI's RAG approach differs from basic tutorials
07:30 - Challenges of document processing and chunking strategies
13:07 - Retrieval techniques and hybrid search approaches
15:06 - Discussion on knowledge graphs and their limitations
20:58 - Reranking in RAG systems and performance improvements
32:14 - Impact of longer context windows on RAG systems
35:10 - The future of RAG and information retrieval
39:47 - Recent research papers on AI and hallucination detection
42:04 - Value-augmented sampling for language model alignment
43:11 - Future trends and investment opportunities in AI
43:50 - SEO optimization for LLMs and its potential as a business
45:20 - Closing thoughts and wrap-up
I hope you enjoy the conversation and if you do, please subscribe!
In this episode, I had the pleasure of speaking with Wade Foster, the founder and CEO of Zapier. We discussed Zapier's journey with AI, from their early experiments to the company-wide AI hackathon they held in March. Wade shared insights on how they prioritize AI projects, the challenges they've faced, and the opportunities they see in the AI space. We also talked about the future of AI and how it might impact the way we work.
In this episode, I chatted with Shawn Wang about his upcoming AI engineering conference and what an AI engineer really is. It's been a year since he penned the viral essay "Rise of the AI Engineer", and we discuss whether this new role will be enduring, the makeup of the optimal AI team, and trends in machine learning.
The Rise of the AI Engineer blog post: https://www.latent.space/p/ai-engineer
Chapters:
00:00 - Introduction and background on Shawn Wang (Swyx)
03:45 - Reflecting on the "Rise of the AI Engineer" essay
07:30 - Skills and characteristics of AI Engineers
12:15 - Team composition for AI products
16:30 - Vertical vs. horizontal AI startups
23:00 - Advice for AI product creators and leaders
28:15 - Tools and buying vs. building for AI products
33:30 - Key trends in AI research and development
41:00 - Closing thoughts and information on the AI Engineer World Fair Summit
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to https://hubs.ly/Q02yV72D0
Sourcegraph have built the most popular open-source AI coding tool in both the dev community and the Fortune 500. I sat down with Beyang Liu, their CTO and co-founder, to find out how they did it.
We dive into the technical details of Cody's architecture, discussing how Sourcegraph handles the challenges of limited context windows in LLMs, why they don't use embeddings in their RAG system, and the importance of starting with the simplest approach before adding complexity.
We also touch on the future of software engineering, open-source vs closed LLM models, and what areas of AI are overhyped vs underhyped.
I hope you enjoy the conversation!
Chapters:
- 00:00:00 - Introduction
- 00:02:30 - What is Cody, and how does it help developers?
- 00:04:15 - Challenges of building AI for large, legacy codebases
- 00:07:30 - The importance of starting with the simplest approach
- 00:11:00 - Sourcegraph's multi-layered context retrieval architecture using RAG
- 00:15:30 - Adapting to the evolving landscape of LLMs and model selection
- 00:19:00 - The future of software engineering in the age of AI
- 00:23:00 - Advice for individuals navigating the AI wave
- 00:26:00 - Predictions for the future of AI in software development
- 00:30:00 - Is AI overhyped, underhyped, or both?
- 00:33:00 - Exciting AI startups to watch
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to https://hubs.ly/Q02yV72D0
I recently sat down with Bryan Bischof, AI lead at Hex, to dive deep into how they evaluate LLMs to ship reliable AI agents. Hex has deployed AI assistants that can automatically generate SQL queries, transform data, and create visualizations based on natural language questions. While many teams struggle to get value from LLMs in production, Hex has cracked the code.
In this episode, Bryan shares the hard-won lessons they've learned along the way. We discuss why most teams are approaching LLM evaluation wrong and how Hex's unique framework enabled them to ship with confidence. Bryan breaks down the key ingredients to Hex's success:
- Choosing the right tools to constrain agent behavior
- Using a reactive DAG to allow humans to course-correct agent plans
- Building granular, user-centric evaluators instead of chasing one "god metric"
- Gating releases on the metrics that matter, not just gaming a score
- Constantly scrutinizing model inputs & outputs to uncover insights
For show notes and a transcript go to: https://hubs.ly/Q02BdzVP0
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to https://hubs.ly/Q02yV72D0
50% of AI contracts at Ironclad's largest customers are now automatically negotiated with the help of generative AI. Ironclad were one of the earliest adopters of LLMs, starting when the best model was still GPT-3. There's a lot of hype around AI agents and few successful examples, but Ironclad has deployed them successfully in one of the most sensitive industries imaginable. In this episode, Cai explains how they achieved this and why they had to build their own visual programming language to make agents reliable, and he shares his advice for AI leaders starting to build products today.
Where to find us: https://hubs.ly/Q02z2J6v0
Welcome to the very first episode of the High Agency podcast! High Agency is a new podcast from Humanloop.
Every week, I (Raza Habib) will interview leaders from companies who have already succeeded with AI in production. We'll share their stories, lessons and playbooks to help you build with LLMs more quickly and with confidence.
To get notified of the first episodes with Cai Gogwilt of Ironclad, Bryan Bischof of Hex, Beyang Liu of Sourcegraph and Wade Foster of Zapier, please subscribe on YouTube, Spotify or Apple Podcasts (just search for High Agency podcast)!
----------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable.
Principally, it lets you rigorously measure and improve LLM performance during development and in production. The evaluation tools are combined with a collaborative workspace where engineers, PMs and subject matter experts improve prompts, tools and agents together.
By adopting Humanloop, teams save 6-8 engineering hours each week through better workflows and they feel confident that their AI is reliable.
To find out more go to https://hubs.ly/Q02z2J6v0