Augmented Mind Podcast
Author: Yijia Shao, Shannon Shen, Michael Ryan
© Yijia Shao, Shannon Shen, Michael Ryan
Description
With AI making major waves in society, people often lose sight of the reason to build such technology: uplifting humanity. The Augmented Mind Podcast highlights technical human-centered AI research contributions by interviewing the leading minds driving the human side of the AI revolution.
Hosted by Yijia Shao, Shannon Shen, and Michael Ryan
4 Episodes
Woosuk Kwon is CTO of Inferact and creator of the vLLM inference library. Woosuk shares what it takes to build the most popular open-source LLM inference engine from a human-centered perspective.

Outline:
0:00 - Prelude: Introducing Woosuk and Inferact
3:00 - Woosuk’s First PhD Project
6:00 - How the vLLM Project Got Started
9:18 - AI Infra Needs More Than Just Efficiency
14:08 - How AI Infra and Human-centered AI Are Connected
15:01 - How to Prioritize Feature Requests for Popular AI Infra
18:18 - Streaming Requests and Realtime API
24:05 - Multi-turn, Agentic, Proactive LLMs
27:03 - How to Design AI Infra in a Principled Way
29:13 - How to Design an AI Inference Engine for Continual Learning with RL
35:05 - Would LoRA Training Affect RL Infra Design?
37:28 - Why Start an AI Inference Infra Startup?
40:46 - What Effortless Inference with Open-source Models Means for Developers
43:46 - A Vision for On-device AI Inference
46:19 - Can Today’s Coding Agents Create vLLM?

References:
Inferact: https://inferact.ai/
Efficient Memory Management for Large Language Model Serving with PagedAttention: https://arxiv.org/abs/2309.06180
Streaming Requests & Realtime API in vLLM: https://vllm.ai/blog/streaming-realtime
RL’s Razor: Why Online Reinforcement Learning Forgets Less: https://arxiv.org/abs/2509.04259

Podcast Links:
Podcast website: https://augmented-mind.github.io/
Apple Podcasts: https://podcasts.apple.com/us/podcast/augmented-mind-podcast/id1868102170
Spotify: https://open.spotify.com/show/40KculkYTe2tOpqJm6TAYr?si=PU_UncsMT4mXjVNCRwoXog&nd=1&dlsi=6d9bed7a43d64085
RSS: https://anchor.fm/s/10dbf5b7c/podcast/rss

About the Hosts:
The AM Podcast is hosted by Yijia Shao, Shannon Shen, and Michael Ryan, CS PhD students at Stanford University and MIT.
Sherry Wu is a professor at CMU whose research sits at the intersection of human-computer interaction and natural language processing. From making AI work for imperfect humans to making humans work better with AI, Sherry's work challenges us to rethink both sides of the equation.

Outline:
0:00 - Teaser
1:13 - Prelude: Introducing Sherry Wu
2:30 - How the AI Field Has Changed in the Last Four Years
4:22 - Making AI Systems Work for Imperfect Humans
6:54 - Models vs. Scaffolding
10:36 - Understanding Human Imperfection in Teaching Contexts
19:28 - AI Literacy Skills
22:04 - How AI Is Changing CS Education
25:38 - Suppose We Have AGI, What Does It Mean to Be Human?
29:14 - Training Models to Be More Human-centered
31:46 - Checklists Are Better Than Reward Models
36:56 - Challenges in Aligning Models
43:22 - Advice for Interdisciplinary Research
45:37 - Reflection on Her Own Research

References:
Sherry Wu’s research homepage: https://www.cs.cmu.edu/~sherryw/
Sherry Wu’s course page (PMDS, Spring 2025): https://www.cs.cmu.edu/~sherryw/courses/2025s-pmds.html
AI Fluency Index: https://www.anthropic.com/research/AI-fluency-index
Checklists Are Better Than Reward Models: https://arxiv.org/abs/2507.18624
Not Everyone Wins with LLMs: https://arxiv.org/pdf/2509.21890
Omar Shaikh is a Stanford PhD student, HCI and NLP researcher, and author of the award-winning UIST 2025 paper “Creating General User Models from Computer Use”. Omar’s pioneering work aims to bridge the human-AI grounding gap.

Outline:
0:00 - Teaser
1:21 - Prelude: Introducing Omar Shaikh
2:07 - Monologue: Better Context for AI
4:22 - Bridging the Human-AI Grounding Gap
6:14 - Confidence Scores in General User Models (GUMs)
7:32 - Calibration of General User Models
13:20 - Uses of General User Models
15:01 - Mixed-Initiative Interactions
22:10 - Motivation for GUM
25:31 - Tabracadabra: Tab Everywhere!
27:01 - Design Decisions in GUM
28:26 - Designing Interactive Experiences
32:11 - DITTO
33:06 - Work on Domains without Existing Benchmarks
34:45 - Challenges of the GUM Project
37:26 - Privacy and Data Ownership
38:57 - Finetuning a User Model
44:09 - Mindblowing GUM Inferences
49:02 - Social Problems of GUMs
50:27 - GUM as a Reflection Tool

References:
Omar Shaikh’s research homepage: https://oshaikh.com/
Creating General User Models from Computer Use: https://arxiv.org/abs/2505.10831
Tabracadabra: https://x.com/oshaikh13/status/1967626897837494479
Aligning Language Models with Demonstrated Feedback (DITTO): https://arxiv.org/abs/2406.00888
Principles of Mixed-Initiative User Interfaces: https://erichorvitz.com/chi99horvitz.pdf
Verification of Forecasts Expressed in Terms of Probability: https://journals.ametsoc.org/view/journals/mwre/78/1/1520-0493_1950_078_0001_vofeit_2_0_co_2.xml
Introducing The Augmented Mind Podcast (The AM Podcast). We explore techniques for building AI models that collaborate with people and augment human intelligence. In Episode 0, we share who we are, why we started this podcast, and what we're looking forward to.

Outline:
0:00 - Prelude: the problems we care about
1:48 - Host introduction
2:03 - Why we started the AM Podcast
2:31 - Hot takes on human-centered AI
2:45 - Hot take #1: learning on outcome rewards over long horizons will directly solve human-agent collaboration
3:00 - The Bitter Lesson
3:53 - How to define rewards is a human problem
4:50 - Empathetic AI
5:48 - Hot take #2: even with an automation-vs-augmentation view, as AI gets stronger, there will be less for us to work on
6:09 - Creative Destruction
7:21 - Task vs. goal
10:45 - Format of our podcast
11:28 - Unique technical challenges in human-centered AI
11:43 - Example #1: human variation
13:58 - Example #2: revolution of annotation and data collection
15:10 - Example #3: making sense of noisy data
16:45 - Let the journey begin!

External Clips Referenced:
Eric Horvitz (1:02:38 - 1:03:07): https://www.youtube.com/watch?v=ddjNTxtyEnw
Fei-Fei Li (12:40 - 12:58): https://www.youtube.com/watch?v=be0gLzeBX5w




