AGI, Alignment, and the Future of AI Power With Emmett Shear
Description
Emmett Shear is the founder and CEO of Softmax, an alignment research company. He previously co-founded Twitch and led it as CEO, was a Y Combinator partner, and briefly served as interim CEO of OpenAI.
What you'll learn:
- Why AI alignment and AGI are fundamentally the same problem
- How theory of mind is the critical missing piece in current AI systems
- Why continuous learning requires self-modeling capabilities
- The dangerous truth: alignment is a capacity for both great good and great evil
- Why "aligned AI" really means "aligned to me"—and why that's concerning
- How societies of smaller AIs will outcompete singleton superintelligences
- Why AI needs to be integrated with humans, not segregated into AI-only societies
- The Twitch lesson: people don't want easy, they want good
- Why 99% of AI startups are building labor-saving tools instead of value-creating products
- How parenting and AI development mirror each other in surprising ways
- Why current AI labs are confused about continuous learning
- Conway's Law applied to AI: you ship your org chart
- The problem with mode collapse in self-learning systems
- Why emotions are training signals, not irrational noise
- Emmett's biggest mistake at Twitch: chasing new products instead of perfecting the core
In this episode, we cover:
(00:00) The dangerous truth about AI alignment
(01:13) Introduction to Softmax and organic alignment
(02:05) What alignment actually means (and why most people are confused)
(03:33) The output: training environments for theory of mind
(05:01) Continuous learning and why it's so hard
(06:25) Multiplayer reasoning training in open-ended environments
(07:14) Aligned to what? The critical question everyone ignores
(08:40) Why alignment is always relative to the aligning being
(11:07) Cooperation vs. competition: training for the real world
(12:56) Is AGI an urgent problem or do we have time?
(13:15) AGI and alignment are the same problem
(15:25) Alignment capacity enables both good and evil
(17:13) The singleton problem and why societies of AIs make sense
(20:41) Building alignment between AIs and humans
(22:09) Why Elon's "biggest cluster" strategy might be wrong
(23:06) AI must be aligned to individual humans, not humanity
(25:03) What does the atomic unit of AI look like?
(28:02) Adding a new kind of person to society
(29:06) Everything will be alive: from spreadsheets to cars
(30:00) From Twitch retirement to Softmax founding
(31:26) Research vs. product engineering at early-stage startups
(32:41) Raising money for AI research in the current era
(34:30) Why Softmax will ship products
(34:50) Ilya's closed-loop research vs. open-loop learning
(36:36) How you do anything is how you do everything
(37:28) The continuous learning problem explained simply
(38:29) Mode collapse: why AIs become stereotypes of themselves
(39:33) The reward problem and why humans need emotions
(40:48) Why LLMs are trained to avoid emotions
(41:52) Watching children learn while building learning AI
(43:04) Advice for first-time AI founders
(45:08) Treat AI as clay to be molded, not a genie granting wishes
(45:50) The Twitch lesson: people want good things, not easy things
(47:22) Why 99% of AI companies are building the wrong thing
(48:16) Rapid fire: biggest career mistake at Twitch
(50:15) Which founders inspire Emmett most
(50:56) The passing fad: AI slop generators