Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work consists of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

We're Not Ready for AGI (with Will MacAskill)

William MacAskill is a senior research fellow at Forethought. He joins the podcast to discuss his Better Futures essay series. We explore moral error risks, AI character design, space governance, and persistent path dependence. The conversation also covers risk-averse AI systems, moral trade between value systems, and improving model specifications for ethical reasoning.

LINKS:
- Better Futures Research Series: https://www.forethought.org/research/better-futures
- William MacAskill Forethought Profile: https://www.forethought.org/people/william-macaskill

CHAPTERS:
(00:00) Episode Preview
(01:03) Improving The Future's Quality
(09:58) Moral Errors and AI Rights
(18:24) AI's Impact on Thinking
(27:17) Utopias and Population Ethics
(36:41) The Danger of Moral Lock-in
(44:38) Deals with Misaligned AI
(57:25) AI and Moral Trade
(01:08:21) Improving AI Ethical Reasoning
(01:16:05) The Risk of Path Dependence
(01:27:41) Avoiding Future Lock-in
(01:36:22) The Urgency of Space Governance
(01:46:19) A Future Research Agenda
(01:57:36) Is Intelligence a Good Bet?

PRODUCED BY: https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

11-14
02:03:08

What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)

Karl Koch is the founder of the AI Whistleblower Initiative. He joins the podcast to discuss transparency and protections for AI insiders who spot safety risks. We explore current company policies, legal gaps, how to evaluate disclosure decisions, and whistleblowing as a backstop when oversight fails. The conversation covers practical guidance for potential whistleblowers and the challenges of maintaining transparency as AI development accelerates.

LINKS:
- About the AI Whistleblower Initiative
- Karl Koch

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) Episode Preview
(00:55) Starting the Whistleblower Initiative
(05:43) Current State of Protections
(13:04) Path to Optimal Policies
(23:28) A Whistleblower's First Steps
(32:29) Life After Whistleblowing
(39:24) Evaluating Company Policies
(48:19) Alternatives to Whistleblowing
(55:24) High-Stakes Future Scenarios
(01:02:27) AI and National Security

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

DISCLAIMERS:
- AIWI does not request, encourage or counsel potential whistleblowers or listeners of this podcast to act unlawfully.
- This is not legal advice, and if you, the listener, find yourself needing legal counsel, please visit https://aiwi.org/contact-hub/ for detailed profiles of the world's leading whistleblower support organizations.

11-07
01:08:16

Can Machines Be Truly Creative? (with Maya Ackerman)

Maya Ackerman is an AI researcher, co-founder and CEO of WaveAI, and author of the book "Creative Machines: AI, Art & Us." She joins the podcast to discuss creativity in humans and machines. We explore defining creativity as novel and valuable output, why evolution qualifies as creative, and how AI alignment can reduce machine creativity. The conversation covers humble creative machines versus all-knowing oracles, hallucination's role in thought, and human-AI collaboration strategies that elevate rather than replace human capabilities.

LINKS:
- Maya Ackerman: https://en.wikipedia.org/wiki/Maya_Ackerman
- Creative Machines: AI, Art & Us: https://maya-ackerman.com/creative-machines-book/

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) Episode Preview
(01:00) Defining Human Creativity
(02:58) Machine and AI Creativity
(06:25) Measuring Subjective Creativity
(10:07) Creativity in Animals
(13:43) Alignment Damages Creativity
(19:09) Creativity is Hallucination
(26:13) Humble Creative Machines
(30:50) Incentives and Replacement
(40:36) Analogies for the Future
(43:57) Collaborating with AI
(52:20) Reinforcement Learning & Slop
(55:59) AI in Education

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

10-24
01:01:51

From Research Labs to Product Companies: AI's Transformation (with Parmy Olson)

Parmy Olson is a technology columnist at Bloomberg and the author of Supremacy, which won the 2024 Financial Times Business Book of the Year. She joins the podcast to discuss the transformation of AI companies from research labs to product businesses. We explore how funding pressures have changed company missions, the role of personalities versus innovation, the challenges faced by safety teams, and power consolidation in the industry.

LINKS:
- Parmy Olson on X (Twitter): https://x.com/parmy
- Parmy Olson's Bloomberg columns: https://www.bloomberg.com/opinion/authors/AVYbUyZve-8/parmy-olson
- Supremacy (book): https://www.panmacmillan.com/authors/parmy-olson/supremacy/9781035038244

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) Episode Preview
(01:18) Introducing Parmy Olson
(02:37) Personalities Driving AI
(06:45) From Research to Products
(12:45) Has the Mission Changed?
(19:43) The Role of Regulators
(21:44) Skepticism of AI Utopia
(28:00) The Human Cost
(33:48) Embracing Controversy
(40:51) The Role of Journalism
(41:40) Big Tech's Influence

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

10-14
46:37

Can Defense in Depth Work for AI? (with Adam Gleave)

Adam Gleave is co-founder and CEO of FAR.AI. In this cross-post from The Cognitive Revolution Podcast, he joins to discuss post-AGI scenarios and AI safety challenges. The conversation explores his three-tier framework for AI capabilities, gradual disempowerment concerns, defense-in-depth security, and research on training less deceptive models. Topics include timelines, interpretability limitations, scalable oversight techniques, and FAR.AI's vertically integrated approach spanning technical research, policy advocacy, and field-building.

LINKS:
Adam Gleave - https://www.gleave.me
FAR.AI - https://www.far.ai
The Cognitive Revolution Podcast - https://www.cognitiverevolution.ai

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) A Positive Post-AGI Vision
(10:07) Surviving Gradual Disempowerment
(16:34) Defining Powerful AIs
(27:02) Solving Continual Learning
(35:49) The Just-in-Time Safety Problem
(42:14) Can Defense-in-Depth Work?
(49:18) Fixing Alignment Problems
(58:03) Safer Training Formulas
(01:02:24) The Role of Interpretability
(01:09:25) FAR.AI's Vertically Integrated Approach
(01:14:14) Hiring at FAR.AI
(01:16:02) The Future of Governance

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

10-03
01:18:35

How We Keep Humans in Control of AI (with Beatrice Erkers)

Beatrice Erkers works at the Foresight Institute, where she runs its Existential Hope program. She joins the podcast to discuss the AI Pathways project, which explores two alternative scenarios to the default race toward AGI. We examine tool AI, which prioritizes human oversight and democratic control, and d/acc, which emphasizes decentralized, defensive development. The conversation covers trade-offs between safety and speed, how these pathways could be combined, and what different stakeholders can do to steer toward more positive AI futures.

LINKS:
AI Pathways - https://ai-pathways.existentialhope.com
Beatrice Erkers - https://www.existentialhope.com/team/beatrice-erkers

CHAPTERS:
(00:00) Episode Preview
(01:10) Introduction and Background
(05:40) AI Pathways Project
(11:10) Defining Tool AI
(17:40) Tool AI Benefits
(23:10) D/acc Pathway Explained
(29:10) Decentralization Trade-offs
(35:10) Combining Both Pathways
(40:10) Uncertainties and Concerns
(45:10) Future Evolution
(01:01:21) Funding Pilots

PRODUCED BY: https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

09-26
01:06:45

Why Building Superintelligence Means Human Extinction (with Nate Soares)

Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control. The conversation covers threshold effects in intelligence, why computer security analogies suggest AI alignment is currently nearly impossible, and why we don't get retries with superintelligence. Soares argues for an international ban on AI research toward superintelligence.

LINKS:
If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com
Machine Intelligence Research Institute - https://intelligence.org
Nate Soares - https://intelligence.org/team/nate-soares/

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) Episode Preview
(01:05) Introduction and Book Discussion
(03:34) Psychology of AI Alarmism
(07:52) Intelligence Threshold Effects
(11:38) Growing vs Crafting AI
(18:23) Illusion of AI Control
(26:45) Why Iteration Won't Work
(34:35) The No Retries Problem
(38:22) Computer Security Lessons
(49:13) The Cursed Problem
(59:32) Multiple Curses and Complications
(01:09:44) AI's Infrastructure Advantage
(01:16:26) Grading Humanity's Response
(01:22:55) Time Needed for Solutions
(01:32:07) International Ban Necessity

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

09-18
01:39:38

Breaking the Intelligence Curse (with Luke Drago)

Luke Drago is the co-founder of Workshop Labs and co-author of the essay series "The Intelligence Curse", which explores what happens if AI becomes the dominant factor of production, thereby reducing incentives to invest in people. We explore pyramid replacement in firms, economic warning signs to monitor, automation barriers like tacit knowledge, privacy risks in AI training, and tensions between centralized AI safety and democratization. Luke discusses Workshop Labs' privacy-preserving approach and advises taking career risks during this technological transition.

"The Intelligence Curse" essay series by Luke Drago & Rudolf Laine: https://intelligence-curse.ai/
Luke's Substack: https://lukedrago.substack.com/
Workshop Labs: https://workshoplabs.ai/

CHAPTERS:
(00:00) Episode Preview
(00:55) Intelligence Curse Introduction
(02:55) AI vs Historical Technology
(07:22) Economic Metrics and Indicators
(11:23) Pyramid Replacement Theory
(17:28) Human Judgment and Taste
(22:25) Data Privacy and Control
(28:55) Dystopian Economic Scenario
(35:04) Resource Curse Lessons
(39:57) Culture vs Economic Forces
(47:15) Open Source AI Debate
(54:37) Corporate Mission Evolution
(59:07) AI Alignment and Loyalty
(01:05:56) Moonshots and Career Advice

09-10
01:09:38

What Markets Tell Us About AI Timelines (with Basil Halperin)

Basil Halperin is an assistant professor of economics at the University of Virginia. He joins the podcast to discuss what economic indicators reveal about AI timelines. We explore why interest rates might rise if markets expect transformative AI, the gap between strong AI benchmarks and limited economic effects, and bottlenecks to AI-driven growth. We also cover market efficiency, automated AI research, and how financial markets may signal progress.

Basil's essay on "Transformative AI, existential risk, and real interest rates": https://basilhalperin.com/papers/agi_emh.pdf
Read more about Basil's work here: https://basilhalperin.com/

CHAPTERS:
(00:00) Episode Preview
(00:49) Introduction and Background
(05:19) Efficient Market Hypothesis Explained
(10:34) Markets and Low Probability Events
(16:09) Information Diffusion on Wall Street
(24:34) Stock Prices vs Interest Rates
(28:47) New Goods Counter-Argument
(40:41) Why Focus on Interest Rates
(45:00) AI Secrecy and Market Efficiency
(50:52) Short Timeline Disagreements
(55:13) Wealth Concentration Effects
(01:01:55) Alternative Economic Indicators
(01:12:47) Benchmarks vs Economic Impact
(01:25:17) Open Research Questions

SOCIAL LINKS:
Website: https://future-of-life-institute-podcast.aipodcast.ing
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple Podcasts: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

PRODUCED BY: https://aipodcast.ing

09-01
01:36:10

AGI Security: How We Defend the Future (with Esben Kran)

Esben Kran joins the podcast to discuss why securing AGI requires more than traditional cybersecurity, exploring new attack surfaces, adaptive malware, and the societal shifts needed for resilient defenses. We cover protocols for safe agent communication, oversight without surveillance, and distributed safety models across companies and governments.

Learn more about Esben's work at: https://blog.kran.ai

00:00 – Intro and preview
01:13 – AGI security vs traditional cybersecurity
02:36 – Rebuilding societal infrastructure for embedded security
03:33 – Sentware: adaptive, self-improving malware
04:59 – New attack surfaces
05:38 – Social media as misaligned AI
06:46 – Personal vs societal defenses
09:13 – Why private companies underinvest in security
13:01 – Security as the foundation for any AI deployment
14:15 – Oversight without a surveillance state
17:19 – Protocols for safe agent communication
20:25 – The expensive internet hypothesis
23:30 – Distributed safety for companies and governments
28:20 – Cloudflare's "agent labyrinth" example
31:08 – Positive vision for distributed security
33:49 – Human value when labor is automated
41:19 – Encoding law for machines: contracts and enforcement
44:36 – DarkBench: detecting manipulative LLM behavior
55:22 – The AGI endgame: default path vs designed future
57:37 – Powerful tool AI
01:09:55 – Fast takeoff risk
01:16:09 – Realistic optimism

08-22
01:18:21

Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)

Benjamin Todd joins the podcast to discuss how reasoning models changed AI, why agents may be next, where progress could stall, and what a self-improvement feedback loop in AI might mean for the economy and society. We explore concrete timelines (through 2030), compute and power bottlenecks, and the odds of an industrial explosion. We end by discussing how people can personally prepare for AGI: networks, skills, saving/investing, resilience, citizenship, and information hygiene.

Follow Benjamin's work at: https://benjamintodd.substack.com

Timestamps:
00:00 What are reasoning models?
04:04 Reinforcement learning supercharges reasoning
05:06 Reasoning models vs. agents
10:04 Economic impact of automated math/code
12:14 Compute as a bottleneck
15:20 Shift from giant pre-training to post-training/agents
17:02 Three feedback loops: algorithms, chips, robots
20:33 How fast could an algorithmic loop run?
22:03 Chip design and production acceleration
23:42 Industrial/robotics loop and growth dynamics
29:52 Society's slow reaction; "warning shots"
33:03 Robotics: software and hardware bottlenecks
35:05 Scaling robot production
38:12 Robots at ~$0.20/hour?
43:13 Regulation and humans-in-the-loop
49:06 Personal prep: why it still matters
52:04 Build an information network
55:01 Save more money
58:58 Land, real estate, and scarcity in an AI world
01:02:15 Valuable skills: get close to AI, or far from it
01:06:49 Fame, relationships, citizenship
01:10:01 Redistribution, welfare, and politics under AI
01:12:04 Try to become more resilient
01:14:36 Information hygiene
01:22:16 Seven-year horizon and scaling limits by ~2030

08-15
01:27:01

From Peak Horse to Peak Human: How AI Could Replace Us (with Calum Chace)

On this episode, Calum Chace joins me to discuss the transformative impact of AI on employment, comparing the current wave of cognitive automation to historical technological revolutions. We talk about "universal generous income", fully-automated luxury capitalism, and redefining education with AI tutors. We end by examining verification of artificial agents and the ethics of attributing consciousness to machines.

Learn more about Calum's work here: https://calumchace.com

Timestamps:
00:00:00 Preview and intro
00:03:02 Past tech revolutions and AI-driven unemployment
00:05:43 Cognitive automation: from secretaries to every job
00:08:02 The "peak horse" analogy and avoiding human obsolescence
00:10:55 Infinite demand and lump of labor
00:18:30 Fully-automated luxury capitalism
00:23:31 Abundance economy and a potential employment cliff
00:29:37 Education reimagined with personalized AI tutors
00:36:22 Real-world uses of LLMs: memory, drafting, emotional insight
00:42:56 Meaning beyond jobs: aristocrats, retirees, and kids
00:49:51 Four futures of superintelligence
00:57:20 Conscious AI and empathy as a safety strategy
01:10:55 Verifying AI agents
01:25:20 Over-attributing vs under-attributing machine consciousness

07-31
01:37:21

How AI Could Help Overthrow Governments (with Tom Davidson)

On this episode, Tom Davidson joins me to discuss the emerging threat of AI-enabled coups, where advanced artificial intelligence could empower covert actors to seize power. We explore scenarios including secret loyalties within companies, rapid military automation, and how AI-driven democratic backsliding could differ significantly from historical precedents. Tom also outlines key mitigation strategies, risk indicators, and opportunities for individuals to help prevent these threats.

Learn more about Tom's work here: https://www.forethought.org

Timestamps:
00:00:00 Preview: why preventing AI-enabled coups matters
00:01:24 What do we mean by an "AI-enabled coup"?
00:01:59 Capabilities AIs would need (persuasion, strategy, productivity)
00:02:36 Cyber-offense and the road to robotized militaries
00:05:32 Step-by-step example of an AI-enabled military coup
00:08:35 How AI-enabled coups would differ from historical coups
00:09:24 Democratic backsliding (Venezuela, Hungary, U.S. parallels)
00:12:38 Singular loyalties, secret loyalties, exclusive access
00:14:01 Secret-loyalty scenario: CEO with hidden control
00:18:10 From sleeper agents to sophisticated covert AIs
00:22:22 Exclusive-access threat: one project races ahead
00:29:03 Could one country outgrow the rest of the world?
00:40:00 Could a single company dominate global GDP?
00:47:01 Autocracies vs democracies
00:54:43 Mitigations for singular and secret loyalties
01:06:25 Guardrails, monitoring, and controlled-use APIs
01:12:38 Using AI itself to preserve checks-and-balances
01:24:53 Risk indicators to watch for AI-enabled coups
01:33:05 Tom's risk estimates for the next 5 and 30 years
01:46:50 How you can help – research, policy, and careers

07-17
01:53:50

What Happens After Superintelligence? (with Anders Sandberg)

Anders Sandberg joins me to discuss superintelligence and its profound implications for human psychology, markets, and governance. We talk about physical bottlenecks, tensions between the technosphere and the biosphere, and the long-term cultural and physical forces shaping civilization. We conclude with Sandberg explaining the difficulties of designing reliable AI systems amidst rapid change and coordination risks.

Learn more about Anders's work here: https://mimircenter.org/anders-sandberg

Timestamps:
00:00:00 Preview and intro
00:04:20 2030 superintelligence scenario
00:11:55 Status, post-scarcity, and reshaping human psychology
00:16:00 Physical limits: energy, datacenter, and waste-heat bottlenecks
00:23:48 Technosphere vs biosphere
00:28:42 Culture and physics as long-run drivers of civilization
00:40:38 How superintelligence could upend markets and governments
00:50:01 State inertia: why governments lag behind companies
00:59:06 Value lock-in, censorship, and model alignment
01:08:32 Emergent AI ecosystems and coordination-failure risks
01:19:34 Predictability vs reliability: designing safe systems
01:30:32 Crossing the reliability threshold
01:38:25 Personal reflections on accelerating change

07-11
01:44:55

Why the AI Race Ends in Disaster (with Daniel Kokotajlo)

On this episode, Daniel Kokotajlo joins me to discuss why artificial intelligence may surpass the transformative power of the Industrial Revolution, and just how much AI could accelerate AI research. We explore the implications of automated coding, the critical need for transparency in AI development, the prospect of AI-to-AI communication, and whether AI is an inherently risky technology. We end by discussing iterative forecasting and its role in anticipating AI's future trajectory.

You can learn more about Daniel's work at: https://ai-2027.com and https://ai-futures.org

Timestamps:
00:00:00 Preview and intro
00:00:50 Why AI will eclipse the Industrial Revolution
00:09:48 How much can AI speed up AI research?
00:16:13 Automated coding and diffusion
00:27:37 Transparency in AI development
00:34:52 Deploying AI internally
00:40:24 Communication between AIs
00:49:23 Is AI inherently risky?
00:59:54 Iterative forecasting

07-03
01:10:27

Preparing for an AI Economy (with Daniel Susskind)

On this episode, Daniel Susskind joins me to discuss disagreements between AI researchers and economists, how we can best measure AI's economic impact, how human values can influence economic outcomes, what meaningful work will remain for humans in the future, the role of commercial incentives in AI development, and the future of education.

You can learn more about Daniel's work here: https://www.danielsusskind.com

Timestamps:
00:00:00 Preview and intro
00:03:19 AI researchers versus economists
00:10:39 Measuring AI's economic effects
00:16:19 Can AI be steered in positive directions?
00:22:10 Human values and economic outcomes
00:28:21 What will remain for people to do?
00:44:58 Commercial incentives in AI
00:50:38 Will education move towards general skills?
00:58:46 Lessons for parents

06-27
01:03:38

Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)

Ed Newton-Rex joins me to discuss the issue of AI models trained on copyrighted data, and how we might develop fairer approaches that respect human creators. We talk about AI-generated music, Ed's decision to resign from Stability AI, the industry's attitude towards rights, authenticity in AI-generated art, and what the future holds for creators, society, and living standards in an increasingly AI-driven world.

Learn more about Ed's work here: https://ed.newtonrex.com

Timestamps:
00:00:00 Preview and intro
00:04:18 AI-generated music
00:12:15 Resigning from Stability AI
00:16:20 AI industry attitudes towards rights
00:26:22 Fairly Trained
00:37:16 Special kinds of training data
00:50:42 The longer-term future of AI
00:56:09 Will AI improve living standards?
01:03:10 AI versions of artists
01:13:28 Authenticity and art
01:18:45 Competitive pressures in AI
01:24:06 Priorities going forward

06-20
01:27:15

AI Timelines and Human Psychology (with Sarah Hastings-Woodhouse)

On this episode, Sarah Hastings-Woodhouse joins me to discuss what benchmarks actually measure, AI's development trajectory in comparison to other technologies, tasks that AI systems can and cannot handle, capability profiles of present and future AIs, the notion of alignment by default, and the leading AI companies' vague AGI plans. We also discuss the human psychology of AI, including the feelings of living in the "fast world" versus the "slow world", and navigating long-term projects given short timelines.

Timestamps:
00:00:00 Preview and intro
00:00:46 What do benchmarks measure?
00:08:08 Will AI develop like other tech?
00:14:13 Which tasks can AIs do?
00:23:00 Capability profiles of AIs
00:34:04 Timelines and social effects
00:42:01 Alignment by default?
00:50:36 Can vague AGI plans be useful?
00:54:36 The fast world and the slow world
01:08:02 Long-term projects and short timelines

06-13
01:15:50

Could Powerful AI Break Our Fragile World? (with Michael Nielsen)

On this episode, Michael Nielsen joins me to discuss how humanity's growing understanding of nature poses dual-use challenges, whether existing institutions and governance frameworks can adapt to handle advanced AI safely, and how we might recognize signs of dangerous AI. We explore the distinction between AI as agents and tools, how power is latent in the world, implications of widespread powerful hardware, and finally touch upon the philosophical perspectives of deep atheism and optimistic cosmism.

Timestamps:
00:00:00 Preview and intro
00:01:05 Understanding is dual-use
00:05:17 Can we handle AI like other tech?
00:12:08 Can institutions adapt to AI?
00:16:50 Recognizing signs of dangerous AI
00:22:45 Agents versus tools
00:25:43 Power is latent in the world
00:35:45 Widespread powerful hardware
00:42:09 Governance mechanisms for AI
00:53:55 Deep atheism and optimistic cosmism

06-06
01:01:29

Facing Superintelligence (with Ben Goertzel)

On this episode, Ben Goertzel joins me to discuss what distinguishes the current AI boom from previous ones, important but overlooked AI research, simplicity versus complexity in the first AGI, the feasibility of alignment, benchmarks and economic impact, potential bottlenecks to superintelligence, and what humanity should do moving forward.

Timestamps:
00:00:00 Preview and intro
00:01:59 Thinking about AGI in the 1970s
00:07:28 What's different about this AI boom?
00:16:10 Former taboos about AGI
00:19:53 AI research worth revisiting
00:35:53 Will the first AGI be simple?
00:48:49 Is alignment achievable?
01:02:40 Benchmarks and economic impact
01:15:23 Bottlenecks to superintelligence
01:23:09 What should we do?

05-23
01:32:34

Maciej M

Brilliant!

04-18

Marion Grau

What is with the demographics of the people interviewed? White male circle jerk? Few women and fewer POC.

07-02

masoud hajian

as great as usual

04-11

Salar Basiri

great insightful conversation, thanks for sharing!

03-18

Marco Gorelli

her best advice is to buy organic? wtf?

10-13
