ForeCast

ForeCast is a podcast from Forethought, where we hear from authors about their new research.

Should There Be an International AGI Project? (with Rose Hadshar)

Rose Hadshar is a researcher at Forethought. She discusses:
- Why governments might pursue an international AGI project
- The chance that the first AGI developer becomes a de facto world government
- The "Intelsat model": treating AGI as commercial infrastructure rather than a weapon or science project
- Membership and voting structures that might make a US-led coalition viable
- The "AGI convention" proposal
Watch the video version on YouTube. Read a full transcript here. To see all Forethought’s published research, visit forethought.org/research. To subscribe to our newsletter, visit forethought.org/subscribe.

01-27
01:22:03

When Will AI Transform the Physical World? (with Tom Davidson and Will MacAskill)

Tom Davidson and Will MacAskill are both researchers at Forethought. They discuss:
- What is the industrial explosion?
- Why the case for recursive self-improvement is stronger for physical industry than for software
- How fast the physical economy could grow, the case for weekly doubling times, and limits from natural resources
- Why authoritarian regimes might have a structural advantage in the industrial explosion
- Could a leading country outgrow the entire world to achieve decisive dominance?
- Why does the industrial explosion get ~1% of the attention of the intelligence explosion?
You can read a full transcript here. To see all Forethought’s published research, visit forethought.org/research. To subscribe to our newsletter, visit forethought.org/subscribe.

01-22
02:48:21

Why Make Deals with Misaligned AIs? (with Lukas Finnveden)

Lukas Finnveden is a Research Analyst at Redwood Research. We talk about making deals with early scheming AIs: paying them to cooperate rather than take over. You can read a full transcript here. To see all Forethought’s published research, visit forethought.org/research. To subscribe to our newsletter, visit forethought.org/subscribe.

12-19
01:12:26

Checks, Balances, and Power Concentration (with Rose Hadshar and Nora Ammann)

Rose Hadshar (Forethought) and Nora Ammann (ARIA) talk about power concentration, checks and balances, coups, and post-AGI political economy. You can read a full transcript here. This is the audio version of a video podcast. Watch the video on YouTube: youtu.be/PB855Xpx1kk?si=v05LVM-2VQ3XiT7N
Prompted in part by Rose's recent article for 80,000 Hours on ‘Extreme power concentration’: 80000hours.org/problem-profiles/extreme-power-concentration
Timestamps:
00:00:00 Intro
00:01:26 What is ‘power concentration’? Is power concentration the right framing?
00:05:25 When can it be good for power to be centralised?
00:09:40 Is ‘checks and balances’ a better framing?
00:11:16 How AI undermines existing checks and balances
00:15:25 Economic power, meme complexes, and cultural influence
00:28:50 AI companies vs governments as centres of power
00:31:25 The difficulty of knowing where power actually lies
00:40:05 Do humans and AIs concentrate power differently?
00:48:38 Should we be trying to imagine a better post-AGI political economy?
00:56:28 Concrete actions: transparency & whistleblower protections
00:59:07 Human-AI teaming and collaborative intelligence
01:01:16 AI agent economies and proving trustworthiness
01:13:16 Can we train AI to follow the law?
01:16:33 Building resilient coalitions of trustworthy agents
01:18:42 Closing reflections

12-12
01:23:07

Consciousness and Competition (with Joe Carlsmith)

Joe Carlsmith is a writer, researcher, and philosopher. He works at Anthropic on the character/constitution/spec for Claude. Before that, he was a senior advisor at Open Philanthropy. You can read a full transcript here. To see all Forethought’s published research, visit forethought.org/research. To subscribe to our newsletter, visit forethought.org/subscribe.

11-28
02:07:41

Forethought is Hiring Researchers (with Mia Taylor)

This is a bonus episode to say that Forethought is hiring researchers. After an overview of the roles, we hear from Research Fellow Mia Taylor about working at Forethought. You can read a full transcript here. The application deadline has been extended to November 1st 2025. Apply here: forethought.org/careers/researcher
Chapters:
(00:00:00) Forethought hiring overview and roles
(00:03:21) Interview with Mia begins
(00:03:34) Why Mia joined Forethought
(00:05:59) Daily work, and how working at Forethought differed from expectations
(00:08:38) Research examples
(00:10:58) Who should and shouldn't apply
(00:14:11) Disagreements and closing thoughts
Links:
- Apply for researcher positions
- Referral form (if you know someone who might be a good fit)

10-13
14:50

[Article] Introducing Better Futures

This is a narration of ‘Introducing Better Futures’ by William MacAskill; published 3rd August 2025. Narration by Perrin Walker (@perrinjwalker).

10-07
15:26

Politics and Power Post-Automation (with David Duvenaud)

David Duvenaud is an associate professor at the University of Toronto. He recently organised the workshop on ‘Post-AGI Civilizational Equilibria’ and is a co-author of ‘Gradual Disempowerment’. He also recently finished an extended sabbatical on the Alignment Science team at Anthropic. You can read a full transcript here. To see all Forethought’s published research, visit forethought.org/research. To subscribe to our newsletter, visit forethought.org/subscribe.

09-25
01:30:42

[Article] AI Tools for Existential Security

This is a narration of ‘AI Tools for Existential Security’ by Lizka Vaintrob and Owen Cotton-Barratt; published 14th March 2025. Narration by Perrin Walker (@perrinjwalker).

09-17
27:34

Is Gradual Disempowerment Inevitable? (with Raymond Douglas)

Raymond Douglas is a researcher focused on the societal effects of AI. In this episode, we discuss Gradual Disempowerment. You can read a full transcript here. To see all our published research, visit forethought.org/research. To subscribe to our newsletter, visit forethought.org/subscribe.

09-09
01:45:20

[Article] Intelsat as a Model for International AGI Governance

This is a narration of ‘Intelsat as a Model for International AGI Governance’ by Will MacAskill and Rose Hadshar; published 13th March 2025. Narration by Perrin Walker (@perrinjwalker).

09-07
55:07

[Article] Will AI R&D Automation Cause a Software Intelligence Explosion?

This is a narration of ‘Will AI R&D Automation Cause a Software Intelligence Explosion?’ by Daniel Eth and Tom Davidson; published 26th March 2025. Narration by Perrin Walker (@perrinjwalker).

08-31
01:50:40

Should AI Agents Obey Human Laws? (with Cullen O'Keefe)

Cullen O'Keefe is Director of Research at the Institute for Law & AI. In this episode, we discuss 'Law-Following AI: designing AI agents to obey human laws'. You can read a full transcript here. To see all our published research, visit forethought.org/research. To subscribe to our newsletter, visit forethought.org/subscribe.

08-28
01:23:54

[Article] AI-Enabled Coups: How a Small Group Could Use AI to Seize Power

This is a narration of ‘AI-Enabled Coups: How a Small Group Could Use AI to Seize Power’ by Tom Davidson, Lukas Finnveden, and Rose Hadshar; published 16th April 2025. Narration by Perrin Walker (@perrinjwalker).

08-26
02:09:32

[AI Narration] Could One Country Outgrow the Rest of the World After AGI?

This is an AI narration of "Could One Country Outgrow the Rest of the World After AGI? Economic Analysis of Superexponential Growth" by Tom Davidson. The article was first released on 20th August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.

08-20
29:41

How Can We Prevent AI-Enabled Coups? (with Tom Davidson)

Tom Davidson is a Senior Research Fellow at Forethought. In this episode, he discusses concrete mitigations against AI-enabled coups. You can read a full transcript here. Listen to Tom's appearance on the 80,000 Hours podcast here, and read the original paper here. To see all our published research, visit forethought.org/research.

08-17
02:13:50

Should We Aim for Flourishing Over Mere Survival? (with Will MacAskill)

Will MacAskill discusses his new research series ‘Better Futures’. You can read a full transcript here. To see all our published research, visit forethought.org/research.

08-04
02:54:24

[AI Narration] How quick and big would a software intelligence explosion be?

This is an AI narration of "How quick and big would a software intelligence explosion be?" by Tom Davidson and Tom Houlden. The article was first released on 4th August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.

08-04
01:07:17

[AI Narration] The Basic Case for Better Futures: SF Model Analysis

This is an AI narration of "The Basic Case for Better Futures: SF Model Analysis" by William MacAskill and Philip Trammell. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.

08-03
32:40

[AI Narration] No Easy Eutopia

This is an AI narration of "No Easy Eutopia" by Fin Moorhouse and William MacAskill. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.

08-03
01:09:17

Recommend Channels