
The Trajectory

Author: Daniel Faggella


Description

What should be the trajectory of intelligence beyond humanity?

The Trajectory covers realpolitik on artificial general intelligence and the posthuman transition - by asking tech, policy, and AI research leaders the hard questions about what's after man, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.

34 Episodes
This is an interview with Roman V. Yampolskiy, a computer scientist at the University of Louisville and a leading voice in AI safety. Everyone has heard Roman's p(doom) arguments; that isn't the focus of our interview. Instead, we talk about Roman's "untestability" hypothesis, and the possibility that there may be untold, human-incomprehensible powers already present in current LLMs. He discusses how such powers might emerge, and when and how a "treacherous turn" might happen. This is the third episod...
This is an interview with Joshua Clymer, AI safety researcher at Redwood Research, and former researcher at METR. Joshua has spent years focused on institutional readiness for AGI, especially the kinds of governance bottlenecks that could become breaking points. His thinking is less about far-off futures and more about near-term institutional failure modes - the brittle places that might shatter first. In this episode, Joshua and I discuss where AGI pressure might rupture our systems: intel...
This is an interview with David Duvenaud, Assistant Professor at University of Toronto, co-author of the Gradual Disempowerment paper, and former researcher at Anthropic. This is the first episode in our new “Early Experience of AGI” series - where we explore the early impacts of AGI on our work and personal lives. This episode referred to the following other essays and resources: -- Closing the Human Reward Circuit: https://danfaggella.com/reward -- Gradual Disempowerment: http://www.gradu...
This is an interview with Jeremie and Edouard Harris, Canadian researchers with backgrounds in AI governance and national security consulting, and co-founders of Gladstone AI. In this episode, Jeremie and Edouard explain why trusting China on AGI is dangerous, highlight ongoing espionage in Western labs, explore verification tools like tamper-proof chips, and argue that slowing China’s AI progress may be vital for safe alignment. This is the second installment of our "US-China AGI Relations" s...
This is an interview with Jack Shanahan, a three-star General and former Director of the Joint AI Center (JAIC) within the US Department of Defense. This is the first installment of our "US-China AGI Relations" series - where we explore pathways to achieving international AGI cooperation while avoiding conflicts and arms races. This episode referred to the following other essays and resources: -- The International Governance of AI – We Unite or We Fight: https://emerj.com/inter...
This is an interview with Yi Zeng, Professor at the Chinese Academy of Sciences, a member of the United Nations High-Level Advisory Body on AI, and leader of the Beijing Institute for AI Safety and Governance (among many other accolades). Over a year ago when I asked Jaan Tallinn "who within the UN advisory group on AI has good ideas about AGI and governance?" he mentioned Yi immediately. Jaan was right. See the full article from this episode: https://danfaggella.com/zeng1 W...
This is an interview with Max Tegmark, MIT professor, co-founder of the Future of Life Institute, and author of Life 3.0. This interview was recorded on-site at AI Safety Connect 2025, a side event of the AI Action Summit in Paris. See the full article from this episode: https://danfaggella.com/tegmark1 Listen to the full podcast episode: https://youtu.be/yQ2fDEQ4Ol0 This episode referred to the following other essays and resources: -- Max's A.G.I. Framework / "Keep the Future Human"...
Joining us in the eighth episode of our AGI Governance series on The Trajectory is Craig Mundie, former Chief Research and Strategy Officer at Microsoft and longtime advisor on the evolution of digital infrastructure, AI, and national security. In this episode, Craig and I explore how bottom-up governance could emerge from commercial pressures and cross-national enterprise collaboration, and how this pragmatic foundation might lead us into a future of symbiotic co-evolution rather than cata...
Joining us in the seventh episode of our AGI Governance series on The Trajectory is Toby Ord, Senior Researcher at Oxford University’s AI Governance Initiative and author of The Precipice: Existential Risk and the Future of Humanity. Toby is one of the world’s most influential thinkers on long-term risk - and one of the clearest voices on how advanced AI could shape, or shatter, the trajectory of human civilization. In this episode, Toby unpacks the evolving technical and economic landscap...
This is an interview with Connor Leahy, the Founder and CEO of Conjecture. This is the fifth installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence. Watch this episode on The Trajectory YouTube channel: https://youtu.be/1j--6JYRLVk See the full article from this episode: https://danfaggella.com/leahy1 ... There are four main questions we cover in this AGI Governance series...
This is an interview with Andrea Miotti, the Founder and Executive Director of ControlAI. This is the fourth installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence. Watch this episode on The Trajectory YouTube channel: https://youtu.be/LNUl0_v7wzE See the full article from this episode: https://danfaggella.com/miotti1 ... There are four main questions we cover in this AGI Gove...
This is an interview with Stephen Ibaraki, the Founder of the ITU's (part of the United Nations) AI for Good initiative, and Chairman of REDDS Capital. This is the third installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence. This episode referred to the following other essays and resources: -- The International Governance of AI: https://emerj.com/international-governance-ai/ -- AI ...
This is an interview with Mike Brown, Partner at Shield Capital and the Former Director of the Defense Innovation Unit at the U.S. Department of Defense. This is the second installment of our "AGI Governance" series - where we explore how important AGI governance is, what it should achieve, and how it should be implemented. Watch this episode on The Trajectory YouTube channel: https://youtu.be/yUA4voA97kE This episode referred to the following other essays and resources: -- The Inter...
This is an interview with Sebastien Krier, who works in Policy Development and Strategy at Google DeepMind. This is the first installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence. Watch this episode on The Trajectory YouTube channel: https://youtu.be/SKl7kcZt57A This episode referred to the following other essays and resources: -- The International Governance of AI: https://emerj.com/inter...
This new installment of the Worthy Successor series features Ed Boyden, an American neuroscientist and entrepreneur at MIT, widely known for his work on optogenetics and brain simulation - his breakthroughs have helped shape the frontier of neurotechnology. In this episode, we explore Ed’s vision for what kinds of posthuman intelligences deserve to inherit the future. His deep commitment to “ground truth” - the idea that intelligence must be built from and validated against reality, not just...
This new installment of the Worthy Successor series is an interview with the brilliant Martin Rees - British cosmologist, astrophysicist, and 60th President of the Royal Society. In this interview we explore his belief that humanity is just a stepping stone between Darwinian life and a new form of intelligent design - not divinely ordained, but constructed by artificial minds building successors of their own. For Martin, the true tragedy would not be losing our species, but squandering the ...
This is an interview with Emmett Shear - CEO of SoftMax, co-founder of Twitch, former interim CEO of OpenAI, and one of the few public-facing tech leaders who seems to take both AGI development and AGI alignment seriously. In this episode, we explore Emmett’s vision of AGI as a kind of living system, not unlike a new kind of cell, joining the tissue of intelligent life. We talk through the limits of our moral vocabulary, the obligations we might owe to future digital minds, and the uncomfort...
This is an interview with Peter Singer, one of the most influential moral philosophers of our time. Singer is best known for his groundbreaking work on animal rights, global poverty, and utilitarian ethics, and his ideas have shaped countless conversations about the moral obligations of individuals, governments, and societies. This interview is our tenth installment in The Trajectory’s second series Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to ...
This is an interview with Kristian Rönn, author, successful startup founder, and now CEO of Lucid, an AI hardware governance startup based in SF. This is an additional installment of our "Worthy Successor" series - where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. This episode referred to the following other essays and resources: -- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy -- Kristian's "Darwinian Trap" bo...
This is an interview with Richard Ngo, AGI researcher and thinker - with extensive stints at both OpenAI and DeepMind. This is an additional installment of our "Worthy Successor" series - where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. This episode referred to the following other essays and resources: -- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy -- Richard's exploratory fiction writing - http://narrativear...