The Intelligence Horizon


Author: The Intelligence Horizon


Description

Interviewing the AI experts at the frontier of emerging tech

3 Episodes
Zoë Hitzig (Research Scientist at OpenAI and Junior Fellow at the Harvard Society of Fellows) recently made headlines with her New York Times piece criticizing OpenAI's corporate incentive structure and raising broader questions about how AI companies should be governed. Before that piece reached the mainstream, we sat down with her to explore where incentives can diverge between companies and users, and why building strong governance processes is essential for keeping frontier AI aligned with societal values. From there, we dive into what real-world data reveals about how people actually use ChatGPT and what it suggests about work, welfare, and the evolving economy. We discuss why access matters as these models scale, the surprisingly large share of everyday "how-to" and medical questions, and the limits of policy responses like UBI in addressing AI-driven change.

Follow the rest of The Intelligence Horizon!
Instagram: @theintelligencehorizon
TikTok: @theintelligencehorizon
Spotify: The Intelligence Horizon
LinkedIn: The Intelligence Horizon
Email: theintelligencehorizon@gmail.com

Co-hosts: Owen Zhang and Will Sanok Dufallo
Production Lead: Kaitlyn Smith
Social Media Heads: Chloe Park and Yasmin Rodriguez Rascon
Thomas Woodside (Co-Founder & Senior Policy Advisor at the Secure AI Project, and a lead advocate for California's SB 53) joins The Intelligence Horizon Podcast to break down what SB 53 actually does and what it signals about where AI regulation is headed. We cover the bill's core requirements, the logic behind them, and how they aim to reduce catastrophic risk from frontier models. We also situate SB 53 in the broader policy landscape, compare it to SB 1047, and discuss what Thomas thinks the next concrete governance steps should be as AI capabilities continue to scale.

Follow the rest of The Intelligence Horizon!
Instagram: @theintelligencehorizon
TikTok: @theintelligencehorizon
YouTube: @TheIntelligenceHorizon
Spotify: The Intelligence Horizon
LinkedIn: The Intelligence Horizon
Email: theintelligencehorizon@gmail.com

Co-hosts: Owen Zhang and Will Sanok Dufallo
Production Lead: Kaitlyn Smith
Social Media Heads: Hailey Love, Chloe Park, and Yasmin Rodriguez Rascon
In this episode, Thomas Larsen of the AI Futures Project joins us to dissect the public's reaction to the widely influential paper "AI 2027," which he co-authored, and makes the case that superintelligent AI is highly likely within our lifetimes — and plausibly imminent in the next few years. Thomas also lays out why he's pessimistic that risks from misaligned and misused AI will be handled in time. This was a fascinating and thought-provoking discussion on the challenges ahead in AI security.

Check out "AI 2027" here: https://ai-2027.com
Learn more about the AI Futures Project here: https://ai-futures.org

Follow the rest of The Intelligence Horizon!
Instagram: @theintelligencehorizon
TikTok: @theintelligencehorizon
Spotify: The Intelligence Horizon
LinkedIn: The Intelligence Horizon
Feel free to also reach out at theintelligencehorizon@gmail.com

Co-hosts: Owen Zhang and Will Sanok Dufallo
Video Producer: Kaitlyn Smith
Social Media Manager: Nancy Javkhlan