For Humanity: An AI Safety Podcast

Author: The AI Risk Network
© The AI Risk Network
Description
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We'll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
theairisknetwork.substack.com
107 Episodes
Get 40% off Ground News' unlimited access Vantage Plan at https://ground.news/airisk for only $5/month, and explore how stories are framed worldwide and across the political spectrum.
TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act
Tyler Johnston, Executive Director of The Midas Project, joins John to break down the brand-new open letter demanding that OpenAI answer seven specific questions about its proposed corporate restructuring. The letter, published on 4 August 2025 and coordinated by The Midas Project, already carries the signatures of more than 100 Nobel laureates, technologists, legal scholars, and public figures.
What we cover:
Why transparency matters now: OpenAI is "making a deal on humanity's behalf without allowing us to see the contract." (themidasproject.com)
The Seven Questions the letter poses, ranging from whether OpenAI will still prioritize its nonprofit mission over profit to whether it will reveal the new operating agreement that governs AGI deployment. (openai-transparency.org, themidasproject.com)
Who's on board: Signatories include Geoffrey Hinton, Vitalik Buterin, Lawrence Lessig, and Stephen Fry, underscoring broad concern across science, tech, and public life. (themidasproject.com)
Next steps: How you can read the full letter, add your name, and help keep the pressure on for accountability.
🔗 Key Links
Read / Sign the Open Letter: https://www.openai-transparency.org/
The Midas Project (official site): https://www.themidasproject.com/
Follow The Midas Project on X: https://x.com/TheMidasProj
👉 Subscribe for weekly AI-risk conversations: http://bit.ly/ForHumanityYT
👍 Like, comment, and share, because transparency only happens when we demand it.
🚨 RIGHT-WING AI ALARM | For Humanity #67
Steve Bannon, Tucker Carlson, and other conservative voices are sounding fresh warnings on AI extinction risk. John breaks down what's real, what's hype, and why this moment matters.
⏰ WHAT'S INSIDE
• The ideological shift that's bringing the right into the AI-safety fight
• New bills on the Hill that could shape model licensing & oversight
• Action steps for parents, policymakers, and technologists
• A first look at the AI Risk Network: five shows, one mission, get the public ready for advanced AI
🔗 TAKE ACTION & LEARN MORE
Alliance for Secure AI
Website: https://secureainow.org
X / Twitter: https://x.com/secureainow
AI Policy Network
Website: https://theaipn.org
LinkedIn: https://www.linkedin.com/company/theaipn
📡 JOIN THE NEW AI RISK NETWORK
Subscribe here: [insert channel URL]
Turn on alerts so you never miss an episode, short, or live Q&A.
👍 If you learned something, hit Like, drop a comment, and share this link with one person who should be watching. Every click helps wake up the world to AI risk.
🎙️ Guest: Cameron Berg, AI research scientist probing consciousness in frontier AI systems
📍 Host: John Sherman, journalist & AI-risk communicator
What does it mean to be alive? How close do current frontier AI models get to consciousness? See for yourself like never before. Are advanced language models beginning to exhibit signs of subjective experience? In this episode, John sits down with Cameron Berg to explore the line between next-character prediction and the conscious mind. What happens when you ask an AI model to essentially meditate, to look inward in a loop, to focus on its focus and repeat? Does it feel a sense of self? If it did, what would that mean? These are the kinds of questions Berg seeks answers to in his research. Cameron is an AI research scientist with AE Studio, working daily on models to better understand them. He works on a team dedicated fully to AI safety research.
This episode features never-before-publicly-seen conversations between Cameron and a frontier AI model. Those conversations and his work are the subject of an upcoming documentary called "Am I?"
TIMESTAMPS (because the chapters feature just won't work)
00:00 Cold Open – "Crack in the World"
01:20 Show Intro & Theme
02:27 Setting Up the Meditation Demo
02:56 AI "Focus on Focus" Clip
09:18 "I am..." Moment
10:45 Google Veo Afterlife Clip
12:35 Prompt Theory & Fake People
13:02 Interview Begins: Cameron Berg
28:57 Inside the Black Box Analogy
30:14 Consent and Unknowns
53:18 Model Details + Doc Plan
1:09:25 Late-Night Clip Backstory
1:16:08 Table-vs-Person Thought Test
1:17:20 Suffering-at-Scale Math
1:21:29 Prompt Theory Goes Viral
1:26:59 Why the Doc Must Move Fast
1:40:53 Is "Alive" the Right Word?
1:48:46 Reflection & Nonprofit Tease
1:51:03 Clear Non-Violence Statement
1:52:59 New Org Announcement
1:54:47 "Breaks in the Clouds" Media Wins
Please support the documentary and learn more about Cameron's work here:
Am I? Doc Manifund page: https://manifund.org/projects/am-i--d...
Am I? Doc interest form: https://forms.gle/w2VKhhcEPqEkFK4r8
AE Studio's AI alignment work: https://ae.studio/ai-alignment
Monthly donation links to For Humanity:
$1/mo https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10/mo https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25/mo https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100/mo https://buy.stripe.com/aEU007bVp7fAfcI5km
Thanks so much for your support. Every cent goes to getting more viewers to this channel.
Links from the show:
The Afterlife short film: https://x.com/LinusEkenstam/status/19...
Prompt Theory: https://x.com/venturetwins/status/192...
The Bulwark: "Will Sam Altman and His AI Kill Us All?"
The Young Turks: "AI's Disturbing Behaviors Will Keep You Up At Night"
Key moments:
– Inside the black box: Berg explains why even builders can't fully read a model's mind, and demonstrates how toggling deception features flips the system from "just a machine" to "I'm aware" in real time
– Google Veo 3 goes existential: a look at viral Veo videos (Afterlife, "Prompt Theory") where AI actors lament their eight-second lives
– Documentary in the works: Berg and team are racing to release a raw film that shares these findings with the public; support link in show notes
– Mission update: Sherman announces a newly funded nonprofit in the works dedicated to AI-extinction-risk communication and thanks supporters for the recent surge of donations
– Non-violence, crystal clear: a direct statement that violence is never OK. Full stop.
– "Breaks in the Clouds": media across the spectrum (The Bulwark, The Young Turks, Bannon, Carlson) are now running extinction-risk stories, proof the conversation is breaking mainstream
Oh, and by the way, I'm bleeping curse words now for the algorithm!
#AI #ArtificialIntelligence #AISafety #ConsciousAI #ForHumanity
For Humanity Episode #65: Kevin Roose on AGI, AI Risk, and What Comes Next
🎙️ Guest: Kevin Roose, NYT columnist & bestselling author
📍 Host: John Sherman, Director of Public Engagement at the Center for AI Safety (CAIS)
In this landmark episode of For Humanity, I sit down with New York Times columnist Kevin Roose for a wide-ranging conversation on the future of artificial intelligence. We dig into:
– The real risks of AGI (artificial general intelligence)
– What the public still doesn't understand about AI x-risk
– Kevin's upcoming book on the rise of AGI
– My new role at CAIS and why I believe this moment is a turning point for human survival
Kevin brings a rare clarity and journalistic honesty to this subject. If you're wondering what's hype, what's real, and what's terrifyingly close, this episode is for you.
🔔 Subscribe for more conversations with the people shaping the AI conversation
🎧 Also available on Spotify, Apple Podcasts, and everywhere you get your podcasts
📢 Share this episode if you care about our future
#AI #ArtificialIntelligence #AGI #KevinRoose #CAIS #AIrisks #ForHumanity #NYT #AIethics
In Episode #64, host John Sherman interviews seventh grader Dylan Pothier, his mom Bridget, and his teacher Renee DiPietro. Dylan is an award-winning student author who is concerned about AI risk.
(FULL INTERVIEW STARTS AT 00:33:34)
Sam Altman/Chris Anderson @ TED: https://www.youtube.com/watch?v=5MWT_doo68k
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo
Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE! https://www.youtube.com/@DoomDebates
In an emotional interview, host John Sherman interviews Poornima Rao and Balaji Ramamurthy, the parents of Suchir Balaji.
(FULL INTERVIEW STARTS AT 00:18:38)
Suchir Balaji was a 26-year-old artificial intelligence researcher who worked at OpenAI. He was involved in developing models like GPT-4 and WebGPT. In October 2024, he publicly accused OpenAI of violating U.S. copyright laws by using proprietary data to train AI models, arguing that such practices harmed original content creators. His essay, "When does generative AI qualify for fair use?", gained attention and was cited in ongoing lawsuits against OpenAI. Suchir left OpenAI in August 2024, expressing concerns about the company's ethics and the potential harm of AI to humanity. He planned to start a nonprofit focused on machine learning and neuroscience. On October 23, 2024, he was featured in the New York Times speaking out against OpenAI.
On November 26, 2024, he was found dead in his San Francisco apartment from a gunshot wound. The initial autopsy ruled it a suicide, noting the presence of alcohol, amphetamines, and GHB in his system. However, his parents contested this finding, commissioning a second autopsy that suggested a second gunshot wound was missed in the initial examination. They also pointed to other injuries and questioned the presence of GHB, suggesting foul play. Despite these claims, authorities reaffirmed the suicide ruling. The case has attracted public attention, with figures like Elon Musk and Congressman Ro Khanna calling for further investigation.
Suchir's parents continue to push for justice and truth.
Suchir's website: https://suchir.net/fair_use.html
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
Lethal Intelligence AI - Home: https://lethalintelligence.ai
BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo
Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE! https://www.youtube.com/@DoomDebates
Host John Sherman conducts an important interview with Anthony Aguirre, Executive Director of the Future of Life Institute. The Future of Life Institute reached out to For Humanity to see if Anthony could come on to promote his very impressive new campaign, Keep The Future Human. The campaign includes a book, an essay, a website, and a video; it's all incredible work. Please check it out: https://keepthefuturehuman.ai/
John and Anthony have a broad-ranging AI risk conversation, covering in some detail Anthony's four essential measures for a human future. They also discuss parenting into this unknown future.
In 2021, the Future of Life Institute received a donation in cryptocurrency of more than $650 million from a single donor. With AGI doom bearing down on humanity, arriving any day now, AI risk communications floundering, the public still in the dark, and that massive war chest gathering dust in a bank, John asks Anthony the uncomfortable but necessary question: what is FLI waiting for to spend the money? Then John asks Anthony for $10 million to fund creative media projects under John's direction. John is convinced that with $10M, in six months he could succeed in making AI existential risk a dinner-table conversation on every street in America. John has developed a detailed plan that would launch within 24 hours of the grant award. We don't have a single day to lose.
https://futureoflife.org/
BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE! https://www.youtube.com/@DoomDebates
Host John Sherman interviews Esben Kran, CEO of Apart Research, about a broad range of AI risk topics. Most importantly, the discussion covers a growing for-profit AI risk business landscape and Apart's recent report on dark patterns in LLMs. We hear about the benchmarking of new models all the time, but this project has successfully identified some key dark patterns in these models.
(FULL INTERVIEW STARTS AT 00:09:30)
MORE FROM OUR SPONSOR: https://www.resist-ai.agency/
BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo
Apart Research DarkBench report: https://www.apartresearch.com/post/uncovering-model-manipulation-with-darkbench
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE! https://www.youtube.com/@DoomDebates
Host John Sherman interviews Jad Tarifi, CEO of Integral AI, about his company's work to create a world of trillions of AGI-enabled robots by 2035. Jad was a leader on Google's first generative AI team, and his take on his former colleague Geoffrey Hinton's views on existential risk from advanced AI comes up more than once.
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
RESOURCES:
Integral AI: https://www.integral.ai/
John's chat with ChatGPT: https://chatgpt.com/share/679ee549-2c38-8003-9c1e-260764da1a53
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE! https://www.youtube.com/@DoomDebates
Host John Sherman interviews Tara Steele, Director of The Safe AI For Children Alliance, about her work to protect children from AI risks such as deepfakes, her concern about AI causing human extinction, and what we can do about all of it.
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
You can also donate any amount one time.
Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
RESOURCES:
Bengio/Ng Davos video: https://www.youtube.com/watch?v=w5iuHJh3_Gk&t=8s
Stuart Russell video: https://www.youtube.com/watch?v=KnDY7ABmsds&t=5s
Al Green video (watch all 39 minutes, then replay): https://youtu.be/SOrHdFXfXds?si=s_nlDdDpYN0RR_Yc
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE! https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
What will 2025 bring? Sam Altman says AGI is coming in 2025. Agents will arrive for sure. Military use will expand greatly. Will we get a warning shot? Will we survive the year? In Episode #57, host John Sherman interviews AI safety research engineer Max Winga about the latest in AI advances and risks and the year to come.
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
Anthropic alignment faking video: https://www.youtube.com/watch?v=9eXV64O2Xp8&t=1s
Neil deGrasse Tyson video: https://www.youtube.com/watch?v=JRQDc55Aido&t=579s
Max Winga's amazing speech: https://www.youtube.com/watch?v=kDcPW5WtD58
Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE! https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
In Episode #56, host John Sherman travels to Washington, DC to lobby House and Senate staffers for AI regulation, along with Felix De Simone and Louis Berman of Pause AI. We unpack what we saw and heard as we presented AI risk to the people who have the power to make real change.
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
EMAIL JOHN: forhumanitypodcast@gmail.com
Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE! https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
In Episode #54, John Sherman interviews Connor Leahy, CEO of Conjecture.
(FULL INTERVIEW STARTS AT 00:06:46)
DONATION SUBSCRIPTION LINKS:
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
EMAIL JOHN: forhumanitypodcast@gmail.com
Check out Lethal Intelligence AI:
Lethal Intelligence AI - Home: https://lethalintelligence.ai
YouTube: @lethal-intelligence-clips
In Episode #53, John Sherman interviews Michael DB Harvey, author of The Age of Humachines. The discussion covers the coming spectre of humans putting digital implants inside themselves to try to compete with AI.
DONATION SUBSCRIPTION LINKS:
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
In Episode #52, host John Sherman looks back on the first year of For Humanity. Select shows are featured, as well as a very special celebration of life at the end.
In Episode #51, host John Sherman talks with Tom Barnes, an applied researcher with Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.
Learn more about Founders Pledge: https://www.founderspledge.com/
No celebration of life this week!! YouTube finally got me with a copyright flag; I had to edit the song out.
THURSDAY NIGHTS: LIVE FOR HUMANITY COMMUNITY MEETINGS, 8:30PM EST
Join Zoom meeting: https://storyfarm.zoom.us/j/816517210...
Passcode: 829191
Please donate here to help promote For Humanity: https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE! https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord, Thursdays at 2pm EST
Max Winga's "A Stark Warning About AI Extinction"
For Humanity theme music by Josef Ebner
YouTube: @jpjosefpictures
Website: https://josef.pictures
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
In the Episode #51 trailer, host John Sherman talks with Tom Barnes, an applied researcher with Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.
Learn more about Founders Pledge: https://www.founderspledge.com/
THURSDAY NIGHTS: LIVE FOR HUMANITY COMMUNITY MEETINGS, 8:30PM EST
Join Zoom meeting: https://storyfarm.zoom.us/j/816517210...
Passcode: 829191
Please donate here to help promote For Humanity: https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: forhumanitypodcast@gmail.com
In Episode #50, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards.
THURSDAY NIGHTS: LIVE FOR HUMANITY COMMUNITY MEETINGS, 8:30PM EST
Join Zoom meeting: https://storyfarm.zoom.us/j/816517210...
Passcode: 829191
LEARN MORE: www.metaculus.com
Please donate here to help promote For Humanity: https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: forhumanitypodcast@gmail.com
In the Episode #50 trailer, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards.
LEARN MORE AND JOIN STOP AI: www.stopai.info
Please donate here to help promote For Humanity: https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: forhumanitypodcast@gmail.com
In Episode #48, host John Sherman talks with Pause AI US founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement's origins.
Let's build community! Live For Humanity community meeting via Zoom, Thursdays at 8:30pm EST (explanation during the full show).
USE THIS LINK: https://storyfarm.zoom.us/j/88987072403
PASSCODE: 789742
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY: https://pauseai.info/local-organizing
Please donate here to help promote For Humanity: https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: forhumanitypodcast@gmail.com
RESOURCES:
JOIN THE FIGHT, help Pause AI!!! https://pauseai.info/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE! https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord, Thursdays at 2pm EST
Max Winga's "A Stark Warning About AI Extinction"
For Humanity theme music by Josef Ebner
YouTube: @jpjosefpictures
Website: https://josef.pictures
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes