For Humanity: An AI Safety Podcast
Author: John Sherman
© John Sherman
Description
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
104 Episodes
In Episode #53, John Sherman interviews Michael DB Harvey, author of The Age of Humachines. The discussion covers the coming spectre of humans putting digital implants inside themselves to try to compete with AI.
DONATION SUBSCRIPTION LINKS:
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y...
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIgg...
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfc...
In Episode #52, host John Sherman looks back on the first year of For Humanity. Select shows are featured, as well as a very special celebration of life at the end.
In Episode #51, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.
Learn More About Founders Pledge:
https://www.founderspledge.com/
No celebration of life this week!! Youtube finally got me with a copyright flag, had to edit the song out.
THURSDAY NIGHTS--LIVE FOR HUMANITY COMMUNITY MEETINGS--8:30PM EST
Join Zoom Meeting:
https://storyfarm.zoom.us/j/816517210...
Passcode: 829191
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans; no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
****************
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
***********************
In Episode #51 Trailer, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.
In Episode #50, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards.
LEARN MORE–
www.metaculus.com
In Episode #50 TRAILER, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards.
LEARN MORE–AND JOIN STOP AI
www.stopai.info
In Episode #49, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI. Stop AI is a new AI risk protest organization, coming at it with different tactics and goals than Pause AI.
LEARN MORE–AND JOIN STOP AI
www.stopai.info
In Episode #49 TRAILER, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI. Stop AI is a new AI risk protest organization, coming at it with different tactics and goals than Pause AI.
In Episode #48, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement’s origins.
Let's build community! Live For Humanity Community Meeting via Zoom, Thursdays at 8:30pm EST...explanation during the full show! USE THIS LINK: https://storyfarm.zoom.us/j/88987072403 PASSCODE: 789742
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
In Episode #48 Trailer, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement’s origins.
In Episode #47, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit working on technical AI risk challenges. The discussion includes Buck’s thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved, if it ever is? And how would the system that’s supposed to save the world actually work if an AI lab found a model scheming? Check out these links to Buck’s writing on these topics below:
https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful
https://redwoodresearch.substack.com/p/would-catching-your-ais-trying-to
Senate Hearing:
https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives
Harry Mack’s YouTube Channel
https://www.youtube.com/channel/UC59ZRYCHev_IqjUhremZ8Tg
In Episode #47 Trailer, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit working on technical AI risk challenges. The discussion includes Buck’s thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved, if it ever is? And how would the system that’s supposed to save the world actually work if an AI lab found a model scheming? Check out these links to Buck’s writing on these topics below:
https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful
https://redwoodresearch.substack.com/p/would-catching-your-ais-trying-to
Senate Hearing:
https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives
Harry Mack’s YouTube Channel
https://www.youtube.com/channel/UC59ZRYCHev_IqjUhremZ8Tg
In Episode #46, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time but comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI and whatever comes after become humanity’s worthy successor.
More About Daniel Faggella
https://danfaggella.com/
In Episode #46 Trailer, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time but comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI and whatever comes after become humanity’s worthy successor.
In Episode #45, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future. FULL INTERVIEW STARTS AT (00:05:28)
Mike’s book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress
Find Dr. Brooks on Social Media
LinkedIn | X/Twitter | YouTube | TikTok | Instagram | Facebook
https://www.linkedin.com/in/dr-mike-brooks-b1164120
https://x.com/drmikebrooks
https://www.youtube.com/@connectwithdrmikebrooks
https://www.tiktok.com/@connectwithdrmikebrooks?lang=en
https://www.instagram.com/drmikebrooks/?hl=en
Chris Gerrby’s Twitter: https://x.com/ChrisGerrby
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
In Episode #45 TRAILER, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future.
Mike’s book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress
In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p(doom); Liron has a nuanced 50%. John starts out at 75%, independent of their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!
RESOURCES:
BUY ROMAN’S NEW BOOK ON AMAZON
https://a.co/d/fPG6lOB
In Episode #43, host John Sherman talks with DevOps engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for a good AI future.
In Episode #44 TRAILER, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p(doom); Liron has a nuanced 50%. John starts out at 75%, independent of their numbers. Where are you? Did Roman or Liron move you in their direction at all? Watch the full episode and let us know in the comments.
In Episode #43 TRAILER, host John Sherman talks with DevOps engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for a good AI future.