Your Undivided Attention

Author: Tristan Harris and Aza Raskin, The Center for Humane Technology

Subscribed: 6,933 · Played: 189,431

Description

In our podcast, Your Undivided Attention, co-hosts Tristan Harris, Aza Raskin and Daniel Barcay explore the unprecedented power of emerging technologies: how they fit into our lives, and how they fit into a humane future.

Join us every other Thursday as we confront challenges and explore solutions with a wide range of thought leaders and change-makers — like Audrey Tang on digital democracy, neurotechnology with Nita Farahany, getting beyond dystopia with Yuval Noah Harari, and Esther Perel on Artificial Intimacy: the other AI.

Your Undivided Attention is produced by Executive Editor Sasha Fegan and Senior Producer Julia Scott. Our Researcher/Producer is Joshua Lash. We are a top tech podcast worldwide with more than 20 million downloads and a member of the TED Audio Collective.
120 Episodes
It’s a confusing moment in AI. Depending on who you ask, we’re either on the fast track to AI that’s smarter than most humans, or the technology is about to hit a wall. Gary Marcus is in the latter camp. He’s a cognitive psychologist and computer scientist who built his own successful AI start-up. But he’s also been called AI’s loudest critic.

On Your Undivided Attention this week, Gary sits down with CHT Executive Director Daniel Barcay to defend his skepticism of generative AI and to discuss what we need to do as a society to get the rollout of this technology right… which is the focus of his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us.

The bottom line: no matter how quickly AI progresses, Gary argues that our society is woefully unprepared for the risks that will come from the AI we already have.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
Taming Silicon Valley: How We Can Ensure That AI Works for Us: link to Gary’s book
Further reading on the deepfake of the CEO of India’s National Stock Exchange
Further reading on the deepfake of an explosion near the Pentagon
The study Gary cited on AI and false memories
Footage from Gary and Sam Altman’s Senate testimony

RECOMMENDED YUA EPISODES
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet
No One is Immune to AI Harms with Dr. Joy Buolamwini

Correction: Gary mistakenly listed the reliability of GPS systems as 98%. The federal government’s standard for GPS reliability is 95%.
AI is moving fast. And as companies race to roll out newer, more capable models, with little regard for safety, the downstream risks of those models become harder and harder to counter. On this week’s episode of Your Undivided Attention, CHT’s policy director Casey Mock comes on the show to discuss a new legal framework to incentivize better AI, one that holds AI companies liable for the harms of their products.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The CHT Framework for Incentivizing Responsible AI Development
Further Reading on Air Canada’s Chatbot Fiasco
Further Reading on the Elon Musk Deep Fake Scams
The Full Text of SB1047, California’s AI Regulation Bill
Further reading on SB1047

RECOMMENDED YUA EPISODES
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Can We Govern AI? with Marietje Schaake
A First Step Toward AI Regulation with Tom Wheeler

Correction: Casey incorrectly stated the year that the US banned child labor as 1937. It was banned in 1938.
[This episode originally aired on August 17, 2023] For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns that there’s another harmful “AI” on the rise — Artificial Intimacy — one that is depriving us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds.

RECOMMENDED MEDIA
Mating in Captivity by Esther Perel: Esther’s debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire
The State of Affairs by Esther Perel: Esther takes a look at modern relationships through the lens of infidelity
Where Should We Begin? with Esther Perel: listen in as real couples in search of help bare the raw and profound details of their stories
How’s Work? with Esther Perel: Esther’s podcast that focuses on the hard conversations we’re afraid to have at work
Lars and the Real Girl (2007): a young man strikes up an unconventional relationship with a doll he finds on the internet
Her (2013): in a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need

RECOMMENDED YUA EPISODES
Big Food, Big Tech and Big AI with Michael Moss
The AI Dilemma
The Three Rules of Humane Tech
Digital Democracy is Within Reach with Audrey Tang

Correction: Esther refers to the 2007 film Lars and the Real Doll. The title of the film is Lars and the Real Girl.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Today, the tech industry is the second-biggest lobbying power in Washington, DC, but that wasn’t true as recently as ten years ago. How did we get to this moment? And where could we be going next? On this episode of Your Undivided Attention, Tristan and Daniel sit down with historian Margaret O’Mara and journalist Brody Mullins to discuss how Silicon Valley has changed the nature of American lobbying.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The Wolves of K Street: The Secret History of How Big Money Took Over Big Government: Brody’s book on the history of lobbying
The Code: Silicon Valley and the Remaking of America: Margaret’s book on the historical relationship between Silicon Valley and Capitol Hill
More information on the Google antitrust ruling
More information on KOSPA
More information on the SOPA/PIPA internet blackout
Detailed breakdown of internet lobbying from OpenSecrets

RECOMMENDED YUA EPISODES
U.S. Senators Grilled Social Media CEOs. Will Anything Change?
Can We Govern AI? with Marietje Schaake
The Race to Cooperation with David Sloan Wilson

Correction: Brody Mullins refers to AT&T as having a “hundred million dollar” lobbying budget in 2006 and 2007. While we couldn’t verify the size of their budget for lobbying, their actual lobbying spend was much less than this: $27.4m in 2006 and $16.5m in 2007, according to OpenSecrets.

The views expressed by guests appearing on Center for Humane Technology’s podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.
It’s been a year and a half since Tristan and Aza laid out their vision and concerns for the future of artificial intelligence in The AI Dilemma. In this Spotlight episode, the guys discuss what’s happened since then, as funding, research, and public interest in AI have exploded, and where we could be headed next. Plus, some major updates on social media reform, including the passage of the Kids Online Safety and Privacy Act in the Senate.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The AI Dilemma: Tristan and Aza’s talk on the catastrophic risks posed by AI
Info Sheet on KOSPA: more information on KOSPA from FairPlay
Situational Awareness by Leopold Aschenbrenner: a widely cited blog from a former OpenAI employee, predicting the rapid arrival of AGI
AI for Good: more information on the AI for Good summit that was held earlier this year in Geneva
Using AlphaFold in the Fight Against Plastic Pollution: more information on Google’s use of AlphaFold to create an enzyme to break down plastics
Swiss Call For Trust and Transparency in AI: more information on the initiatives mentioned by Katharina Frey

RECOMMENDED YUA EPISODES
War is a Laboratory for AI with Paul Scharre
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Can We Govern AI? with Marietje Schaake
The Three Rules of Humane Tech
The AI Dilemma

Clarification: Swiss diplomat Nina Frey’s full name is Katharina Frey.

The views expressed by guests appearing on Center for Humane Technology’s podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.
AI has been a powerful accelerant for biological research, rapidly opening up new frontiers in medicine and public health. But that progress can also make it easier for bad actors to manufacture new biological threats. In this episode, Tristan and Daniel sit down with biologist Kevin Esvelt to discuss why AI has been such a boon for biologists and how we can safeguard society against the threats that AIxBio poses.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
Sculpting Evolution: information on Esvelt’s lab at MIT
SecureDNA: Esvelt’s free platform to provide safeguards for DNA synthesis
The Framework for Nucleic Acid Synthesis Screening: the Biden admin’s suggested guidelines for DNA synthesis regulation
Senate Hearing on Regulating AI Technology: C-SPAN footage of Dario Amodei’s testimony to Congress
The AlphaFold Protein Structure Database

RECOMMENDED YUA EPISODES
U.S. Senators Grilled Social Media CEOs. Will Anything Change?
Big Food, Big Tech and Big AI with Michael Moss
The AI Dilemma

Clarification: President Biden’s executive order only applies to labs that receive funding from the federal government, not state governments.
Will AI ever start to think by itself? If it did, how would we know, and what would it mean?

In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
Frankenstein by Mary Shelley: a free, plain-text version of Shelley’s classic of gothic literature
OpenAI’s GPT4o Demo: a video from OpenAI demonstrating GPT4o’s remarkable ability to mimic human sentience
You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills: the NYT op-ed from last year by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma
What Is It Like to Be a Bat?: Thomas Nagel’s essay on the nature of consciousness
Are You Living in a Computer Simulation?: philosopher Nick Bostrom’s essay on the simulation hypothesis
Anthropic’s Golden Gate Claude: a blog post about Anthropic’s recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability

RECOMMENDED YUA EPISODES
Esther Perel on Artificial Intimacy
Talking With Animals... Using AI
Synthetic Humanity: AI & What’s At Stake
Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence: Petra’s newly published book on the rollout of high-risk tech at the border
Bots at the Gate: a report co-authored by Petra about Canada’s use of AI technology in their immigration process
Technological Testing Grounds: a report authored by Petra about the use of experimental technology in EU border enforcement
Startup Pitched Tasing Migrants from Drones, Video Reveals: an article from The Intercept, containing the demo for Brinc’s taser drone pilot program
The UNHCR: information about the global refugee crisis from the UN

RECOMMENDED YUA EPISODES
War is a Laboratory for AI with Paul Scharre
No One is Immune to AI Harms with Dr. Joy Buolamwini
Can We Govern AI? With Marietje Schaake

Clarification: The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.
This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

RECOMMENDED MEDIA
The Right to Warn Open Letter
My Perspective On “A Right to Warn about Advanced Artificial Intelligence”: a follow-up from William about the letter
Leaked OpenAI documents reveal aggressive tactics toward former employees: an investigation by Vox into OpenAI’s policy of non-disparagement

RECOMMENDED YUA EPISODES
A First Step Toward AI Regulation with Tom Wheeler
Spotlight on AI: What Would It Take For This to Go Well?
Big Food, Big Tech and Big AI with Michael Moss
Can We Govern AI? With Marietje Schaake

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path?

In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

RECOMMENDED MEDIA
Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023
Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare
The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine
The night the world almost ended: a BBC documentary about Stanislav Petrov’s decision not to start nuclear war
AlphaDogfight Trials Final Event: the full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0
‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza: an investigation into the use of AI targeting systems by the IDF

RECOMMENDED YUA EPISODES
The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
Can We Govern AI? with Marietje Schaake
Big Food, Big Tech and Big AI with Michael Moss
The Invisible Cyber-War with Nicole Perlroth

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

RECOMMENDED MEDIA
Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world
Can we Have Pro-Worker AI?: Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path
Rethinking Capitalism: In Conversation with Daron Acemoglu: the Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

RECOMMENDED YUA EPISODES
The Three Rules of Humane Tech
The Tech We Need for 21st Century Democracy
Can We Govern AI?
An Alternative to Silicon Valley Unicorns

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Suicides. Self-harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

This episode was recorded live at the San Francisco Commonwealth Club.

Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. The actual number is 42 Attorneys General who are taking legal action against Meta.

Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.
Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy.

Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors, but it won’t ship until later this year.

RECOMMENDED MEDIA
Chip War: The Fight For the World’s Most Critical Technology by Chris Miller: to make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips
Gordon Moore Biography & Facts: Gordon Moore, the Intel co-founder behind Moore’s Law, passed away in March of 2023
AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster: Nvidia’s GPUs are in high demand, and the company is using AI to accelerate chip production

RECOMMENDED YUA EPISODES
Future-proofing Democracy In the Age of AI with Audrey Tang
How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller
The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
Protecting Our Freedom of Thought with Nita Farahany

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more.

RECOMMENDED MEDIA
Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page: this academic paper addresses tough questions for Americans: Who governs? Who really rules?
Recursive Public: an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance
A Strong Democracy is a Digital Democracy: Audrey Tang’s 2019 op-ed for The New York Times
The Frontiers of Digital Democracy: Nathan Gardels interviews Audrey Tang in Noema

RECOMMENDED YUA EPISODES
Digital Democracy is Within Reach with Audrey Tang
The Tech We Need for 21st Century Democracy with Divya Siddarth
How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller
The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing and offers a look ahead. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap’s most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

RECOMMENDED MEDIA
Get Media Savvy: founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families
The Power of One by Frances Haugen: the inside story of Frances’s quest to bring transparency and accountability to Big Tech

RECOMMENDED YUA EPISODES
Real Social Media Solutions, Now with Frances Haugen
A Conversation with Facebook Whistleblower Frances Haugen
Are the Kids Alright?
Social Media Victims Lawyer Up with Laura Marquez-Garrett

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

Correction: Laurie refers to the app ‘Clothes Off.’ It’s actually named Clothoff. There are many clothes-remover apps in this category.

RECOMMENDED MEDIA
Revenge Porn: The Cyberwar Against Women: in a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn
The Cult of the Constitution: in this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism
Fake Explicit Taylor Swift Images Swamp Social Media: calls to protect women and crack down on the platforms and technology that spread such images have been reignited

RECOMMENDED YUA EPISODES
No One is Immune to AI Harms
Esther Perel on Artificial Intimacy
Social Media Victims Lawyer Up
The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like “The Sorcerer’s Apprentice” or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

Correction: Josh says the first telling of “The Sorcerer’s Apprentice” myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

RECOMMENDED MEDIA
The Emerald podcast: The Emerald explores the human experience through a vibrant lens of myth, story, and imagination
Embodied Ethics in The Age of AI: a five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn
Nature Nurture: Children Can Become Stewards of Our Delicate Planet: a U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals
The New Fire: AI is revolutionizing the world - here’s how democracies can come out on top. This upcoming book was authored by an architect of President Biden’s AI executive order

RECOMMENDED YUA EPISODES
How Will AI Affect the 2024 Elections?
The AI Dilemma
The Three Rules of Humane Tech
AI Myths and Misconceptions

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
2024 will be the biggest election year in world history. Forty countries will hold national elections, with over two billion voters heading to the polls. In this episode of Your Undivided Attention, two experts give us a situation report on how AI will increase the risks to our elections and our democracies.

Correction: Tristan says two billion people from 70 countries will be undergoing democratic elections in 2024. The number expands to 70 when non-national elections are factored in.

RECOMMENDED MEDIA
White House AI Executive Order Takes On Complexity of Content Integrity Issues: Renee DiResta’s piece in Tech Policy Press about content integrity within President Biden’s AI executive order
The Stanford Internet Observatory: a cross-disciplinary program of research, teaching and policy engagement for the study of abuse in current information technologies, with a focus on social media
Demos: Britain’s leading cross-party think tank
Invisible Rulers: The People Who Turn Lies into Reality by Renee DiResta: pre-order Renee’s upcoming book that’s landing on shelves June 11, 2024

RECOMMENDED YUA EPISODES
The Spin Doctors Are In with Renee DiResta
From Russia with Likes Part 1 with Renee DiResta
From Russia with Likes Part 2 with Renee DiResta
Esther Perel on Artificial Intimacy
The AI Dilemma
A Conversation with Facebook Whistleblower Frances Haugen

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
2023 Ask Us Anything

2023-11-30 · 35:07

You asked, we answered. This has been a big year in the world of tech, with the rapid proliferation of artificial intelligence, acceleration of neurotechnology, and continued ethical missteps of social media. Looking back on 2023, there are still so many questions on our minds, and we know you have a lot of questions too. So we created this episode to respond to listener questions and to reflect on what lies ahead.

Correction: Tristan mentions that 41 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. The actual number is 42 Attorneys General who are taking legal action against Meta.

Correction: Tristan refers to Casey Mock as the Center for Humane Technology’s Chief Policy and Public Affairs Manager. His title is Chief Policy and Public Affairs Officer.

RECOMMENDED MEDIA
Tech Policy Watch: Marietje Schaake curates this briefing on artificial intelligence and technology policy from around the world
The AI Executive Order: President Biden’s executive order on the safe, secure, and trustworthy development and use of AI
Meta sued by 42 AGs for addictive features targeting kids: a bipartisan group of 42 attorneys general is suing Meta, alleging features on Facebook and Instagram are addictive and are aimed at kids and teens

RECOMMENDED YUA EPISODES
The Three Rules of Humane Tech
Two Million Years in Two Hours: A Conversation with Yuval Noah Harari
Inside the First AI Insight Forum in Washington
Digital Democracy is Within Reach with Audrey Tang
The Tech We Need for 21st Century Democracy with Divya Siddarth
Mind the (Perception) Gap with Dan Vallone
The AI Dilemma
Can We Govern AI? with Marietje Schaake
Ask Us Anything: You Asked, We Answered

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
As AI development races forward, a fierce debate has emerged over open source AI models. So what does it mean to open-source AI? Are we opening Pandora’s box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tech?

Correction: When discussing the large language model Bloom, Elizabeth said it functions in 26 different languages. Bloom is actually able to generate text in 46 natural languages and 13 programming languages, and more are in the works.

RECOMMENDED MEDIA
Open-Sourcing Highly Capable Foundation Models: this report, co-authored by Elizabeth Seger, attempts to clarify open-source terminology and to offer a thorough analysis of risks and benefits from open-sourcing AI
BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B: this paper, co-authored by Jeffrey Ladish, demonstrates that it’s possible to effectively undo the safety fine-tuning from Llama 2-Chat 13B with less than $200 while retaining its general capabilities
Centre for the Governance of AI: supports governments, technology companies, and other key institutions by producing relevant research and guidance around how to respond to the challenges posed by AI
AI: Futures and Responsibility (AI:FAR): aims to shape the long-term impacts of AI in ways that are safe and beneficial for humanity
Palisade Research: studies the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever

RECOMMENDED YUA EPISODES
A First Step Toward AI Regulation with Tom Wheeler
No One is Immune to AI Harms with Dr. Joy Buolamwini
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Comments (56)



Hamish Lamont

I have to laugh at these Chinese American AI Researchers. "China isn't obsessed with beating America." "China wouldn't steal AI." What BS. And I can't believe for a moment they are that naive. Clearly, their allegiances lie with China. And they can't be trusted.

Oct 23rd

Ed Potter

I hope it isn't lost on people that it is a critical organ for exchanging ideas and informing dissidents in Iran and other authoritarian-controlled countries. That is of value to the Mullahs, Putin, etc.

Nov 19th

GD2021

we're supposed to be a democratic republic, but whatever...we're not that either anymore.

Oct 20th

Maciej Czech

You are so disconnected from reality, it's hard to listen to this patronizing tone :/

Jun 28th

Ed Potter

I'm surprised by Frank Luntz feeling he isn't listened to. Every time there's been a thorny issue or a messaging battle in the last couple of decades, it seems he's been there, understanding the right groups to win over. I hope the messaging debacle, though I understand it, has taught or chastened the Dems enough to employ the strategies of gurus like Drew Westen and Luntz!!

Jun 6th

Gr8 Mutato

I hate to crush commenters' dreams, but neither Tristan Harris nor Aza Raskin reads comments on Castbox!!!

Feb 17th

Kat

I just started listening to this podcast and it seems really interesting! I just have one comment about this episode on gambling addiction, since I kept waiting for them to talk about the root of gambling or any other kind of addiction... this is central to solving the problem, and any psychologist working in the area knows about it, so I was somewhat surprised there was no mention of it. Why do people start gambling in the first place (or other behaviours that end up in addiction)? And I am not talking about playing slots once a year on your birthday or for a bachelor's party... Once people are addicted, it is extremely difficult to stop (once an addict, always an addict!), but prevention is much easier to manage and implement. There are some genetic/hereditary propensities for addiction given the right conditions, but these are not always predictive. The clearest predictor of someone becoming an addict is the quality of the social and emotional relationships in one's life. And my gu

Feb 16th

Grant Hutton

I think you dropped the ball on this one, guys. I couldn't think of one thing McMaster said that China or Russia does that we do not do ourselves, abroad or here at home in America. Just because we're America doesn't make our intent for nefarious things like media control, in our own country and others, any better than China's.

Jan 20th

Daniel Burt

This is Wirt's lost tape from Over the Garden Wall.

Nov 30th

Meykel

Amazing episode, so insightful. This kind of conversation should be had on national news.

Sep 24th

Ed Potter

I am extremely impressed with this podcast. Its presentation was cogent and very well informed. Thank you! What's the plan for having government adopt blockchain as a means to transparency?

Jul 18th

Michael Pemulis

what did you think of this?

Jun 30th

ncooty

She habitually drags out the final word or syllable of each clause, as though she thinks it accentuates her point. Don't inflect EVERYTHING.

Apr 8th

James Weatherby

This podcast changed my life. I've felt 'wrong' about social media for some time, and since disconnecting I've found myself justifying 'why not' to my family and friends, and finding my 'why so?' to be wholly ineffective. Even to myself, it was hard to educate and explain internally. I can now explain myself more clearly. I won't change my family's mind, but I am now more informed (on both sides) and can make more considered decisions. I've shared this podcast with some colleagues and friends who are more open-minded, and already I see a change, and that's what matters. It's about awareness. I don't want to proselytise. Thank you for the passion, accessibility and transparency of a podcast like this. I truly hope we will look back on podcasts like this decades from now and see them as prophetic. I hope... The alternative doesn't bear thinking about.

Apr 4th

ncooty

The snaps get old.

Apr 2nd

ncooty

@18:32: "True for them" is such an intellectually broken phrase that it contributes to the very problem being discussed. The violence of Jan. 6th was fueled by lies conveyed through a misappropriation of English. Muddled language has a reciprocal relationship with muddled thinking. How can we have accountability when words no longer have meaning? This is Trump's own defense, and that of Sidney Powell, and Rudy Giuliani, and Fox, and every depraved Republican attempting to hide their bigotry and malice in a fog of nonsense. Stop contributing to the problem. Start using words as if they have actual meanings.

Apr 2nd

ncooty

Another well-intentioned person holding forth about "truth" because it seems right to her, yet many of her strung-together conjectures are factually wrong. It reminds me of anti-scientific Socratic precepts. So little of what she said is empirically falsifiable, and many of her little factoids are in fact false. It undermines her credibility, and therefore her efficacy in promoting what might be useful approaches.

Mar 31st

ncooty

@8:58: A great point I'm very glad to hear someone make regarding the over-use of military terminology and metaphors.

Mar 31st