Google DeepMind: The Podcast
Author: Hannah Fry
© Google DeepMind Technologies Limited 2022
Description
What is generative AI? How do you create safe and capable models? Is AI overhyped? Join mathematician and broadcaster Professor Hannah Fry as she answers these questions and more in the highly praised, award-winning podcast from Google DeepMind.
In this series, Hannah goes behind the scenes of the world-leading research lab to uncover the extraordinary ways AI is transforming our world. No hype. No spin. Just compelling discussions and grand scientific ambition.
32 Episodes
In our final episode of the year, we explore Project Astra, a research prototype of a universal AI assistant that can understand the world around you. Host Hannah Fry is joined by Greg Wayne, Director in Research at Google DeepMind. They discuss the inspiration behind the research prototype, its current strengths and limitations, and potential future use cases. Hannah even gets the chance to put Project Astra's multilingual skills to the test.

Further reading/listening:
Gemini 2.0
Project Astra
Decoding Google Gemini with Jeff Dean
Gaming, Goats & General Intelligence with Frederic Besse

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Bernardo Resende
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
Please like and subscribe on your preferred podcast platform. Want to share feedback, or have a suggestion for a guest we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
In this episode, Hannah is joined by Oriol Vinyals, VP of Drastic Research and Gemini co-lead. They discuss the evolution of agents from single-task models to more general-purpose models capable of broader applications, like Gemini. Vinyals guides Hannah through the two-step process behind multimodal models: pre-training (imitation learning) and post-training (reinforcement learning). They discuss the complexities of scaling and the importance of innovation in architecture and training processes, and close with a quick whirlwind tour of some of the new agentic capabilities recently released by Google DeepMind.

Note: To see all of the full-length demos, including unedited versions, and other videos related to Gemini 2.0, head to YouTube.

Further reading/watching:
Gemini 2.0
Decoding Google Gemini with Jeff Dean
Gaming, Goats & General Intelligence with Frederic Besse

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Bernardo Resende
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind

Subscribe to our YouTube channel
Find us on X
Follow us on Instagram
Add us on LinkedIn
There is broad consensus across the tech industry, governments, and society that, as artificial intelligence becomes more embedded in every aspect of our world, regulation will be essential. But what does this look like? Can it be adopted without stifling innovation? Are the current frameworks presented by government leaders headed in the right direction?

Join host Hannah Fry as she discusses these questions and more with Nicklas Lundblad, Director of Public Policy at Google DeepMind. Nicklas emphasises the importance of a nuanced approach to regulation, focusing on adaptability and evidence-based policymaking. He highlights the complexities of assessing risk and reward in emerging technologies, advocating for a focus on harm reduction.

Further reading/watching:
AI Principles: https://ai.google/responsibility/principles/
Frontier Model Forum: https://blog.google/outreach-initiatives/public-policy/google-microsoft-openai-anthropic-frontier-model-forum/
Ethics of AI assistants with Iason Gabriel: https://youtu.be/aaZc-as-soA?si=0ThbYY30FlO31kKQ

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Bernardo Resende
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
NotebookLM is a research assistant powered by Gemini that draws on expertise in storytelling to present information in an engaging way. It allows users to upload their own documents and generate insights, explanations, and, more recently, podcasts. This feature, also known as Audio Overviews, has captured the imagination of millions of people worldwide, who have created thousands of engaging podcasts ranging from personal narratives to educational explainers, using source materials like CVs, personal journals, sales decks, and more.

Join Raiza Martin and Steven Johnson from Google Labs, Google's testing ground for products, as they guide host Hannah Fry through the technical advancements that have made NotebookLM possible. In this episode they explore what it means to be interesting, the challenges of generating natural-sounding speech, and exciting new modalities on the horizon.

Further reading:
Try NotebookLM here
Read about the speech generation technology behind Audio Overviews: https://deepmind.google/discover/blog/pushing-the-frontiers-of-audio-generation/

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Daniel Lazard
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Alex Baro Cayetano, Daniel Lazard
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
Join Professor Hannah Fry at the AI for Science Forum for a fascinating conversation with Google DeepMind CEO Demis Hassabis. They explore how AI is revolutionizing scientific discovery, delving into topics like the nuclear pore complex, plastic-eating enzymes, quantum computing, and the surprising power of Turing machines. The episode also features a special 'ask me anything' session with Nobel Laureates Sir Paul Nurse, Jennifer Doudna, and John Jumper, who answer audience questions about the future of AI in science.Watch the episode here, and catch up on all of the sessions from the AI for Science Forum here.
Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants, and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.

In this episode, host Hannah Fry and Google DeepMind Senior Staff Research Scientist Iason Gabriel discuss the ethical implications of advanced AI assistants. Drawing from Iason's recent paper, they examine value alignment, anthropomorphism, safety concerns, and the potential societal impact of these technologies.

Timecodes:
00:00 Intro
01:13 Definition of AI assistants
04:05 A utopic view
06:25 Iason's background
07:45 The Ethics of Advanced AI Assistants paper
13:06 Anthropomorphism
14:07 Turing perspective
15:25 Anthropomorphism continued
20:02 The value alignment question
24:54 Deception
27:07 Deployed at scale
28:32 Agentic inequality
31:02 Unfair outcomes
34:10 Coordinated systems
37:10 A new paradigm
38:23 Tetradic value alignment
41:10 The future
42:41 Reflections from Hannah

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Daniel Lazard
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Production support: Mo Dawoud
Commissioned by Google DeepMind
How human should an AI tutor be? What does 'good' teaching look like? Will AI lead in the classroom, or take a back seat to human instruction? Will everyone have their own personalized AI tutor? Join research lead Irina Jurenka and Professor Hannah Fry as they explore the complicated yet exciting world of AI in education.

Further reading:
Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Daniel Lazard
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Production support: Mo Dawoud
Commissioned by Google DeepMind
In this episode, Professor Hannah Fry sits down with Pushmeet Kohli, VP of Research at Google DeepMind, to discuss AI's impact on scientific discovery. They go on a whirlwind tour of scientific projects, touching on recent breakthroughs in AlphaFold, materials science, weather forecasting, and mathematics to better understand how AI can enhance our scientific understanding of the world.

Further reading:
Millions of new materials discovered with deep learning
GraphCast: AI model for faster and more accurate global weather forecasting
AlphaFold: A breakthrough unfolds (S2, E1)
AlphaGeometry: An Olympiad-level AI system for geometry
AI achieves silver-medal standard solving International Mathematical Olympiad problems

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Production support: Mo Dawoud
Commissioned by Google DeepMind
Games are a very good training ground for agents. Think about it: perfectly packaged, neatly constrained environments where agents can run wild, work out the rules for themselves, and learn how to handle autonomy. In this episode, Research Engineering Team Lead Frederic Besse joins Hannah to discuss important research like SIMA (Scalable Instructable Multiworld Agent) and what we can expect from future agents that can understand and safely carry out a wide range of tasks, online and in the real world.

Further reading:
SIMA
RT-X & RT-2
Interactive Agents

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Production support: Mo Dawoud
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
Professor Hannah Fry is joined by Jeff Dean, one of the most legendary figures in computer science and chief scientist of Google DeepMind and Google Research. Jeff was instrumental to the field in the late 1990s, writing the code that transformed Google from a small startup into the multinational company it is today. Hannah and Jeff discuss it all, from the early days of Google and neural networks to the long-term potential of multimodal models like Gemini.

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Production support: Mo Dawoud
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.

For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Production support: Mo Dawoud
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
Professor Hannah Fry is joined by Google DeepMind's senior research director Douglas Eck to explore AI's capacity for true creativity. They delve into the complexities of defining creativity, the challenges of AI-generated content and attribution, and whether AI can help us to connect with each other in new and meaningful ways.

Want to watch the full episode? Subscribe to Google DeepMind's YouTube page and stay tuned for new episodes.

Further reading:
Veo
Imagen
SynthID
An update on web publisher controls (Google-Extended)

Social channels to follow for new content:
Instagram
X
LinkedIn

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Series Editor: Rami Tzabar, TellTale Studios
Commissioner and Producer: Emma Yousif
Production support: Mo Dawoud
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Darren Carikas
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
It has been a few years since Google DeepMind CEO and co-founder Demis Hassabis and Professor Hannah Fry caught up. In that time, the world has caught on to artificial intelligence in a big way. Listen as they discuss the recent explosion of interest in AI, what Demis means when he describes chatbots as 'unreasonably effective', and the unexpected emergence of capabilities like conceptual understanding and abstraction in recent generative models.

Demis and Hannah also explore the need for rigorous AI safety measures, the importance of responsible AI development, and what he hopes for as we move closer towards artificial general intelligence.

Want to watch the full episode? Subscribe to Google DeepMind's YouTube page and stay tuned for new episodes.

Further reading:
Gemini
Project Astra
Google I/O 2024
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
LaMDA: our breakthrough conversation technology

Social channels to follow for new content:
Instagram
X
LinkedIn

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Series Editor: Rami Tzabar, TellTale Studios
Commissioner and Producer: Emma Yousif
Production support: Mo Dawoud
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Darren Carikas
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
Hannah wraps up the series by meeting DeepMind co-founder and CEO Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he's so optimistic that AI can help solve many of the world's major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI and discloses Hawking's parting message.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewee: DeepMind co-founder and CEO, Demis Hassabis

Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!

Further reading:
DeepMind, The Podcast: https://deepmind.com/blog/article/welcome-to-the-deepmind-podcast
DeepMind's Demis Hassabis on its breakthrough scientific discoveries, WIRED: https://www.youtube.com/watch?v=2WRow9FqUbw
Riemann hypothesis, Wikipedia: https://en.wikipedia.org/wiki/Riemann_hypothesis
Using AI to accelerate scientific discovery by Demis Hassabis, Kendrew Lecture 2021: https://www.youtube.com/watch?v=sm-VkgVX-2o
Protein Folding & the Next Technological Revolution by Demis Hassabis, Bloomberg: https://www.youtube.com/watch?v=vhd4ENh5ON4
The Algorithm, MIT Technology Review: https://forms.technologyreview.com/newsletters/ai-the-algorithm/
Machine learning resources, The Royal Society: https://royalsociety.org/topics-policy/education-skills/teacher-resources-and-opportunities/resources-for-teachers/resources-machine-learning/
How to get empowered, not overpowered, by AI, TED: https://www.youtube.com/watch?v=2LRwvU6gEbA
AI needs to benefit everyone, not just those who build it. But fulfilling this promise requires careful thought before new technologies are built and released into the world. In this episode, Hannah delves into some of the most pressing and difficult ethical and social questions surrounding AI today. She explores complex issues like racial and gender bias and the misuse of AI technologies, and hears why diversity and representation are vital for building technology that works for all.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind's Sasha Brown, William Isaac, Shakir Mohamed, Kevin Mckee & Obum Ekeke

Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!

Further reading:
What a machine learning tool that turns Obama white can (and can't) tell us about AI bias, The Verge: https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias
Tuskegee Syphilis Study, Wikipedia: https://en.wikipedia.org/wiki/Tuskegee_Syphilis_Study
Ethics & Society, DeepMind: https://deepmind.com/about/ethics-and-society
Row over AI that 'identifies gay faces', BBC: https://www.bbc.co.uk/news/technology-41188560
The Trevor Project: https://www.thetrevorproject.org/
AI takes root, helping farmers identify diseased plants, Google: https://www.blog.google/technology/ai/ai-takes-root-helping-farmers-identity-diseased-plants/
How Can You Use Technology to Support a Culture of Inclusion and Diversity?, myHRfuture: https://www.myhrfuture.com/blog/2019/7/16/how-can-you-use-technology-to-support-a-culture-of-inclusion-and-diversity
Scholarships at DeepMind: https://www.deepmind.com/scholarships
AI, Ain't I a Woman? Joy Buolamwini, YouTube: https://www.youtube.com/watch?v=QxuyfWoVV98
How to be Human in the Age of the Machine, Hannah Fry: https://royalsociety.org/grants-schemes-awards/book-prizes/science-book-prize/2018/hello-world/
AI doesn't just exist in the lab; it's already solving a range of problems in the real world. In this episode, Hannah encounters a realistic recreation of her voice by WaveNet, the voice-synthesising system that powers the Google Assistant and helps people with speech difficulties and illnesses regain their voices. Hannah also discovers how 'deepfake' technology can be used to improve weather forecasting, and how DeepMind researchers are collaborating with Liverpool Football Club, aiming to take sports to the next level.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind's Demis Hassabis, Raia Hadsell, Karl Tuyls, Zach Gleicher & Jackson Broshear; Niall Robinson of the UK Met Office

Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!

Further reading:
A generative model for raw audio, DeepMind: https://deepmind.com/blog/article/wavenet-generative-model-raw-audio
WaveNet case study, DeepMind: https://deepmind.com/research/case-studies/wavenet
Using WaveNet technology to reunite speech-impaired users with their original voices, DeepMind: https://deepmind.com/blog/article/Using-WaveNet-technology-to-reunite-speech-impaired-users-with-their-original-voices
Project Euphonia, Google Research: https://sites.research.google/euphonia/about/
Nowcasting the next hour of rain, DeepMind: https://deepmind.com/blog/article/nowcasting
Now DeepMind is using AI to transform football, WIRED: https://www.wired.co.uk/article/deepmind-football-liverpool-ai
Advancing sports analytics through AI, DeepMind: https://deepmind.com/blog/article/advancing-sports-analytics-through-ai
Met Office: https://www.metoffice.gov.uk/
The village 'washed on to the map', BBC: https://www.bbc.co.uk/news/uk-england-cornwall-28523053
Michael Fish got the storm of 1987 wrong, Sky News: https://news.sky.com/story/michael-fish-got-the-storm-of-1987-wrong-but-modern-supercomputers-may-have-missed-it-too-11076659
Step inside DeepMind's laboratories and you'll find researchers studying DNA to understand the mysteries of life, seeking new ways to use nuclear energy, or putting AI to the test in mind-bending areas of maths. In this episode, Hannah meets Pushmeet Kohli, the head of science at DeepMind, to understand how AI is accelerating scientific progress. Listeners also join Hannah on a [virtual] safari in the Serengeti in East Africa to find out how researchers are using AI to conserve wildlife in one of the world's most spectacular ecosystems.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind's Demis Hassabis, Pushmeet Kohli & Sarah Jane Dunn; Meredith Palmer of Princeton University

Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!

Further reading:
Using AI for scientific discovery, DeepMind: https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery
DeepMind's Demis Hassabis on its breakthrough scientific discoveries, WIRED: https://www.youtube.com/watch?v=2WRow9FqUbw
The AI revolution in scientific research, The Royal Society: https://royalsociety.org/-/media/policy/projects/ai-and-society/AI-revolution-in-science.pdf
DOE Explains...Tokamaks, Office of Science: https://www.energy.gov/science/doe-explainstokamaks
How AI Accidentally Learned Ecology by Playing StarCraft, Discover: https://www.discovermagazine.com/technology/how-ai-accidentally-learned-ecology-by-playing-starcraft
Google AI can identify wildlife from trap-camera footage, VentureBeat: https://venturebeat.com/2019/12/17/googles-ai-can-identify-wildlife-from-trap-camera-footage-with-up-to-98-6-accuracy/
Snapshot Serengeti, Zooniverse: https://www.zooniverse.org/projects/zooniverse/snapshot-serengeti
The Human Genome Project, National Human Genome Research Institute: https://www.genome.gov/human-genome-project
Exploring the beauty of pure mathematics in novel ways, DeepMind: https://deepmind.com/blog/article/exploring-the-beauty-of-pure-mathematics-in-novel-ways
Predicting gene expression with AI, DeepMind: https://deepmind.com/blog/article/enformer
Using machine learning to accelerate ecological research, DeepMind: https://deepmind.com/blog/article/using-machine-learning-to-accelerate-ecological-research
Accelerating fusion science through learned plasma control, DeepMind: https://deepmind.com/blog/article/Accelerating-fusion-science-through-learned-plasma-control
Simulating matter on the quantum scale with AI, DeepMind: https://deepmind.com/blog/article/Simulating-matter-on-the-quantum-scale-with-AI
How AI is helping the natural sciences, Nature: https://www.nature.com/articles/d41586-021-02762-6
Inside DeepMind's epic mission to solve science's trickiest problem, WIRED: https://www.wired.co.uk/article/deepmind-protein-folding
How Artificial Intelligence Is Changing Science, Quanta: https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/
Hannah meets DeepMind co-founder and chief scientist Shane Legg, the man who coined the phrase 'artificial general intelligence', and explores how it might be built. Why does Shane think AGI is possible? When will it be realised? And what could it look like? Hannah also explores a simple theory of using trial and error to reach AGI, and takes a deep dive into MuZero, an AI system which mastered complex board games from chess to Go and is now generalising to solve a range of important tasks in the real world.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind's Shane Legg, Doina Precup, Dave Silver & Jackson Broshear

Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!

Further reading:
Real-world challenges for AGI, DeepMind: https://deepmind.com/blog/article/real-world-challenges-for-agi
An executive primer on artificial general intelligence, McKinsey: https://www.mckinsey.com/business-functions/operations/our-insights/an-executive-primer-on-artificial-general-intelligence
Mastering Go, chess, shogi and Atari without rules, DeepMind: https://deepmind.com/blog/article/muzero-mastering-go-chess-shogi-and-atari-without-rules
What is AGI?, Medium: https://medium.com/intuitionmachine/what-is-agi-99cdb671c88e
A Definition of Machine Intelligence by Shane Legg, arXiv: https://arxiv.org/abs/0712.3329
Reward is enough by David Silver, ScienceDirect: https://www.sciencedirect.com/science/article/pii/S0004370221000862
Do you need a body to have intelligence? And can one exist without the other? Hannah takes listeners behind the scenes of DeepMind's robotics lab in London, where she meets robots that are trying to independently learn new skills, and explores why physical intelligence is a necessary part of intelligence. Along the way, she finds out how researchers trained their robots at home during lockdown, uncovers why so many robotics demonstrations are faking it, and learns what it takes to train a robotic football team.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind's Raia Hadsell, Viorica Patraucean, Jan Humplik, Akhil Raju & Doina Precup

Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!

Further reading:
Stacking our way to more general robots, DeepMind: https://deepmind.com/blog/article/stacking-our-way-to-more-general-robots
Researchers Propose Physical AI As Key To Lifelike Robots, Forbes: https://www.forbes.com/sites/simonchandler/2020/11/11/researchers-propose-physical-ai-as-key-to-lifelike-robots/
The robots going where no human can, BBC: https://www.bbc.co.uk/news/av/technology-41584738
The Robot Assault On Fukushima, WIRED: https://www.wired.com/story/fukushima-robot-cleanup/
Leaps, Bounds, and Backflips, Boston Dynamics: http://blog.bostondynamics.com/atlas-leaps-bounds-and-backflips
Now DeepMind is using AI to transform football, WIRED: https://www.wired.co.uk/article/deepmind-football-liverpool-ai
Cooperation is at the heart of our society. Inventing the railway, giving birth to the Renaissance, and creating the Covid-19 vaccine all required people to combine efforts. But cooperation is so much more: it governs our education systems, healthcare, and food production. In this episode, Hannah meets the researchers working on cooperative AI and hears about their work and influences, from the famous American psychologist (and pigeon trainer) B.F. Skinner to the strategic board game Diplomacy. For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com. Interviewees: DeepMind's Thore Graepel, Kevin McKee, Doina Precup & Laura Weidinger. Credits: Presenter: Hannah Fry. Series Producer: Dan Hardoon. Production support: Jill Achineku. Sound design: Emma Barnaby. Music composition: Eleni Shaw. Sound Engineer: Nigel Appleton. Editor: David Prest. Commissioned by DeepMind. Thank you to everyone who made this season possible! Further reading: Machines must learn to find common ground, Nature: https://www.nature.com/articles/d41586-021-01170-0 | Introduction to Reinforcement Learning, DeepMind: https://www.youtube.com/watch?v=2pWv7GOvuf0 | B.F. Skinner, Wikipedia: https://en.wikipedia.org/wiki/B._F._Skinner | The Tragedy of the Commons, Wikipedia: https://en.wikipedia.org/wiki/Tragedy_of_the_commons | Staving Off The Ultimate Tragedy Of The Commons, Forbes: https://www.forbes.com/sites/georgebradt/2021/11/02/staving-off-the-ultimate-tragedy-of-the-commons-by-making-better-complex-decisions-cooperatively-in-glasgow/ | Understanding Agent Cooperation, DeepMind: https://deepmind.com/blog/article/understanding-agent-cooperation | The emergence of complex cooperative agents, DeepMind: https://deepmind.com/blog/article/capture-the-flag-science
really looking forward to listening to the new series. Season one was brilliant.
What is the name of the piece of piano music played in the background?
Enjoyed this tremendously. Thanks
I have only recently become interested in the topic of AI, and I find this series totally fascinating. When I heard Hannah Fry's reaction to the news that AlphaGo Zero beat AlphaGo 100-0, that said it all: we are far further ahead in this field of technology than I ever imagined. Great work, DeepMind.
Very interesting and illuminating
What a great podcast, thought provoking, interesting and informative, very much looking forward to the next one.
So far so good. Much awaited!
brilliant podcast!
Great news, episodes finally out on Castbox! Would it be possible to have the episodes numbered?
I can’t wait for this great work Hannah 😁🎉🎉
Very much looking forward to this 🙃🙃🙃