BI 193 Kim Stachenfeld: Enhancing Neuroscience and AI
Description
Support the show to get full episodes and join the Discord community.
The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.
Read more about our partnership.
Check out this story: Monkeys build mental maps to navigate new tasks
Sign up for “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.
Kim Stachenfeld embodies the original core focus of this podcast: the exploration of the intersection between neuroscience and AI, now commonly known as NeuroAI. That's because she walks both lines. Kim is a Senior Research Scientist at Google DeepMind, the AI company that sprang from neuroscience principles, and she also does research at the Center for Theoretical Neuroscience at Columbia University. She's been using her expertise in modeling, reinforcement learning, and cognitive maps, for example, to help understand brains and to help improve AI. I've been wanting to have her on for a long time to get her broad perspective on AI and neuroscience.
We discuss the relative roles of industry and academia in pursuing various objectives related to understanding and building cognitive entities.
She's studied the hippocampus in her research on reinforcement learning and cognitive maps, so we discuss what the heck the hippocampus does, since it seems to be implicated in so many functions, and how she thinks of reinforcement learning these days.
Most recently, Kim's work at DeepMind has focused on more practical engineering questions, using deep learning models to predict things like chaotic turbulent flows, and even to help design things like bridges and airplanes. We don't get into the specifics of that work, but, given that I just spoke with Damian Kelty-Stephen, who thinks of brains partially as turbulent cascades, Kim and I discuss how her work on modeling turbulence has shaped her thoughts about brains.
- Kim's website.
- Twitter: @neuro_kim.
- Related papers
- Scaling Laws for Neural Language Models.
- Emergent Abilities of Large Language Models.
- Learned simulators:
- Learned coarse models for efficient turbulence simulation.
- Physical design using differentiable learned simulators.
Check out the transcript, provided by The Transmitter.
0:00 - Intro
4:31 - DeepMind's original and current vision
9:53 - AI as tools and models
12:53 - Has AI hindered neuroscience?
17:05 - DeepMind vs. academic work balance
20:47 - Is industry better suited to understand brains?
24:42 - Trajectory of DeepMind
27:41 - Kim's trajectory
33:35 - Is the brain an ML entity?
36:12 - Hippocampus
44:12 - Reinforcement learning
51:32 - What does neuroscience need more and less of?
1:02:53 - Neuroscience in a weird place?
1:06:41 - How Kim's questions have changed
1:16:31 - Intelligence and LLMs
1:25:34 - Challenges