Brain Inspired

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress toward understanding intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience; instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

BI 201 Rajesh Rao: From Predictive Coding to Brain Co-Processors

Support the show to get full episodes, full archive, and join the Discord community.

Today I'm in conversation with Rajesh Rao, a distinguished professor of computer science and engineering at the University of Washington, where he also co-directs the Center for Neurotechnology. Back in 1999, Raj and Dana Ballard published what became quite a famous paper proposing how predictive coding might be implemented in brains. What is predictive coding, you may be wondering? It's roughly the idea that your brain is constantly predicting incoming sensory signals, and it generates that prediction as a top-down signal that meets the bottom-up sensory signals. The brain then computes the difference between the prediction and the actual sensory input, and that difference is sent back up to the "top," where the brain updates its internal model to make better future predictions (a toy sketch of this update loop appears at the end of these notes). So that was 25 years ago, and it was focused on how the brain handles sensory information. But Raj just recently published an update to the predictive coding framework, one that incorporates action as well as perception and suggests how it might be implemented in the cortex - specifically, which cortical layers do what - something he calls "active predictive coding." So we discuss that new proposal, and we also talk about his engineering work on brain-computer interface technologies, like BrainNet, which basically connects two brains together, and neural co-processors, which use an artificial neural network as a prosthetic that can do things like enhance memories, optimize learning, and help restore brain function after strokes, for example. Finally, we discuss Raj's interest and work on deciphering the mysterious Indus script, the ancient writing system of the Indus Valley civilization.

Raj's website.

Related papers:
A sensory–motor theory of the neocortex.
Brain co-processors: using AI to restore and augment brain function.
Towards neural co-processors for the brain: combining decoding and encoding in brain–computer interfaces.
BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains.

Read the transcript.

0:00 - Intro
7:40 - Predictive coding origins
16:14 - Early appreciation of recurrence
17:08 - Prediction as a general theory of the brain
18:38 - Rao and Ballard 1999
26:32 - Prediction as a general theory of the brain
33:24 - Perception vs action
33:28 - Active predictive coding
45:04 - Evolving to augment our brains
53:03 - BrainNet
57:12 - Neural co-processors
1:11:19 - Decoding the Indus Script
1:20:18 - Transformer models' relation to active predictive coding
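To make the update loop above concrete, here is a minimal sketch of the predictive coding scheme in NumPy. It is a toy illustration of the general idea, not Rao and Ballard's actual model: the linear generative weights, dimensions, and step sizes are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy predictive coding: a latent estimate r generates a top-down
# prediction of the sensory input x through assumed linear weights U.
n_input, n_latent = 16, 4
U = rng.normal(scale=0.1, size=(n_input, n_latent))  # generative model
r = np.zeros(n_latent)                               # internal estimate
x = rng.normal(size=n_input)                         # "sensory" input
lr_r, lr_U = 0.1, 0.01                               # assumed step sizes

for _ in range(200):
    prediction = U @ r               # top-down prediction
    error = x - prediction           # bottom-up prediction error
    r += lr_r * (U.T @ error)        # fast inference: update the estimate
    U += lr_U * np.outer(error, r)   # slow learning: update the model

print("remaining prediction error:", np.linalg.norm(x - U @ r))
```

Both updates descend the same squared prediction error, which is the sense in which the error signal alone carries everything the "top" needs to improve its model.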

12-18
01:37:22

BI 200 Grace Hwang and Joe Monaco: The Future of NeuroAI

Support the show to get full episodes, full archive, and join the Discord community.

Joe Monaco and Grace Hwang co-organized a recent workshop I participated in, the 2024 BRAIN NeuroAI Workshop. You may have heard of the BRAIN Initiative, but in case not: BRAIN is a huge funding effort across many agencies, one of which is the National Institutes of Health, where this recent workshop was held. The BRAIN Initiative began in 2013 under the Obama administration, with the goal of supporting the development of technologies to help understand the human brain, so we can cure brain-based diseases. The BRAIN Initiative just turned a decade old, with many successes, like recent whole-brain connectomes and the discovery of a vast array of cell types. Now the question is how to move forward, and one area they are curious about, which perhaps has a lot of potential to support their mission, is the recent convergence of neuroscience and AI... or NeuroAI. The workshop was designed to explore how NeuroAI might contribute moving forward, and to hear from NeuroAI folks how they envision the field's future. You'll hear more about that in a moment. That's one reason I invited Grace and Joe on. Another reason is that they co-wrote a position paper a while back that is impressive as a synthesis of lots of cognitive science concepts, but also proposes a specific level of abstraction and scale in brain processes that may serve as a base layer for computation. The paper is called Neurodynamical Computing at the Information Boundaries of Intelligent Systems, and you'll learn more about it in this episode.

Joe's NIH page.
Grace's NIH page.
Twitter: Joe: @j_d_monaco

Related papers:
Neurodynamical Computing at the Information Boundaries of Intelligent Systems.
Cognitive swarming in complex environments with attractor dynamics and oscillatory computing.
Spatial synchronization codes from coupled rate-phase neurons.
Oscillators that sync and swarm.

Mentioned:
A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications.
Recalling Lashley and reconsolidating Hebb.
BRAIN NeuroAI Workshop (Nov 12–13)
NIH BRAIN NeuroAI Workshop Program Book
NIH VideoCast – Day 1 Recording – BRAIN NeuroAI Workshop
NIH VideoCast – Day 2 Recording – BRAIN NeuroAI Workshop
Neuromorphic Principles in Biomedicine and Healthcare Workshop (Oct 21–22) NPBH 2024
BRAIN Investigators Meeting 2020 Symposium & Perspective Paper
BRAIN 2020 Symposium on Dynamical Systems Neuroscience and Machine Learning (YouTube)
Neurodynamical Computing at the Information Boundaries of Intelligent Systems | Cognitive Computation
NSF/CIRC Community Infrastructure for Research in Computer and Information Science and Engineering (CIRC) | NSF - National Science Foundation
THOR Neuromorphic Commons - Matrix: The UTSA AI Consortium for Human Well-Being

Read the transcript.

0:00 - Intro
25:45 - NeuroAI Workshop - neuromorphics
33:31 - Neuromorphics and theory
49:19 - Reflections on the workshop
54:22 - Neurodynamical computing and information boundaries
1:01:04 - Perceptual control theory
1:08:56 - Digital twins and neural foundation models
1:14:02 - Base layer of computation

12-04
01:37:11

BI 199 Hessam Akhlaghpour: Natural Universal Computation

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/ To explore more neuroscience news and perspectives, visit thetransmitter.org.

Hessam Akhlaghpour is a postdoctoral researcher at Rockefeller University in the Maimon lab. His experimental work is in fly neuroscience, mostly studying spatial memories in fruit flies. However, we are going to be talking about a different (although somewhat related) side of his postdoctoral research. This aspect of his work involves theoretical explorations of molecular computation, which are deeply inspired by Randy Gallistel and Adam King's book Memory and the Computational Brain. Randy has been on the podcast before to discuss his ideas that memory needs to be stored in something more stable than the synapses between neurons, and that that something could be genetic material like RNA. When Hessam read this book, he was re-inspired to think of the brain the way he used to think of it before experimental neuroscience challenged his views: as a computational system. But it also led to what we discuss today, the idea that RNA has the capacity for universal computation, and Hessam's development of how that might happen. So we discuss that background and story, why universal computation hasn't yet been discovered in organisms even though evolution has surely stumbled upon it, and how RNA and combinatory logic could implement universal computation in nature (a minimal taste of combinatory logic follows these notes).

Hessam's website.
Maimon Lab.
Twitter: @theHessam.

Related papers:
An RNA-based theory of natural universal computation.
The molecular memory code and synaptic plasticity: a synthesis.
Lifelong persistence of nuclear RNAs in the mouse brain.
Cris Moore's conjecture #5 in this 1998 paper.
(The Gallistel book): Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience.

Related episodes:
BI 126 Randy Gallistel: Where Is the Engram?
BI 172 David Glanzman: Memory All The Way Down

Read the transcript.

0:00 - Intro
4:44 - Hessam's background
11:50 - Randy Gallistel's book
14:43 - Information in the brain
17:51 - Hessam's turn to universal computation
35:30 - AI and universal computation
40:09 - Universal computation to solve intelligence
44:22 - Connecting sub and super molecular
50:10 - Junk DNA
56:42 - Genetic material for coding
1:06:37 - RNA and combinatory logic
1:35:14 - Outlook
1:42:11 - Reflecting on the molecular world
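For a concrete feel of what it means for combinatory logic to support universal computation, here is a minimal sketch of the S and K combinators in Python. These two rewrite rules alone are Turing-complete; the closure encoding below is an illustrative assumption for readability, not anything from Hessam's RNA paper.

```python
# The two rewrite rules of SK combinatory logic, as curried closures:
#   K x y   -> x
#   S f g x -> f(x)(g(x))
K = lambda x: lambda y: x
S = lambda f: lambda g: lambda x: f(x)(g(x))

# Identity is derivable rather than primitive: I = S K K,
# because S K K x = K(x)(K(x)) = x.
I = S(K)(K)
assert I(42) == 42

# Booleans fall out too: TRUE = K, FALSE = K I.
TRUE, FALSE = K, K(I)
# "if b then t else f" is just b(t)(f):
assert TRUE("then")("else") == "then"
assert FALSE("then")("else") == "else"
```

The gist of the episode's argument is roughly that if a molecular system can implement a couple of rewrite rules like these over its own structures, universality comes along for free.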

11-26
01:49:07

BI 198 Tony Zador: Neuroscience Principles to Improve AI

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/ To explore more neuroscience news and perspectives, visit thetransmitter.org.

Tony Zador runs the Zador lab at Cold Spring Harbor Laboratory. You've heard him on Brain Inspired a few times in the past, most recently in a panel discussion I moderated at this past COSYNE conference - a conference Tony co-founded 20 years ago. As you'll hear, Tony's current and past interests and research endeavors are wide-ranging, but today we focus mostly on his thoughts on NeuroAI. We're in a huge AI hype cycle right now, for good reason, and there's a lot of talk in the neuroscience world about whether neuroscience has anything of value to provide AI engineers - and how much value, if any, neuroscience has provided in the past. Tony is team neuroscience. You'll hear him discuss why in this episode, especially when it comes to ways in which development and evolution might inspire better data efficiency, looking to animals in general to understand how they coordinate numerous objective functions to achieve their intelligent behaviors - something Tony calls alignment - and using spikes in AI models to increase energy efficiency.

Zador Lab
Twitter: @TonyZador

Previous episodes:
BI 187: COSYNE 2024 Neuro-AI Panel.
BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys
BI 034 Tony Zador: How DNA and Evolution Can Inform AI

Related papers:
Catalyzing next-generation Artificial Intelligence through NeuroAI.
Encoding innate ability through a genomic bottleneck.

Essays:
NeuroAI: A field born from the symbiosis between neuroscience, AI.
What the brain can teach artificial neural networks.

Read the transcript.

0:00 - Intro
3:28 - "Neuro-AI"
12:48 - Visual cognition history
18:24 - Information theory in neuroscience
20:47 - Necessary steps for progress
24:34 - Neuro-AI models and cognition
35:47 - Animals for inspiring AI
41:48 - What we want AI to do
46:01 - Development and AI
59:03 - Robots
1:25:10 - Catalyzing the next generation of AI

11-11
01:35:04

BI 197 Karen Adolph: How Babies Learn to Move and Think

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released. To explore more neuroscience news and perspectives, visit thetransmitter.org.

Karen Adolph runs the Infant Action Lab at NYU, where she studies how our motor behaviors develop from infancy onward. We discuss how observing babies at different stages of development illuminates how movement and cognition develop in humans, how variability and embodiment are key to that development, and the importance of studying behavior in real-world settings as opposed to restricted laboratory settings. We also explore how these principles and simulations can inspire advances in intelligent robots. Karen has a long-standing interest in ecological psychology, and she shares some stories of her time studying under Eleanor Gibson and other mentors. Finally, we get a surprise visit from her partner Mark Blumberg, with whom she co-authored an opinion piece arguing that "motor cortex" doesn't start off with a motor function, oddly enough, but instead processes sensory information during the first period of animals' lives.

Infant Action Lab (Karen Adolph's lab)
Sleep and Behavioral Development Lab (Mark Blumberg's lab)

Related papers:
Motor Development: Embodied, Embedded, Enculturated, and Enabling
An Ecological Approach to Learning in (Not and) Development
An update of the development of motor behavior
Protracted development of motor cortex constrains rich interpretations of infant cognition

Read the transcript.

10-25
01:29:31

BI 196 Cristina Savin and Tim Vogels with Gaute Einevoll and Mikkel Lepperød

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

This is the second conversation I had while teamed up with Gaute Einevoll at a workshop on NeuroAI in Norway. In this episode, Gaute and I are joined by Cristina Savin and Tim Vogels. Cristina shares how her lab uses recurrent neural networks to study learning, while Tim talks about his long-standing research on synaptic plasticity and how AI tools are now helping to explore the vast space of possible plasticity rules. We touch on how deep learning has changed the landscape, enhancing our research but also creating challenges with the "fashion-driven" nature of science today. We also reflect on how these new tools have changed the way we think about brain function without fundamentally altering the structure of our questions. Be sure to check out Gaute's Theoretical Neuroscience podcast as well!

Mikkel Lepperød
Cristina Savin
Tim Vogels Twitter: @TPVogels
Gaute Einevoll Twitter: @GauteEinevoll
Gaute's Theoretical Neuroscience podcast.
Validating models: How would success in NeuroAI look like?

Read the transcript, provided by The Transmitter.

10-11
01:19:40

BI 195 Ken Harris and Andreas Tolias with Gaute Einevoll and Mikkel Lepperød

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

This is the first of two less usual episodes. I was recently in Norway at a NeuroAI workshop called Validating models: How would success in NeuroAI look like? What follows are a few recordings I made with my friend Gaute Einevoll. Gaute has been on this podcast before, but more importantly he started his own podcast a while back called Theoretical Neuroscience, which you should check out. Gaute and I introduce the episode, then briefly speak with Mikkel Lepperød, one of the organizers of the workshop. In this first episode, we're then joined by Ken Harris and Andreas Tolias to discuss how AI has influenced their research, their thoughts about brains and minds, and progress and productivity.

Validating models: How would success in NeuroAI look like?
Mikkel Lepperød
Andreas Tolias Twitter: @AToliasLab
Ken Harris Twitter: @kennethd_harris
Gaute Einevoll Twitter: @GauteEinevoll
Gaute's Theoretical Neuroscience podcast.

Read the transcript, provided by The Transmitter.

10-08
01:17:05

BI 194 Vijay Namboodiri & Ali Mohebi: Dopamine Keeps Getting More Interesting

Support the show to get full episodes, full archive, and join the Discord community.

https://youtu.be/lbKEOdbeqHo

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. The Transmitter has provided a transcript for this episode.

Vijay Namboodiri runs the Nam Lab at the University of California, San Francisco, and Ali Mohebi is an assistant professor at the University of Wisconsin-Madison. Ali has been on the podcast a few times before, and he's interested in how neuromodulators like dopamine affect our cognition. It was Ali who pointed me to Vijay, because of some recent work Vijay has done reassessing how dopamine might function differently from what has become the classic story of dopamine's function as it pertains to learning. The classic story is that dopamine is related to reward prediction errors. That is, dopamine is modulated when you expect reward and don't get it, and/or when you don't expect reward but do get it (a toy version of this account appears in the sketch after these notes). Vijay calls this a "prospective" account of dopamine function, since it requires an animal to look into the future to expect a reward. Vijay has shown, however, that a retrospective account of dopamine might better explain lots of known behavioral data. This retrospective account links dopamine to how we understand causes and effects in our ongoing behavior. So in this episode, Vijay gives us a history lesson about dopamine, his newer story and why it has caused a bit of controversy, and how all of this came to be. I happened to be looking at The Transmitter the other day, after I recorded this episode, and lo and behold, there was an article titled Reconstructing dopamine's link to reward. Vijay is featured in the article among a handful of other thoughtful researchers who share their work and ideas about this very topic. Vijay wrote his own piece as well: Dopamine and the need for alternative theories. So check out those articles for more views on how the field is reconsidering how dopamine works.

Nam Lab.
Mohebi & Associates (Ali's Lab).
Twitter: @vijay_mkn @mohebial

Transmitter articles:
Dopamine and the need for alternative theories.
Reconstructing dopamine's link to reward.

Related papers:
Mesolimbic dopamine release conveys causal associations.
Mesostriatal dopamine is sensitive to changes in specific cue-reward contingencies.
What is the state space of the world for real animals?
The learning of prospective and retrospective cognitive maps within neural circuits
Further reading (Ali's paper): Dopamine transients follow a striatal gradient of reward time horizons.

Ali listed a bunch of work on local modulation of DA release:
Local control of striatal dopamine release.
Synaptic-like axo-axonal transmission from striatal cholinergic interneurons onto dopaminergic fibers.
Spatial and temporal scales of dopamine transmission.
Striatal dopamine neurotransmission: Regulation of release and uptake.
Striatal Dopamine Release Is Triggered by Synchronized Activity in Cholinergic Interneurons.
An action potential initiation mechanism in distal axons for the control of dopamine release.

Read the transcript, produced by The Transmitter.

0:00 - Intro
3:42 - Dopamine: the history of theories
32:54 - Importance of learning and behavior studies
39:12 - Dopamine and causality
1:06:45 - Controversy over Vijay's findings
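As a reference point for the discussion, here is a minimal sketch of the classic prospective account described above, with a dopamine-like signal modeled as the temporal-difference reward prediction error. The chain of states, learning rate, and discount factor are illustrative assumptions, not anything from Vijay's papers.

```python
import numpy as np

# Classic story: dopamine ~ TD reward prediction error,
#   delta = r + gamma * V(next state) - V(current state)
n_states = 5                # a simple chain: cue -> ... -> reward
V = np.zeros(n_states)      # learned value (expectation) per state
gamma, alpha = 0.9, 0.2     # assumed discount and learning rate

def run_trial(rewarded):
    """Walk the chain once; return the per-step prediction errors."""
    deltas = []
    for s in range(n_states - 1):
        r = 1.0 if (rewarded and s == n_states - 2) else 0.0
        delta = r + gamma * V[s + 1] - V[s]   # dopamine-like signal
        V[s] += alpha * delta
        deltas.append(delta)
    return deltas

for _ in range(100):        # learn: the cue reliably predicts reward
    run_trial(rewarded=True)

# Omit the expected reward: the final-step delta dips negative,
# the "pause" in dopamine firing that the classic story predicts.
print("omission delta:", round(run_trial(rewarded=False)[-1], 2))
```

The retrospective account Vijay argues for is a different beast entirely; this sketch is only the baseline the episode pushes against.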

09-27
01:37:21

BI 193 Kim Stachenfeld: Enhancing Neuroscience and AI

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Check out this story: Monkeys build mental maps to navigate new tasks. Sign up for “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released. To explore more neuroscience news and perspectives, visit thetransmitter.org.

Kim Stachenfeld embodies the original core focus of this podcast, the exploration of the intersection between neuroscience and AI, now commonly known as Neuro-AI. That's because she walks both lines. Kim is a Senior Research Scientist at Google DeepMind, the AI company that sprang from neuroscience principles, and she also does research at the Center for Theoretical Neuroscience at Columbia University. She's been using her expertise in modeling, reinforcement learning, and cognitive maps, for example, to help understand brains and to help improve AI. I've been wanting to have her on for a long time to get her broad perspective on AI and neuroscience. We discuss the relative roles of industry and academia in pursuing various objectives related to understanding and building cognitive entities. She's studied the hippocampus in her research on reinforcement learning and cognitive maps, so we discuss what the heck the hippocampus does, since it seems to be implicated in so many functions, and how she thinks of reinforcement learning these days. Most recently, Kim's work at DeepMind has focused on more practical engineering questions, using deep learning models to predict things like chaotic turbulent flows, and even to help design things like bridges and airplanes. We don't get into the specifics of that work, but, given that I just spoke with Damian Kelty-Stephen, who thinks of brains partially as turbulent cascades, Kim and I discuss how her work on modeling turbulence has shaped her thoughts about brains.

Kim's website.
Twitter: @neuro_kim.

Related papers:
Scaling Laws for Neural Language Models.
Emergent Abilities of Large Language Models.
Learned simulators:
Learned coarse models for efficient turbulence simulation.
Physical design using differentiable learned simulators.

Check out the transcript, provided by The Transmitter.

0:00 - Intro
4:31 - DeepMind's original and current vision
9:53 - AI as tools and models
12:53 - Has AI hindered neuroscience?
17:05 - DeepMind vs academic work balance
20:47 - Is industry better suited to understand brains?
24:42 - Trajectory of DeepMind
27:41 - Kim's trajectory
33:35 - Is the brain a ML entity?
36:12 - Hippocampus
44:12 - Reinforcement learning
51:32 - What does neuroscience need more and less of?
1:02:53 - Neuroscience in a weird place?
1:06:41 - How Kim's questions have changed
1:16:31 - Intelligence and LLMs
1:25:34 - Challenges

09-11
01:32:41

BI 192 Àlex Gómez-Marín: The Edges of Consciousness

Support the show to get full episodes, full archive, and join the Discord community.

Àlex Gómez-Marín heads The Behavior of Organisms Laboratory at the Institute of Neuroscience in Alicante, Spain. He's a theoretical physicist turned neuroscientist, and he has studied a wide range of topics over his career. Most recently, he has become interested in what he calls the "edges of consciousness," which encompasses the many attempts to explain what may be happening when we have experiences outside our normal everyday experience: for example, when we are under the influence of hallucinogens, when we have near-death experiences (as Àlex has), paranormal experiences, and so on. So we discuss what led up to his interest in these edges of consciousness, how he now thinks about consciousness and doing science in general, and how important it is to make room for all possible explanations of phenomena, and to leave our metaphysics open all the while.

Àlex's website: The Behavior of Organisms Laboratory.
Twitter: @behaviOrganisms.

Previous episodes:
BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness.
BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology.

Related:
The Consciousness of Neuroscience.
Seeing the consciousness forest for the trees.
The stairway to transhumanist heaven.

0:00 - Intro
4:13 - Evolving viewpoints
10:05 - Near-death experience
18:30 - Mechanistic neuroscience vs. the rest
22:46 - Are you doing science?
33:46 - Where is my mind?
44:55 - Productive vs. permissive brain
59:30 - Panpsychism
1:07:58 - Materialism
1:10:38 - How to choose what to do
1:16:54 - Fruit flies
1:19:52 - AI and the Singularity

08-28
01:30:34

BI 191 Damian Kelty-Stephen: Fractal Turbulent Cascading Intelligence

Support the show to get full episodes, full archive, and join the Discord community.

Damian Kelty-Stephen is an experimental psychologist at the State University of New York at New Paltz. Last episode, with Luis Favela, we discussed many of the ideas from ecological psychology and how Louie is trying to reconcile those principles with those of neuroscience. In this episode, Damian and I in some ways continue that discussion, because Damian is also interested in unifying principles of ecological psychology and neuroscience. However, he is approaching it from a different perspective than Louie. What originally drew me to Damian was a paper he put together with a bunch of authors offering their own alternatives to the computer metaphor of the brain, which has come to dominate neuroscience. We discuss that some, and I'll link to the paper in the show notes. But mostly we discuss Damian's work studying the fractal structure of our behaviors, connecting that structure across scales, and linking it to how our brains and bodies interact to produce our behaviors (one common way to quantify such fractal scaling is sketched after these notes). Along the way, we talk about his interest in cascade dynamics and turbulence to also explain our intelligence and behaviors. So, I hope you enjoy this alternative slice into thinking about how we think and move in our bodies and in the world.

Damian's website.

Related papers:
In search for an alternative to the computer metaphor of the mind and brain.
Multifractal emergent processes: Multiplicative interactions override nonlinear component properties.

0:00 - Intro
2:34 - Damian's background
9:02 - Brains
12:56 - Do neuroscientists have it all wrong?
16:56 - Fractals everywhere
28:01 - Fractality, causality, and cascades
32:01 - Cascade instability as a metaphor for the brain
40:43 - Damian's worldview
46:09 - What is AI missing?
54:26 - Turbulence
1:01:02 - Intelligence without fractals? Multifractality
1:10:28 - Ergodicity
1:19:16 - Fractality, intelligence, life
1:23:24 - What's exciting, changing viewpoints
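For readers wondering what "fractal structure across scales" looks like operationally, below is a sketch of detrended fluctuation analysis (DFA), one standard way to estimate fractal scaling in a behavioral time series. It is a generic textbook-style illustration, assuming nothing about Damian's specific pipeline; the window sizes and test signal are arbitrary choices.

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: slope of log F(n) vs log n.
    alpha ~ 0.5 for uncorrelated noise; alpha ~ 1.0 for 1/f-type
    (fractal, long-range correlated) fluctuations."""
    y = np.cumsum(x - np.mean(x))     # integrate the signal into a profile
    fluctuations = []
    for n in scales:
        rms = []
        for w in range(len(y) // n):  # non-overlapping windows of size n
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope

rng = np.random.default_rng(0)
print("white noise, alpha ~ 0.5:", round(dfa_alpha(rng.normal(size=4096)), 2))
```

A behavioral series whose alpha sits near 1 across a wide range of window sizes is the kind of evidence the fractal view of behavior leans on; the multifractal work discussed in the episode goes further, asking how that exponent itself varies.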

08-15
01:27:51

BI 190 Luis Favela: The Ecological Brain

Support the show to get full episodes, full archive, and join the Discord community.

Luis Favela is an Associate Professor at Indiana University Bloomington. He is part philosopher, part cognitive scientist, part many things, and on this episode we discuss his new book, The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment. In the book, Louie presents his NeuroEcological Nexus Theory, or NExT, which, as the subtitle says, proposes a way forward to tie together our brains, our bodies, and the environment; namely, it has a lot to do with the complexity sciences and manifolds, which we discuss. But the book doesn't just present his theory. Among other things, it presents a rich historical look into why ecological psychology and neuroscience haven't been exactly friendly over the years, in terms of how to explain our behaviors, the role of brains in those explanations, how to think about what minds are, and so on. And it suggests how the two fields can get over their differences and be friends moving forward. And I'll just say, it's written in a very accessible manner, gently guiding the reader through many of the core concepts and science that have shaped ecological psychology and neuroscience, and for that reason alone I highly recommend it. Ok, so we discuss a bunch of topics in the book, how Louie thinks, and Louie gives us some great background and historical lessons along the way.

Luis' website.
Book: The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment

0:00 - Intro
7:05 - Louie's target with NExT
20:37 - Ecological psychology and grid cells
22:06 - Why irreconcilable?
28:59 - Why hasn't ecological psychology evolved more?
47:13 - NExT
49:10 - Hypothesis 1
55:45 - Hypothesis 2
1:02:55 - Artificial intelligence and ecological psychology
1:16:33 - Manifolds
1:31:20 - Hypothesis 4: Body, low-D, Synergies
1:35:53 - Hypothesis 5: Mind emerges
1:36:23 - Hypothesis 6:

07-31
01:41:03

BI 189 Joshua Vogelstein: Connectomes and Prospective Learning

Support the show to get full episodes, full archive, and join the Discord community.

Joshua Vogelstein - Jovo, as you'll learn - is theoretically oriented and enjoys the formalism of mathematics to approach questions that begin with a sense of wonder. So after I learn more about his overall approach, the first topic we discuss is the world's currently largest map of an entire brain: the connectome of an insect, the fruit fly. We talk about his role in this collaborative effort, what the heck a connectome is, why it's useful and what to do with it, and so on. The second main topic we discuss is his theoretical work on what his team has called prospective learning. Prospective learning differs in a fundamental way from the vast majority of AI these days, which they call retrospective learning. So we discuss what prospective learning is, and how it may improve AI moving forward. At some point a little audio/video sync issue crops up, so we switched to another recording method and fixed it... so just hang tight if you're viewing the podcast... it'll get better soon.

0:00 - Intro
05:25 - Jovo's approach
13:10 - Connectome of a fruit fly
26:39 - What to do with a connectome
37:04 - How important is a connectome?
51:48 - Prospective learning
1:15:20 - Efficiency
1:17:38 - AI doomerism

06-29
01:27:19

BI 188 Jolande Fooken: Coordinating Action and Perception

Support the show to get full episodes, full archive, and join the Discord community.

Jolande Fooken is a postdoctoral researcher interested in how we move our eyes and hands together to accomplish naturalistic tasks. Hand-eye coordination is one of those things that sounds simple, and we do it all the time, to make meals for our children day in, and day out, and day in, and day out. But it starts to seem way less simple as soon as you learn how we make various kinds of eye movements, how we make various kinds of hand movements, and how we use various strategies to do repeated tasks. And like everything in the brain sciences, it's something we don't have a perfect story for yet. So Jolande and I discuss her work, thoughts, and ideas around those and related topics.

Jolande's website.
Twitter: @ookenfooken.

Related papers:
I am a parent. I am a scientist.
Eye movement accuracy determines natural interception strategies.
Perceptual-cognitive integration for goal-directed action in naturalistic environments.

0:00 - Intro
3:27 - Eye movements
8:53 - Hand-eye coordination
9:30 - Hand-eye coordination and naturalistic tasks
26:45 - Levels of expertise
34:02 - Yarbus and eye movements
42:13 - Varieties of experimental paradigms, varieties of viewing the brain
52:46 - Career vision
1:04:07 - Evolving view about the brain
1:10:49 - Coordination, robots, and AI

05-27
01:28:14

BI 187: COSYNE 2024 Neuro-AI Panel

Support the show to get full episodes, full archive, and join the Discord community.

Recently I was invited to moderate a panel at the annual Computational and Systems Neuroscience, or COSYNE, conference. This year was the 20th anniversary of COSYNE, and we were in Lisbon, Portugal. The goal of the panel was to discuss the relationship between neuroscience and AI. The panelists were Tony Zador, Alex Pouget, Blaise Aguera y Arcas, Kim Stachenfeld, Jonathan Pillow, and Eva Dyer, and I'll let them introduce themselves soon. Two of the panelists, Tony and Alex, co-founded COSYNE those 20 years ago, and they continue to have different views about the neuro-AI relationship. Tony has been on the podcast before and will return soon, and I'll also have Kim Stachenfeld on in a couple of episodes. I think this was a fun discussion, and I hope you enjoy it. There's plenty of back and forth, a wide range of opinions, and some criticism from one of the audience questioners. This is an edited audio version, to remove long dead space and such. There's about 30 minutes of just panel, then the panel starts fielding questions from the audience.

COSYNE.

04-20
01:03:35

BI 186 Mazviita Chirimuuta: The Brain Abstracted

Support the show to get full episodes, full archive, and join the Discord community.

Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience. She largely argues that when we try to understand something complex like the brain using models, math, and analogies, for example, we should keep in mind that these are all ways of simplifying and abstracting away details to give us something we actually can understand. And when we do science, every tool we use, every perspective we bring, every way we try to attack a problem is both necessary for doing the science and a limit on the interpretations we can claim from our results. She does all this and more by exploring many topics in neuroscience and philosophy throughout the book, many of which we discuss today.

Mazviita's University of Edinburgh page.
The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience.

Previous Brain Inspired episodes:
BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality
BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind

0:00 - Intro
5:28 - Neuroscience to philosophy
13:39 - Big themes of the book
27:44 - Simplifying by mathematics
32:19 - Simplifying by reduction
42:55 - Simplification by analogy
46:33 - Technology precedes science
55:04 - Theory, technology, and understanding
58:04 - Cross-disciplinary progress
58:45 - Complex vs. simple(r) systems
1:08:07 - Is science bound to study stability?
1:13:20 - 4E for philosophy but not neuroscience?
1:28:50 - ANNs as models
1:38:38 - Study of mind

03-25
01:43:34

BI 185 Eric Yttri: Orchestrating Behavior

Support the show to get full episodes, full archive, and join the Discord community.

As some of you know, I recently got back into the research world, and in particular I work in Eric Yttri's lab at Carnegie Mellon University. Eric's lab studies the relationship between various kinds of behaviors and the neural activity in a few areas known to be involved in enacting and shaping those behaviors, namely the motor cortex and basal ganglia. To study that, he uses tools like optogenetics, neuronal recordings, and stimulation while mice perform certain tasks or, in my case, while they freely behave, wandering around an enclosed space. We talk about how Eric got here; how and why the motor cortex and basal ganglia are still mysteries despite lots of theories and experimental work; Eric's work on trying to solve those mysteries using both trained tasks and more naturalistic behavior; the valid question, "What is a behavior?"; and lots more.

Yttri Lab
Twitter: @YttriLab

Related papers:
Opponent and bidirectional control of movement velocity in the basal ganglia.
B-SOiD, an open-source unsupervised algorithm for identification and fast prediction of behaviors.

0:00 - Intro
2:36 - Eric's background
14:47 - Different animal models
17:59 - ANNs as models for animal brains
24:34 - Main question
25:43 - How circuits produce appropriate behaviors
26:10 - Cerebellum
27:49 - What do motor cortex and basal ganglia do?
49:12 - Neuroethology
1:06:09 - What is a behavior?
1:11:18 - Categorize behavior (B-SOiD)
1:22:01 - Real behavior vs. ANNs
1:33:09 - Best era in neuroscience

03-06
01:44:50

BI 184 Peter Stratton: Synthesize Neural Principles

Support the show to get full episodes, full archive, and join the Discord community.

Peter Stratton is a research scientist at Queensland University of Technology. I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his particular work - for example, he works with spiking neural networks, like my last guest, Dan Goodman. What Pete argues for is what he calls a sideways-in approach. A bottom-up approach is to build things like we find them in the brain, put them together, and voila, we'll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower-level details until you pass your favorite benchmark test. Pete is focused instead on the principles of computation brains employ that current AI doesn't. If you're familiar with David Marr, this is akin to his so-called "algorithmic level," but it sits between that and the "implementation level," I'd say, because Pete is focused on the synthesis of different kinds of brain operations - how they intermingle to perform computations and produce emergent properties. He thinks more like a systems neuroscientist in that respect. Figuring that out is figuring out how to make better AI, Pete says. So we discuss a handful of those principles, all through the lens of how challenging a task it is to synthesize multiple principles into a coherent functioning whole (as opposed to a collection of parts). But, hey, evolution did it, so I'm sure we can, too, right?

Peter's website.

Related papers:
Convolutionary, Evolutionary, and Revolutionary: What’s Next for Brains, Bodies, and AI?
Making a Spiking Net Work: Robust brain-like unsupervised machine learning.
Global segregation of cortical activity and metastable dynamics.
Unlocking neural complexity with a robotic key

0:00 - Intro
3:50 - AI background, neuroscience principles
8:00 - Overall view of modern AI
14:14 - Moravec's paradox and robotics
20:50 - Understanding movement to understand cognition
30:01 - How close are we to understanding brains/minds?
32:17 - Pete's goal
34:43 - Principles from neuroscience to build AI
42:39 - Levels of abstraction and implementation
49:57 - Mental disorders and robustness
55:58 - Function vs. implementation
1:04:04 - Spiking networks
1:07:57 - The roadmap
1:19:10 - AGI
1:23:48 - The terms AGI and AI
1:26:12 - Consciousness

02-20
01:30:47

BI 183 Dan Goodman: Neural Reckoning

Support the show to get full episodes, full archive, and join the Discord community.

You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute. All of the current AI we use to do all the impressive things we do, essentially all of it, is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still notice, in your drunken stupor, is that, among the thousand ways ANNs differ from brains, they don't use action potentials, or spikes (a toy spiking neuron is sketched after these notes). From the perspective of neuroscience, that can seem mighty curious, because, for decades now, neuroscience has focused on spikes as the things that make our cognition tick. We count them and compare them in different conditions, and generally put a lot of stock in their usefulness in brains. So what does it mean that modern neural networks disregard spiking altogether? Maybe spiking really isn't important to process and transmit information as well as our brains do. Or maybe spiking is one among many ways for intelligent systems to function well. Dan shares some of what he's learned and how he thinks about spiking and SNNs and a host of other topics.

Neural Reckoning Group.
Twitter: @neuralreckoning.

Related papers:
Neural heterogeneity promotes robust learning.
Dynamics of specialization in neural modules under resource constraints.
Multimodal units fuse-then-accumulate evidence across channels.
Visualizing a joint future of neuroscience and neuromorphic engineering.

0:00 - Intro
3:47 - Why spiking neural networks, and a mathematical background
13:16 - Efficiency
17:36 - Machine learning for neuroscience
19:38 - Why not jump ship from SNNs?
23:35 - Hard and easy tasks
29:20 - How brains and nets learn
32:50 - Exploratory vs. theory-driven science
37:32 - Static vs. dynamic
39:06 - Heterogeneity
46:01 - Unifying principles vs. a hodgepodge
50:37 - Sparsity
58:05 - Specialization and modularity
1:00:51 - Naturalistic experiments
1:03:41 - Projects for SNN research
1:05:09 - The right level of abstraction
1:07:58 - Obstacles to progress
1:12:30 - Levels of explanation
1:14:51 - What has AI taught neuroscience?
1:22:06 - How has neuroscience helped AI?
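If you have never met a spiking model, here is a minimal sketch of the leaky integrate-and-fire neuron, a standard entry point in SNN work and one of the model types that simulators like Brian exist to scale up. All parameter values are illustrative assumptions.

```python
# Leaky integrate-and-fire neuron, forward-Euler (illustrative parameters).
dt, duration = 0.1, 200.0          # time step and total time (ms)
tau, v_rest = 10.0, -65.0          # membrane time constant (ms), rest (mV)
v_thresh, v_reset = -50.0, -70.0   # spike threshold and reset (mV)
drive = 20.0                       # constant input, as a steady push (mV)

v = v_rest
spikes = []
for step in range(int(duration / dt)):
    # dv/dt = (-(v - v_rest) + drive) / tau : leak toward rest, pushed by input
    v += dt * (-(v - v_rest) + drive) / tau
    if v >= v_thresh:              # threshold crossing: a discrete spike
        spikes.append(step * dt)   # record the spike time (ms)
        v = v_reset                # and reset the membrane

print(f"{len(spikes)} spikes in {duration:.0f} ms")
```

Even this toy shows the crux of the episode: the output is a sparse sequence of discrete spike times rather than a smooth activation, and that discreteness is part of why spiking networks don't slot neatly into gradient-based deep learning.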

02-06
01:28:54

BI 182: John Krakauer Returns… Again

Support the show to get full episodes, full archive, and join the Discord community.

Check out my free video series about what's missing in AI and Neuroscience.

John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately. Things like:
Whether brains actually reorganize after damage
The role of brain plasticity in general
The path toward, and the path not toward, understanding higher cognition
How to fix motor problems after strokes
AGI
Functionalism, consciousness, and much more

Relevant links:
John's Lab.
Twitter: @blamlab

Related papers:
What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond.
Against cortical reorganisation.

Other episodes with John:
BI 025 John Krakauer: Understanding Cognition
BI 077 David and John Krakauer: Part 1
BI 078 David and John Krakauer: Part 2
BI 113 David Barack and John Krakauer: Two Views On Cognition

Time stamps:
0:00 - Intro
2:07 - It's a podcast episode!
6:47 - Stroke and Sherrington neuroscience
19:26 - Thinking vs. moving, representations
34:15 - What's special about humans?
56:35 - Does cortical reorganization happen?
1:14:08 - Current era in neuroscience

01-19
01:25:42
