Brain Inspired


Author: Paul Middlebrooks


Description

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
115 Episodes
Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.

Xiao-Jing Wang is a Distinguished Global Professor of Neuroscience at NYU. Xiao-Jing was born and grew up in China, then spent eight years in Belgium studying theoretical physics, including nonlinear dynamical systems and deterministic chaos. As he tells it, he arrived from Brussels to California as a postdoc, and in one day switched from French to English, from European to American culture, and from physics to neuroscience. I know Xiao-Jing as a legend in non-human primate neurophysiology and modeling, paving the way for the rest of us to study brain activity related to cognitive functions like working memory and decision-making. He has just released his new textbook, Theoretical Neuroscience: Understanding Cognition, which covers the history and current research on modeling cognitive functions, from the very simple to the highly cognitive. The book is also somewhat philosophical, arguing that we need to update our approach to explaining how brains function, to go beyond Marr's levels and enter a cross-level mechanistic explanatory pursuit, which we discuss. I just learned he even cites my own PhD research, studying metacognition in nonhuman primates - so you know it's a great book. Learn more about Xiao-Jing and the book in the show notes. It was fun having one of my heroes on the podcast, and I hope you enjoy our discussion.
Computational Laboratory of Cortical Dynamics
Book: Theoretical Neuroscience: Understanding Cognition.

Related papers:
Division of labor among distinct subtypes of inhibitory neurons in a cortical microcircuit of working memory.
Macroscopic gradients of synaptic excitation and inhibition across the neocortex.
Theory of the multiregional neocortex: large-scale neural dynamics and distributed cognition.

0:00 - Intro
3:08 - Why the book now?
11:00 - Modularity in neuro vs AI
14:01 - Working memory and modularity
22:37 - Canonical cortical microcircuits
25:53 - Gradient of inhibitory neurons
27:47 - Comp neuro then and now
45:35 - Cross-level mechanistic understanding
1:13:38 - Bifurcation
1:24:51 - Bifurcation and degeneracy
1:34:02 - Control theory
1:35:41 - Psychiatric disorders
1:39:14 - Beyond dynamical systems
1:43:47 - Mouse as a model
1:48:11 - AI needs a PFC
Check out this story: What, if anything, makes mood fundamentally different from memory?

Nicole Rust runs the Visual Memory laboratory at the University of Pennsylvania. Her interests have expanded now to include mood and feelings, as you'll hear. And she wrote the book Elusive Cures: Why Neuroscience Hasn’t Solved Brain Disorders―and How We Can Change That, which contains a plethora of ideas about how we can pave a way forward in neuroscience to help treat mental and brain disorders. We talk about a small plethora of those ideas from her book, which also tells part of the story, as you'll hear, of her own journey in thinking about these things, from working early on in visual neuroscience to where she is now.

Nicole's website.
Book: Elusive Cures: Why Neuroscience Hasn’t Solved Brain Disorders―and How We Can Change That.

0:00 - Intro
6:12 - Nicole's path
19:25 - The grand plan
25:18 - Robustness and fragility
39:15 - Mood
49:25 - Model everything!
56:26 - Epistemic iteration
1:06:50 - Can we standardize mood?
1:10:36 - Perspective neuroscience
1:20:12 - William Wimsatt
1:25:40 - Consciousness
Check out this series of essays about representations: What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond

What do neuroscientists mean when they use the term representation? That's part of what Luis Favela and Edouard Machery set out to answer a couple years ago by surveying lots of folks in the cognitive sciences, and they concluded that, as a field, the term is used in a confused and unclear way. Confused and unclear are technical terms here, and Luis and Edouard explain what they mean in the episode. More recently, Luis and Edouard wrote a follow-up piece arguing that maybe it's okay for everyone to use the term in slightly different ways - perhaps it even helps communication across disciplines. My three other guests today, Frances Egan, Rosa Cao, and John Krakauer, wrote responses to that argument, and on today's episode all those folks are here to further discuss that issue and why it matters. Luis is part philosopher, part cognitive scientist at Indiana University Bloomington; Edouard is a philosopher and Director of the Center for Philosophy of Science at the University of Pittsburgh; Frances is a philosopher from Rutgers University; Rosa is a neuroscientist-turned-philosopher at Stanford University; and John is a neuroscientist, among other things, and co-runs the Brain, Learning, Animation, and Movement Lab at Johns Hopkins.
Luis Favela. Favela's book: The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment
Edouard Machery. Machery's book: Doing without Concepts
Frances Egan. Egan's book: Deflating Mental Representation.
John Krakauer.
Rosa Cao. Paper mentioned: Putting representations to use.

The exchange, in order, discussed on this episode:
Investigating the concept of representation in the neural and psychological sciences.
The concept of representation in the brain sciences: The current status and ways forward.
Commentaries:
Assessing the landscape of representational concepts: Commentary on Favela and Machery.
Comments on Favela and Machery's The concept of representation in the brain sciences: The current status and ways forward.
Where did real representations go? Commentary on: The concept of representation in the brain sciences: The current status and ways forward by Favela and Machery.
Reply to commentaries:
Contextualizing, eliminating, or glossing: What to do with unclear scientific concepts like representation.

0:00 - Intro
3:55 - What is a representation to a neuroscientist?
14:44 - How to deal with the dilemma
21:20 - Opposing views
31:00 - What's at stake?
51:10 - Neural-only representation
1:01:11 - When "representation" is playing a useful role
1:12:56 - The role of a neuroscientist
1:39:35 - The purpose of "representational talk"
1:53:03 - Non-representational mental phenomena
1:55:53 - Final thoughts
You may have heard of the critical brain hypothesis. It goes something like this: brain activity operates near a dynamical regime called criticality, poised at the sweet spot between too much order and too much chaos, and this is a good thing because systems at criticality are optimized for computing: they maximize information transfer, they maximize the time range over which they operate, and they have a handful of other good properties. John Beggs has been studying criticality in brains for over 20 years now. His 2003 paper with Dietmar Plenz is one of the first, if not the first, to show networks of neurons operating near criticality, and it gets cited in almost every criticality paper I read. John runs the Beggs Lab at Indiana University Bloomington, and a few years ago he literally wrote the book on criticality, called The Cortex and the Critical Point: Understanding the Power of Emergence, which I highly recommend as an excellent introduction to the topic, and he continues to work on criticality these days. On this episode we discuss what criticality is, why and how brains might strive for it, the past and present of how to measure it and why there isn't a consensus on how to measure it, and what it means that criticality appears in so many natural systems outside of brains, yet we want to say it's a special property of brains.
These days John spends plenty of effort defending the criticality hypothesis from critics, so we discuss that, and much more.

Beggs Lab.
Book: The Cortex and the Critical Point: Understanding the Power of Emergence

Related papers:
Addressing skepticism of the critical brain hypothesis

Papers John mentioned:
Tetzlaff et al 2010: Self-organized criticality in developing neuronal networks.
Haldeman and Beggs 2005: Critical Branching Captures Activity in Living Neural Networks and Maximizes the Number of Metastable States.
Bertschinger et al 2004: At the edge of chaos: Real-time computations and self-organized criticality in recurrent neural networks.
Legenstein and Maass 2007: Edge of chaos and prediction of computational performance for neural circuit models.
Kinouchi and Copelli 2006: Optimal dynamical range of excitable networks at criticality.
Chialvo 2010: Emergent complex neural dynamics.
Mora and Bialek 2011: Are Biological Systems Poised at Criticality?

Read the transcript.

0:00 - Intro
4:28 - What is criticality?
10:19 - Why is criticality special in brains?
15:34 - Measuring criticality
24:28 - Dynamic range and criticality
28:28 - Criticisms of criticality
31:43 - Current state of critical brain hypothesis
33:34 - Causality and criticality
36:39 - Criticality as a homeostatic set point
38:49 - Is criticality necessary for life?
50:15 - Shooting for criticality far from thermodynamic equilibrium
52:45 - Quasi- and near-criticality
55:03 - Cortex vs. whole brain
58:50 - Structural criticality through development
1:01:09 - Criticality in AI
1:03:56 - Most pressing criticisms of criticality
1:10:08 - Gradients of criticality
1:22:30 - Homeostasis vs. criticality
1:29:57 - Minds and criticality
Rony Hirschhorn, Alex Lepauvre, and Oscar Ferrante are three of the many, many scientists who comprise the COGITATE group. COGITATE is an adversarial collaboration project to test theories of consciousness in humans, in this case the integrated information theory of consciousness and the global neuronal workspace theory of consciousness. I said it's an adversarial collaboration, so what does that mean? It's adversarial in that two theories of consciousness are being pitted against each other. It's a collaboration in that the proponents of the two theories had to agree on what experiments could be performed that could possibly falsify the claims of either theory. The group has just published the results of the first round of experiments in a paper titled Adversarial testing of global neuronal workspace and integrated information theories of consciousness, and this is what Rony, Alex, and Oscar discuss with me today. The short summary is that they used a simple task and measured brain activity with three different methods: EEG, MEG, and fMRI, and made predictions about where in the brain correlates of consciousness should be, how that activity should be maintained over time, and what kind of functional connectivity patterns should be present between brain regions.
The take-home is a mixed bag, with neither theory being fully falsified, but with a ton of data and results for the world to ponder and build on, to hopefully continue to refine and develop theoretical accounts of how brains and consciousness are related. So we discuss the project itself, many of the challenges they faced, their experiences and reflections on working on it and coming together as a team, and the nature of working on an adversarial collaboration when so much is at stake for the proponents of each theory, and, as you heard last episode with Dean Buonomano, when one of the theories, IIT, is surrounded by a bit of controversy itself regarding whether it should even be considered a scientific theory.

COGITATE.
Oscar Ferrante. @ferrante_oscar
Rony Hirschhorn. @RonyHirsch
Alex Lepauvre. @LepauvreAlex
Paper: Adversarial testing of global neuronal workspace and integrated information theories of consciousness.
BI 210 Dean Buonomano: Consciousness, Time, and Organotypic Dynamics
Read the transcript.

0:00 - Intro
4:00 - COGITATE
17:42 - How the experiments were developed
32:37 - How data was collected and analyzed
41:24 - Prediction 1: Where is consciousness?
47:51 - The experimental task
1:00:14 - Prediction 2: Duration of consciousness-related activity
1:18:37 - Prediction 3: Inter-areal communication
1:28:28 - Big picture of the results
1:44:25 - Moving forward
Dean Buonomano runs the Buonomano lab at UCLA. Dean was a guest on Brain Inspired way back on episode 18, where we talked about his book Your Brain is a Time Machine: The Neuroscience and Physics of Time, which details much of his thought and research about how centrally important time is for virtually everything we do, different conceptions of time in philosophy, and how brains might tell time. That was almost 7 years ago, and his work on time and dynamics in computational neuroscience continues. One thing we discuss today, later in the episode, is his recent work using organotypic brain slices to test the idea that cortical circuits implement timing as a computational primitive: it's something they do by their very nature. Organotypic brain slices are between what I think of as traditional brain slices and full-on organoids. Brain slices are extracted from an organism and maintained in a brain-like fluid while you perform experiments on them. Organoids start with a small amount of cells that you culture, letting them divide and grow and specialize until you have a mass of cells that has grown into an organ of some sort, to then perform experiments on. Organotypic brain slices are extracted from an organism, like brain slices, but then also cultured for some time to let them settle back into some sort of near-homeostatic point - to get them as close as you can to what they're like in the intact brain... then perform experiments on them. Dean and his colleagues use optogenetics to train their brain slices to predict the timing of stimuli, and they find the populations of neurons do indeed learn to predict the timing of the stimuli, and that they exhibit replay of those sequences similar to the replay seen in brain areas like the hippocampus.
But we begin our conversation talking about Dean's recent piece in The Transmitter, which I'll point to in the show notes, called The brain holds no exclusive rights on how to create intelligence. There he argues that modern AI is likely to continue its recent successes despite the ongoing divergence between AI and neuroscience. This is in contrast to what folks in NeuroAI believe. We then talk about his recent chapter with physicist Carlo Rovelli, titled Bridging the neuroscience and physics of time, in which Dean and Carlo examine where neuroscience and physics disagree and where they agree about the nature of time. Finally, we discuss Dean's thoughts on the integrated information theory of consciousness, or IIT. IIT has seen a little controversy lately. Over 100 scientists, a large part of that group calling themselves IIT-Concerned, have expressed concern that IIT is actually unscientific. This has caused backlash and anti-backlash, and all sorts of fun expression from many interested people. Dean explains his own views about why he thinks IIT is not in the purview of science - namely, that it doesn't play well with the existing ontology of physics. What I just said doesn't do justice to his arguments, which he articulates much better.

Buonomano lab.
Twitter: @DeanBuono.

Related papers:
The brain holds no exclusive rights on how to create intelligence.
What makes a theory of consciousness unscientific?
Ex vivo cortical circuits learn to predict and spontaneously replay temporal patterns.
Bridging the neuroscience and physics of time.

BI 204 David Robbe: Your Brain Doesn’t Measure Time
Read the transcript.

0:00 - Intro
8:49 - AI doesn't need biology
17:52 - Time in physics and in neuroscience
34:04 - Integrated information theory
1:01:34 - Global neuronal workspace theory
1:07:46 - Organotypic slices and predictive processing
1:26:07 - Do brains actually measure time?
Aran Nayebi is an Assistant Professor at Carnegie Mellon University in the Machine Learning Department. He was there in the early days of using convolutional neural networks to explain how our brains perform object recognition, and since then he's had a whirlwind trajectory through different AI architectures and algorithms and how they relate to biological architectures and algorithms, so we touch on some of what he has studied in that regard. But he also recently started his own lab at CMU, and he has plans to integrate much of what he has learned to eventually develop autonomous agents that perform the tasks we want them to perform, in ways at least similar to how our brains perform them. So we discuss his ongoing plans to reverse-engineer our intelligence to build useful cognitive architectures of that sort. We also discuss Aran's suggestion that, at least in the NeuroAI world, the Turing test needs to be updated to include some measure of similarity of the internal representations used to achieve the various tasks the models perform. By internal representations, as we discuss, he means the population-level activity in the neural networks, not the mental representations philosophy of mind often refers to, or other philosophical notions of the term representation.

Aran's website.
Twitter: @ayan_nayebi.

Related papers:
Brain-model evaluations need the NeuroAI Turing Test.
Barriers and pathways to human-AI alignment: a game-theoretic approach.

0:00 - Intro
5:24 - Background
20:46 - Building embodied agents
33:00 - Adaptability
49:25 - Marr's levels
54:12 - Sensorimotor loop and intrinsic goals
1:00:05 - NeuroAI Turing Test
1:18:18 - Representations
1:28:18 - How to know what to measure
1:32:56 - AI safety
Gabriele Scheler co-founded the Carl Correns Foundation for Mathematical Biology. Carl Correns, one of the early pioneers of genetics, was her great-grandfather. Gabriele is a computational neuroscientist whose goal is to build models of cellular computation, and much of her focus is on neurons. We discuss her theoretical work building a new kind of single neuron model. She, like Dmitri Chklovskii a few episodes ago, believes we've been stuck with essentially the same family of neuron models for a long time, despite minor variations on those models. The model Gabriele is working on, for example, respects not only the computations going on externally, via spiking, which has been the only game in town forever, but also the computations going on within the cell itself. Gabriele is in line with previous guests like Randy Gallistel, David Glanzman, and Hessam Akhlaghpour, who argue that we need to pay attention to how neurons compute various things internally and how that affects our cognition. Gabriele also believes the new neuron model she's developing will improve AI, essentially by drastically simplifying the models by providing them with smarter neurons. We also discuss the importance of neuromodulation, her interest in understanding how we think via our internal verbal monologue, her lifelong interest in language in general, what she thinks about LLMs, why she decided to start her own foundation to fund her science, and what that experience has been like so far. Gabriele has been working on these topics for many years, and as you'll hear in a moment, she was there when computational neuroscience was just starting to pop up in a few places, when it was a nascent field, unlike its current ubiquity in neuroscience.

Gabriele's website.
Carl Correns Foundation for Mathematical Biology.
Neuro-AI spinoff

Related papers:
Sketch of a novel approach to a neural model.
Localist neural plasticity identified by mutual information.

Related episodes:
BI 199 Hessam Akhlaghpour: Natural Universal Computation
BI 172 David Glanzman: Memory All The Way Down
BI 126 Randy Gallistel: Where Is the Engram?

0:00 - Intro
4:41 - Gabriele's early interests in verbal thinking
14:14 - What is thinking?
24:04 - Starting one's own foundation
58:18 - Building a new single neuron model
1:19:25 - The right level of abstraction
1:25:00 - How a new neuron would change AI
The concept of a schema goes back at least to the philosopher Immanuel Kant in the 1700s, who used the term to refer to a kind of built-in mental framework for organizing sensory experience. But it was the psychologist Frederic Bartlett in the 1930s who used the term schema in a psychological sense, to explain how our memories are organized and how new information gets integrated into our memory. Fast forward to today, and we have a podcast episode with my guest, Alison Preston, who runs the Preston Lab at the University of Texas at Austin. On this episode, we discuss her neuroscience research explaining how our brains might carry out the processing that fits with our modern conception of schemas, and how our brains do that in different ways as we develop from childhood to adulthood. I just said "our modern conception of schemas," but like everything else, there isn't complete consensus among scientists on exactly how to define schema. Ali has her own definition. She shares that, and how it differs from other conceptions commonly used. I like Ali's version and think it should be adopted, in part because it helps distinguish schemas from a related term, cognitive maps, which we've discussed aplenty on Brain Inspired, and which can sometimes be used interchangeably with schemas.
So we discuss how to think about schemas versus cognitive maps, versus concepts, versus semantic information, and so on. Last episode, Ciara Greene discussed schemas and how they underlie our memories, learning, and predictions, and how they can lead to inaccurate memories and predictions. Today Ali explains how circuits in the brain might adaptively underlie this process as we develop, and how to go about measuring it in the first place.

Preston Lab
Twitter: @preston_lab

Related papers:
Concept formation as a computational cognitive process.
Schema, Inference, and Memory.
Developmental differences in memory reactivation relate to encoding and inference in the human brain.

Read the transcript.

0:00 - Intro
6:51 - Schemas
20:37 - Schemas and the developing brain
35:03 - Information theory, dimensionality, and detail
41:17 - Geometry of schemas
47:26 - Schemas and creativity
50:29 - Brain connection pruning with development
1:02:46 - Information in brains
1:09:20 - Schemas and development in AI
Here's the link to learn more and sign up: Complexity Group Email List.
Ciara Greene is Associate Professor in the University College Dublin School of Psychology. In this episode we discuss Ciara's book Memory Lane: The Perfectly Imperfect Ways We Remember, co-authored with her colleague Gillian Murphy. The book is all about how human episodic memory works and why it works the way it does. Contrary to our common assumption, a "good memory" isn't necessarily a highly accurate one - we don't store memories like files in a filing cabinet. Instead, our memories evolved to help us function in the world. That means our memories are flexible and constantly changing, and that forgetting can be beneficial, for example. Regarding how our memories work, we discuss how memories are reconstructed each time we access them, and the role of schemas in organizing our episodic memories within the context of our previous experiences. Because our memories evolved for function and not accuracy, there's a wide range of flexibility in how we process and store memories. We're all susceptible to misinformation, all our memories are affected by our emotional states, and so on. Ciara's research explores many of the ways our memories are shaped by these various conditions, and how we should better understand our own and others' memories.

Attention and Memory Lab
Twitter: @ciaragreene01.
Book: Memory Lane: The Perfectly Imperfect Ways We Remember
Read the transcript.

0:00 - Intro
5:35 - The function of memory
6:41 - Reconstructive nature of memory
13:50 - Memory schemas, highly superior autobiographical memory
20:49 - Misremembering and flashbulb memories
27:52 - Forgetting and schemas
36:06 - What is a "good" memory?
39:35 - Memories and intention
43:47 - Memory and context
49:55 - Implanting false memories
1:04:10 - Memory suggestion during interrogations
1:06:30 - Memory, imagination, and creativity
1:13:45 - Artificial intelligence and memory
1:21:21 - Driven by questions
Since the 1940s and 50s, back at the origins of what we now think of as artificial intelligence, there have been lots of ways of conceiving what it is that brains do, or what the function of the brain is. One of those conceptions, going back to cybernetics, is that the brain is a controller that operates under the principles of feedback control. This view has been carried down in various forms to us in the present day. Also since that same time period, when McCulloch and Pitts suggested that single neurons are logical devices, there have been lots of ways of conceiving what it is that single neurons do. Are they logical operators? Do they each represent something special? Are they trying to maximize efficiency, for example? Dmitri Chklovskii, who goes by Mitya, runs the Neural Circuits and Algorithms lab at the Flatiron Institute. Mitya believes that single neurons are themselves each individual controllers. They're smart agents, each trying to predict its inputs, as in predictive processing, but also functioning as an optimal feedback controller.
We talk about historical conceptions of the function of single neurons and how this view differs, how to think of single neurons versus populations of neurons, some of the neuroscience findings that seem to support Mitya's account, the control algorithm that simplifies the neuron's otherwise impossible control task, and various other topics. We also discuss Mitya's early interest, coming from a physics and engineering background, in how to wire up our brains efficiently, given the limited amount of space in our craniums. Obviously evolution produced its own solutions to this problem. This pursuit led Mitya to study the C. elegans worm, because its connectome was nearly complete; actually, Mitya and his team helped complete the connectome so he'd have the whole wiring diagram to study. So we talk about that work, and what knowing the whole connectome of C. elegans has and has not taught us about how brains work.

Chklovskii Lab.
Twitter: @chklovskii.

Related papers:
The Neuron as a Direct Data-Driven Controller.
Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction.

Related episodes:
BI 143 Rodolphe Sepulchre: Mixed Feedback Control
BI 119 Henry Yin: The Crisis in Neuroscience

Read the transcript.

0:00 - Intro
7:34 - Physicists' approach to neuroscience
12:39 - What's missing in AI and neuroscience?
16:36 - Connectomes
31:51 - Understanding complex systems
33:17 - Earliest models of neurons
39:08 - Smart neurons
42:56 - Neuron theories that influenced Mitya
46:50 - Neuron as a controller
55:03 - How to test the neuron as controller hypothesis
1:00:29 - Direct data-driven control
1:11:09 - Experimental evidence
1:22:25 - Single neuron doctrine and population doctrine
1:25:30 - Neurons as agents
1:28:52 - Implications for AI
1:30:02 - Limits to control perspective
When you play hide and seek, as you do on a regular basis I'm sure, and you count to ten before shouting, "Ready or not, here I come," how do you keep track of time? Is it a clock in your brain, as many neuroscientists assume and therefore search for in their research? Or is it something else? Maybe the rhythm of your vocalization as you say, "one-one thousand, two-one thousand"? Even if you’re counting silently, could it be that you’re imagining the movements of speaking aloud and tracking those virtual actions? My guest today, neuroscientist David Robbe, believes we don't rely on clocks in our brains, don't measure time internally, and perhaps don't really measure time at all. Rather, our estimation of time emerges through our interactions with the world around us and/or the world within us as we behave. David is group leader of the Cortical-Basal Ganglia Circuits and Behavior Lab at the Institute of Mediterranean Neurobiology. His perspective on how organisms measure time is the result of his own behavioral experiments with rodents, and of revisiting one of his favorite philosophers, Henri Bergson.
So in this episode, we discuss how all of this came about - how neuroscientists have long searched for brain activity that measures or keeps track of time in areas like the basal ganglia, the brain region David focuses on, how the rodents he studies behave in surprising ways when he asks them to estimate time intervals, and how Bergson introduced the world to the notion of durée, our lived experience and feeling of time. Cortical-Basal Ganglia Circuits and Behavior Lab. Twitter: @dav_robbe Related papers Lost in time: Relocating the perception of duration outside the brain. Running, Fast and Slow: The Dorsal Striatum Sets the Cost of Movement During Foraging. 0:00 - Intro 3:59 - Why behavior is so important in itself 10:27 - Henri Bergson 21:17 - Bergson's view of life 26:25 - A task to test how animals time things 34:08 - Back to Bergson and durée 39:44 - Externalizing time 44:11 - Internal representation of time 1:03:38 - Cognition as internal movement 1:09:14 - Free will 1:15:27 - Implications for AI
David Krakauer is the president of the Santa Fe Institute, whose mission is officially "Searching for Order in the Complexity of Evolving Worlds." When I think of the Santa Fe Institute, I think of complexity science, because that is the common thread across the many subjects people study at SFI, like societies, economies, brains, machines, and evolution. David has been on before, and I invited him back to discuss some of the topics in his new book The Complex World: An Introduction to the Fundamentals of Complexity Science. On the one hand, the book serves as an introduction and guide to a 4-volume collection of foundational papers in complexity science, which you'll hear David discuss in a moment. On the other hand, The Complex World became much more, discussing and connecting ideas across the history of complexity science. Where did complexity science come from? How does it fit among other scientific paradigms? How did the breakthroughs come about? Along the way, we discuss the four pillars of complexity science - entropy, evolution, dynamics, and computation - and how complexity scientists draw from these four areas to study what David calls "problem-solving matter." We discuss emergence, the role of time scales, and plenty more, all with my own self-serving goal to learn and practice how to think like a complexity scientist to improve my own work on how brains do things. Hopefully our conversation, and David's book, help you do the same. David's website.
David's SFI homepage. The book: The Complex World: An Introduction to the Fundamentals of Complexity Science. The 4-Volume Series: Foundational Papers in Complexity Science. Mentioned: Aeon article: Problem-solving matter. The information theory of individuality. Read the transcript. 0:00 - Intro 3:45 - Origins of The Complex World 20:10 - 4 pillars of complexity 36:27 - 40s to 70s in complexity 42:33 - How to proceed as a complexity scientist 54:32 - Broken symmetries 1:02:40 - Emergence 1:13:25 - Time scales and complexity 1:18:48 - Consensus and how ideas migrate 1:29:25 - Disciplinary matrix (Kuhn) 1:32:45 - Intelligence vs. life
Eli Sennesh is a postdoc at Vanderbilt University, one of my old stomping grounds, currently in the lab of Andre Bastos. Andre’s lab focuses on understanding brain dynamics within cortical circuits, particularly how communication between brain areas is coordinated in perception, cognition, and behavior. So Eli is busy doing work along those lines, as you'll hear more about. But the original impetus for having him on was his recently published proposal for how predictive coding might be implemented in brains. So in that sense, this episode builds on the last episode with Rajesh Rao, where we discussed Raj's "active predictive coding" account of predictive coding. As a super brief refresher, predictive coding is the proposal that the brain is constantly predicting what's about to happen; then stuff happens, and the brain uses the mismatch between its predictions and the actual stuff that's happening to learn how to make better predictions moving forward. I refer you to the previous episode for more details. So Eli's account, along with his co-authors of course, which he calls "divide-and-conquer" predictive coding, uses a probabilistic approach in an attempt to account for how brains might implement predictive coding, and you'll learn more about that in our discussion. But we also talk quite a bit about the difference between practicing theoretical and experimental neuroscience, and Eli's experience moving from the theoretical side to the experimental side.
Eli's website. Bastos lab. Twitter: @EliSennesh Related papers Divide-and-Conquer Predictive Coding: a Structured Bayesian Inference Algorithm. Related episode: BI 201 Rajesh Rao: Active Predictive Coding. Read the transcript. 0:00 - Intro 3:59 - Eli's worldview 17:56 - NeuroAI is hard 24:38 - Prediction errors vs surprise 55:16 - Divide and conquer 1:13:24 - Challenges 1:18:44 - How to build AI 1:25:56 - Affect 1:31:55 - Abolish the value function
Today I'm in conversation with Rajesh Rao, a distinguished professor of computer science and engineering at the University of Washington, where he also co-directs the Center for Neurotechnology. Back in 1999, Raj and Dana Ballard published what became quite a famous paper, which proposed how predictive coding might be implemented in brains. What is predictive coding, you may be wondering? It's roughly the idea that your brain is constantly predicting incoming sensory signals, and it generates that prediction as a top-down signal that meets the bottom-up sensory signals. Then the brain computes the difference between the prediction and the actual sensory input, and that difference is sent back up to the "top," where the brain then updates its internal model to make better future predictions. So that was 25 years ago, and it was focused on how the brain handles sensory information. But Raj just recently published an update to the predictive coding framework, one that incorporates actions and perception and suggests how it might be implemented in the cortex - specifically, which cortical layers do what - something he calls "active predictive coding." So we discuss that new proposal, and we also talk about his engineering work on brain-computer interface technologies, like BrainNet, which basically connects two brains together, and neural co-processors, which use an artificial neural network as a prosthetic that can do things like enhance memories, optimize learning, and help restore brain function after strokes, for example. Finally, we discuss Raj's interest and work in deciphering an ancient Indian text, the mysterious Indus script. Raj's website. Twitter: @RajeshPNRao. Related papers A sensory–motor theory of the neocortex. Brain co-processors: using AI to restore and augment brain function.
Towards neural co-processors for the brain: combining decoding and encoding in brain–computer interfaces. BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains. Read the transcript. 0:00 - Intro 7:40 - Predictive coding origins 16:14 - Early appreciation of recurrence 17:08 - Prediction as a general theory of the brain 18:38 - Rao and Ballard 1999 26:32 - Prediction as a general theory of the brain 33:24 - Perception vs action 33:28 - Active predictive coding 45:04 - Evolving to augment our brains 53:03 - BrainNet 57:12 - Neural co-processors 1:11:19 - Decoding the Indus Script 1:20:18 - Transformer models relation to active predictive coding
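The predictive coding loop described above - predict the input, compare with what actually arrives, send the error back up, update the model - can be caricatured in a few lines. This sketch assumes a single scalar estimate, an identity generative mapping, and a fixed learning rate; it is a toy for intuition, not Rao and Ballard's hierarchical model.

```python
import numpy as np

# Minimal sketch of a predictive-coding update loop:
# the model issues a top-down prediction of its sensory input,
# computes the bottom-up prediction error, and uses that error
# to nudge its internal estimate. Scalar, single-level toy only.

rng = np.random.default_rng(0)
true_signal = 0.8       # the hidden cause of the sensory input
estimate = 0.0          # the model's internal estimate
lr = 0.1                # learning rate applied to prediction errors

for _ in range(200):
    sensory_input = true_signal + rng.normal(0, 0.05)  # noisy input
    prediction = estimate                   # top-down prediction
    error = sensory_input - prediction      # prediction error
    estimate += lr * error                  # update internal model

print(round(estimate, 2))  # settles near the hidden cause, 0.8
```

Even in this caricature, only errors drive learning: once predictions are accurate, the updates shrink toward zero, which is the efficiency argument often made for predictive coding in cortex.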
Joe Monaco and Grace Hwang co-organized a recent workshop I participated in, the 2024 BRAIN NeuroAI Workshop. You may have heard of the BRAIN Initiative, but in case not, BRAIN is a huge funding effort across many agencies, one of which is the National Institutes of Health, where this recent workshop was held. The BRAIN Initiative began in 2013 under the Obama administration, with the goal of supporting the development of technologies to help understand the human brain, so we can cure brain-based diseases. The BRAIN Initiative just turned a decade old, with many successes like recent whole-brain connectomes and the discovery of a vast array of cell types. Now the question is how to move forward, and one area they are curious about, which perhaps has a lot of potential to support their mission, is the recent convergence of neuroscience and AI... or NeuroAI. The workshop was designed to explore how NeuroAI might contribute moving forward, and to hear from NeuroAI folks how they envision the field developing. You'll hear more about that in a moment. That's one reason I invited Grace and Joe on. Another reason is that they co-wrote a position paper a while back that is impressive as a synthesis of lots of cognitive science concepts, but also proposes a specific level of abstraction and scale in brain processes that may serve as a base layer for computation. The paper is called Neurodynamical Computing at the Information Boundaries of Intelligent Systems, and you'll learn more about that in this episode. Joe's NIH page. Grace's NIH page. Twitter: Joe: @j_d_monaco Related papers Neurodynamical Computing at the Information Boundaries of Intelligent Systems. Cognitive swarming in complex environments with attractor dynamics and oscillatory computing. Spatial synchronization codes from coupled rate-phase neurons. Oscillators that sync and swarm.
Mentioned A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications. Recalling Lashley and reconsolidating Hebb. BRAIN NeuroAI Workshop (Nov 12–13) NIH BRAIN NeuroAI Workshop Program Book NIH VideoCast – Day 1 Recording – BRAIN NeuroAI Workshop NIH VideoCast – Day 2 Recording – BRAIN NeuroAI Workshop Neuromorphic Principles in Biomedicine and Healthcare Workshop (Oct 21–22) NPBH 2024 BRAIN Investigators Meeting 2020 Symposium & Perspective Paper BRAIN 2020 Symposium on Dynamical Systems Neuroscience and Machine Learning (YouTube) Neurodynamical Computing at the Information Boundaries of Intelligent Systems | Cognitive Computation NSF/CIRC Community Infrastructure for Research in Computer and Information Science and Engineering (CIRC) | NSF - National Science Foundation THOR Neuromorphic Commons - Matrix: The UTSA AI Consortium for Human Well-Being Read the transcript. 0:00 - Intro 25:45 - NeuroAI Workshop - neuromorphics 33:31 - Neuromorphics and theory 49:19 - Reflections on the workshop 54:22 - Neurodynamical computing and information boundaries 1:01:04 - Perceptual control theory 1:08:56 - Digital twins and neural foundation models 1:14:02 - Base layer of computation
Hessam Akhlaghpour is a postdoctoral researcher at Rockefeller University in the Maimon lab. His experimental work is in fly neuroscience, mostly studying spatial memories in fruit flies. However, we are going to be talking about a different (although somewhat related) side of his postdoctoral research. This aspect of his work involves theoretical explorations of molecular computation, which are deeply inspired by Randy Gallistel and Adam King's book Memory and the Computational Brain. Randy has been on the podcast before to discuss his ideas that memory needs to be stored in something more stable than the synapses between neurons, and that that something could be genetic material like RNA. When Hessam read this book, he was re-inspired to think of the brain the way he used to think of it before experimental neuroscience challenged his views. It re-inspired him to think of the brain as a computational system. But it also led to what we discuss today: the idea that RNA has the capacity for universal computation, and Hessam's development of how that might happen. So we discuss that background and story, why universal computation hasn't been discovered in organisms yet, since surely evolution has stumbled upon it, and how RNA and combinatory logic might implement universal computation in nature. Hessam's website. Maimon Lab.
Twitter: @theHessam. Related papers An RNA-based theory of natural universal computation. The molecular memory code and synaptic plasticity: a synthesis. Lifelong persistence of nuclear RNAs in the mouse brain. Cris Moore's conjecture #5 in this 1998 paper. (The Gallistel book): Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. Related episodes BI 126 Randy Gallistel: Where Is the Engram? BI 172 David Glanzman: Memory All The Way Down Read the transcript. 0:00 - Intro 4:44 - Hessam's background 11:50 - Randy Gallistel's book 14:43 - Information in the brain 17:51 - Hessam's turn to universal computation 35:30 - AI and universal computation 40:09 - Universal computation to solve intelligence 44:22 - Connecting sub and super molecular 50:10 - Junk DNA 56:42 - Genetic material for coding 1:06:37 - RNA and combinatory logic 1:35:14 - Outlook 1:42:11 - Reflecting on the molecular world
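Combinatory logic, which Hessam's proposal maps onto RNA structures, reaches universal computation with just two combinators, S and K. Here is a minimal rendering using Python closures, purely for intuition about how so little machinery can compute anything; nothing here is RNA-specific.

```python
# The S and K combinators of combinatory logic. Together they are
# Turing-complete: any computable function can be expressed by
# combining them. Rendered as curried Python closures for intuition.

K = lambda x: lambda y: x                     # K x y   -> x
S = lambda x: lambda y: lambda z: x(z)(y(z))  # S x y z -> x z (y z)

# The identity combinator is derivable rather than primitive:
# I = S K K, because S K K x -> K x (K x) -> x
I = S(K)(K)

print(I(42))        # prints 42
print(K("a")("b"))  # prints a
```

The striking point, and part of why the combinatory-logic route is attractive for a molecular substrate, is that computation reduces to blind, local rewriting of expressions built from two fixed pieces, with no global controller required.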
Tony Zador runs the Zador lab at Cold Spring Harbor Laboratory. You've heard him on Brain Inspired a few times in the past, most recently in a panel discussion I moderated at this past COSYNE conference - a conference Tony co-founded 20 years ago. As you'll hear, Tony's current and past interests and research endeavors are of a wide variety, but today we focus mostly on his thoughts on NeuroAI. We're in a huge AI hype cycle right now, for good reason, and there's a lot of talk in the neuroscience world about whether neuroscience has anything of value to provide AI engineers - and how much value, if any, neuroscience has provided in the past. Tony is team neuroscience. You'll hear him discuss why in this episode, especially when it comes to ways in which development and evolution might inspire better data efficiency, looking to animals in general to understand how they coordinate numerous objective functions to achieve their intelligent behaviors - something Tony calls alignment - and using spikes in AI models to increase energy efficiency. Zador Lab Twitter: @TonyZador Previous episodes: BI 187: COSYNE 2024 Neuro-AI Panel. BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys BI 034 Tony Zador: How DNA and Evolution Can Inform AI Related papers Catalyzing next-generation Artificial Intelligence through NeuroAI.
Encoding innate ability through a genomic bottleneck. Essays NeuroAI: A field born from the symbiosis between neuroscience, AI. What the brain can teach artificial neural networks. Read the transcript. 0:00 - Intro 3:28 - "Neuro-AI" 12:48 - Visual cognition history 18:24 - Information theory in neuroscience 20:47 - Necessary steps for progress 24:34 - Neuro-AI models and cognition 35:47 - Animals for inspiring AI 41:48 - What we want AI to do 46:01 - Development and AI 59:03 - Robots 1:25:10 - Catalyzing the next generation of AI
Karen Adolph runs the Infant Action Lab at NYU, where she studies how our motor behaviors develop from infancy onward. We discuss how observing babies at different stages of development illuminates how movement and cognition develop in humans, how variability and embodiment are key to that development, and the importance of studying behavior in real-world settings as opposed to restricted laboratory settings. We also explore how these principles and simulations can inspire advances in intelligent robots. Karen has a long-standing interest in ecological psychology, and she shares some stories of her time studying under Eleanor Gibson and other mentors. Finally, we get a surprise visit from her partner Mark Blumberg, with whom she co-authored an opinion piece arguing that "motor cortex" doesn't start off with a motor function, oddly enough, but instead processes sensory information during the first period of animals' lives. Infant Action Lab (Karen Adolph's lab) Sleep and Behavioral Development Lab (Mark Blumberg's lab) Related papers Motor Development: Embodied, Embedded, Enculturated, and Enabling An Ecological Approach to Learning in (Not and) Development An update of the development of motor behavior Protracted development of motor cortex constrains rich interpretations of infant cognition Read the transcript.