Theoretical Neuroscience Podcast
34 Episodes
An important discovery that has come out of computational neuroscience is that cortical neurons in vivo appear to receive so-called balanced inputs. In the balanced state the excitatory and inhibitory synaptic inputs to a neuron are about equal, and action potentials occur when a fluctuation temporarily makes excitation dominate. The theory explains, for example, the observed irregular firing of cortical neurons in the background state. Today's guest was one of the key developers of the theory in the late 1990s.
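A minimal sketch, not taken from the episode, of the mechanism described above: a leaky integrate-and-fire neuron receiving excitatory and inhibitory Poisson inputs of equal average strength, so that only fluctuations drive it across threshold. All parameter values and variable names are illustrative assumptions.

    import numpy as np

    # Balanced-input sketch: mean input is zero, spikes are fluctuation-driven,
    # giving sparse, irregular background firing. Illustrative parameters only.
    rng = np.random.default_rng(0)
    dt, T = 1e-4, 5.0                 # time step and simulated duration (s)
    tau_m, v_th, v_reset = 20e-3, 1.0, 0.0
    rate_e = rate_i = 2000.0          # total excitatory/inhibitory input rates (spikes/s)
    w_e, w_i = 0.1, -0.1              # synaptic weights: excitation and inhibition cancel on average

    v, spike_times = 0.0, []
    for step in range(int(T / dt)):
        n_e = rng.poisson(rate_e * dt)          # excitatory input spikes this step
        n_i = rng.poisson(rate_i * dt)          # inhibitory input spikes this step
        v += -v * dt / tau_m + w_e * n_e + w_i * n_i
        if v >= v_th:                            # fluctuation-driven threshold crossing
            spike_times.append(step * dt)
            v = v_reset

    print(f"{len(spike_times)} spikes in {T:.0f} s: sparse, irregular firing")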
How can computational neuroscience contribute to developing neurotechnology to help people with brain disorders and disabilities? This was the topic of a panel debate I hosted at the 34th Annual Computational Neuroscience Meeting in Florence in July this year. Electric or magnetic recording and/or stimulation are key clinical tools for helping patients, and the three panelists have all used computational methods to aid this endeavor.
In an adversarial collaboration, researchers with opposing theories jointly investigate a disputed topic by designing and implementing a study in a mutually agreed, unbiased way. Results from adversarial testing of two well-known theories of consciousness, Global Neuronal Workspace Theory (GNWT) and Integrated Information Theory (IIT), were presented earlier this year. In this podcast one of the proponents and developers of IIT describes this candidate theory, as well as the design of, and results from, the adversarial study.
A promise of basic neuroscience research is that the new insights will lead to new cures for brain diseases. But has that happened so far? Today's guest, an accomplished professor of neuroscience, decided to investigate. Her book "Elusive cures: why neuroscience hasn't solved brain disorders - and how we can change that" came out this summer. Here she argues that we need to consider the brain as a complex adaptive system, not as a chain of dominoes, as in typical linear thinking.
Synaptic plasticity underlies several key brain functions, including learning, information filtering and homeostatic regulation of overall neural activity. While several mathematical rules have been developed for plasticity at both excitatory and inhibitory synapses, it has been difficult to make such rules co-exist in network models. Recently the guest's group has explored how co-dependent plasticity rules can remedy the situation and, for example, ensure that long-term memories can be stored in excitatory synapses while inhibitory synapses provide long-term stability.
Computational neuroscientists rely on simplification when they make their models. But what is the right level of simplification? When, for example, should we use a biophysically detailed model and when a simplified, abstract model of neural dynamics? What are the problems of simplifying too much, or too little? This was the topic of the panel discussion between a philosopher of science (MC), author of the recent book "The Brain Abstracted", and an experienced modeler (TS) at the FENS Regional Meeting in Oslo in June 2025.
A future computational neuroscience project could be to model not only the signal-processing properties of neurons, but also all the processes that keep a neuron alive for, say, a 100-year life span. In 2012 the guest's group published the first such whole-cell model for a very simple bacterium (M. genitalium). In 2020 a model of the larger bacterium E. coli, comprising 10,000 equations and 19,000 model parameters, was presented. How are such models built, and what can they do?
Numerous neuron models have been made, but most of them are "single-purpose" in that they are made to address a single scientific question. In contrast, multipurpose neuron models are built to address many scientific questions. In 2011 the guest published a multipurpose rodent pyramidal-cell model, which has been actively used by the community ever since. We talk about how such models are made, and how his group later built human neuron models to explore network dynamics in the brains of depressed patients.
With modern electrical and optical measurement techniques, we can now measure neural activity in hundreds or thousands of neurons simultaneously. This allows for the investigation of population codes, that is, of how groups of neurons together encode information. In 2019 today's guest published a seminal paper with collaborators at UCL in London in which analysis of optophysiological data from 10,000 neurons in mouse visual cortex revealed an intriguing population code balancing the needs for efficient and robust coding. We discuss the paper and (towards the end) also how new AI tools may be a game-changer for neuroscience data analysis.
The observed variety of dendritic structures in the brain is striking. Why are they so different, and what determines the branching patterns? Following the dictum "if you understand it, you can build it", the guest's lab builds dendritic structures in a computer and explores the underlying principles. Two key principles seem to be to minimize (i) the overall length of dendrites and (ii) the path length from the synapses to the soma.
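A toy sketch, under stated assumptions, of the wiring-cost tradeoff mentioned above (it is not the guest's actual algorithm): each hypothetical synapse location is attached to a growing tree so as to minimize the wire length of the new segment plus a balancing factor times the resulting path length back to the soma.

    import numpy as np

    # Greedy tree growth trading off total wiring (i) against synapse-to-soma
    # path length (ii). Target points, balancing factor and units are made up.
    rng = np.random.default_rng(1)
    targets = rng.uniform(-100, 100, size=(30, 2))   # hypothetical synapse locations
    bf = 0.5                                          # balancing factor (illustrative)

    nodes = [np.zeros(2)]      # node 0 is the soma
    parent = [-1]              # parent index of each node
    path_len = [0.0]           # path length from each node back to the soma
    remaining = list(range(len(targets)))

    while remaining:
        best = None
        for ti in remaining:
            for ni, node in enumerate(nodes):
                wire = np.linalg.norm(targets[ti] - node)           # new segment length
                cost = wire + bf * (path_len[ni] + wire)            # wiring + bf * path length
                if best is None or cost < best[0]:
                    best = (cost, ti, ni, wire)
        cost, ti, ni, wire = best
        nodes.append(targets[ti])
        parent.append(ni)
        path_len.append(path_len[ni] + wire)
        remaining.remove(ti)

    total_wire = sum(np.linalg.norm(nodes[i] - nodes[parent[i]]) for i in range(1, len(nodes)))
    print(f"total wiring: {total_wire:.1f} (arbitrary units)")
    print(f"mean synapse-to-soma path length: {np.mean(path_len[1:]):.1f} (arbitrary units)")

Varying the balancing factor bf shifts the resulting morphology between a minimum-wiring tree (bf near zero) and a star-like tree with short paths to the soma (large bf).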
The term "foundation model" refers to machine learning models that are trained on vast datasets and can be applied to a wide range of situations. The large language model GPT-4 is an example. The group of the guest has recently presented a foundation model for optophysiological responses in mouse visual cortex trained on recordings from 135.000 neurons in mice watching movies. We discuss the design, validation, use of this and future neuroscience foundation models.
A holy grail of the multiscale approach for physical brain modelling is to link the different scales from molecules, via cells and local neural networks, up to whole-brain models. The goal of the Virtual Brain Twin project, led by today's guest, is to use personalized human whole-brain models to aid clinicians in treating brain ailments. The podcast discusses how such models are presently made using neural field models, starting from neuron population dynamics rather than molecular dynamics.
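To illustrate what "neuron population dynamics" means at this scale, here is a minimal sketch of one classic population (neural-mass) model, the Wilson-Cowan rate equations. The Virtual Brain Twin project uses its own repertoire of such models, so this is only an illustrative stand-in with made-up parameter values.

    import numpy as np

    # Wilson-Cowan rate model: one excitatory (E) and one inhibitory (I)
    # population per brain region; whole-brain models couple many such
    # regions through the connectome. Parameters are illustrative only.
    def f(x):
        return 1.0 / (1.0 + np.exp(-x))              # population activation function

    tau_E, tau_I = 10e-3, 20e-3                       # population time constants (s)
    w_EE, w_EI, w_IE, w_II = 12.0, 10.0, 10.0, 2.0    # coupling strengths
    P = 1.5                                           # external drive to the E population

    E, I, dt = 0.1, 0.1, 1e-4
    for step in range(int(0.5 / dt)):                 # integrate for 0.5 s
        dE = (-E + f(w_EE * E - w_EI * I + P)) / tau_E
        dI = (-I + f(w_IE * E - w_II * I)) / tau_I
        E, I = E + dt * dE, I + dt * dI

    print(f"population rates after 0.5 s: E = {E:.2f}, I = {I:.2f}")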
In 1982 John Hopfield published the paper "Neural networks and physical systems with emergent collective computational abilities", describing a simple network model functioning as an associative and content-addressable memory. The paper started a new subfield of computational neuroscience and led to an influx of theoretical scientists, in particular physicists, into the field. The podcast guest wrote his PhD thesis on the model in the early 1990s, and we talk about the history and present impact of the model.
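A minimal sketch of the 1982 model itself: binary neurons, symmetric Hebbian weights built from the stored patterns, and asynchronous updates that let a corrupted cue relax to the nearest stored memory (content-addressable recall). Network size, pattern count and noise level are arbitrary choices.

    import numpy as np

    # Hopfield network: store P random patterns in N binary (+1/-1) neurons,
    # then recover one of them from a noisy cue.
    rng = np.random.default_rng(0)
    N, P = 100, 5
    patterns = rng.choice([-1, 1], size=(P, N))

    W = patterns.T @ patterns / N      # Hebbian outer-product learning rule
    np.fill_diagonal(W, 0)             # no self-connections

    cue = patterns[0].copy()
    flip = rng.choice(N, size=20, replace=False)
    cue[flip] *= -1                    # corrupt 20% of the bits

    state = cue.copy()
    for _ in range(5):                 # a few asynchronous update sweeps
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1

    print("overlap with stored pattern:", (state @ patterns[0]) / N)   # ~1.0 if recalled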
The leading theory for learning and memory in the brain is that learning is implemented by synaptic learning rules and that memories are stored in the synaptic weights between neurons. But this holds for long-term memory. What about short-term, or working, memory, where objects are kept in memory for only a few seconds? The traditional view has been that the mechanism here is different, namely persistent firing of select neurons in areas such as prefrontal cortex. But this view is challenged by recent synapse-based models explored by today's guest and others.
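A hedged sketch of the kind of short-term synaptic dynamics such synapse-based models build on, here the standard Tsodyks-Markram facilitation and depression variables rather than the specific model discussed in the episode: after a brief encoding burst, the facilitation variable stays elevated for about a second, holding the item "silently" without persistent spiking. Parameter values are illustrative.

    import numpy as np

    # Facilitation (u) and resource (x) dynamics at a single synapse.
    dt = 1e-3                          # time step (s)
    U, tau_f, tau_d = 0.2, 1.5, 0.2    # baseline release prob., facilitation and depression time constants (s)
    burst = np.arange(0.1, 0.2, 0.02)  # presynaptic spike times of the encoding burst (s)

    u, x = U, 1.0
    for step in range(int(2.0 / dt)):
        t = step * dt
        u += dt * (U - u) / tau_f      # facilitation decays slowly back to baseline
        x += dt * (1.0 - x) / tau_d    # resources recover quickly
        if np.any(np.abs(burst - t) < dt / 2):  # presynaptic spike arrives
            u += U * (1.0 - u)         # calcium-dependent facilitation
            x -= u * x                 # vesicle depletion
        if step % 500 == 0:
            print(f"t = {t:.1f} s  u = {u:.2f}  x = {x:.2f}  efficacy u*x = {u*x:.2f}")

The slow decay of u (time constant tau_f of order a second) is what lets the synapse, rather than ongoing firing, carry the memory trace.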
In September Paul Middlebrooks, the producer of the podcast BrainInspired, and I were both at a neuro-AI workshop on a coastal liner cruising the Norwegian fjords. We decided to make two joint podcasts with some of the participants in which we discuss the role of AI in neuroscience. In this second part we discuss the topic with Cristina Savin and Tim Vogels, and round off with a brief conversation with Mikkel Lepperød, the main organizer of the workshop, about what he learned from it.
In September Paul Middlebrooks, the producer of the podcast BrainInspired, and I were both at a neuro-AI workshop on a coastal liner cruising the Norwegian fjords. We decided to make two joint podcasts with some of the participants in which we discuss the role of AI in neuroscience. In this first part we talk with Mikkel Lepperød, the main organizer, about the goal of the workshop, and with Ken Harris and Andreas Tolias about how AI has affected their research as neuroscientists and their thoughts about the future of neuro-AI.
Most of what we have learned about the functioning of the living brain has come from extracellular electrical recordings, such as measurements of spikes, LFP, ECoG and EEG signals. Most analysis of these recordings has been statistical, looking for correlations between the recorded signals and what the animal or human is doing or being exposed to. However, starting with the neuron rather than the data, these electrical brain signals can also be computed from biophysics-based forward models, and this is the topic of this podcast.
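For concreteness, a minimal sketch of the volume-conductor forward model this refers to: in an infinite homogeneous medium with conductivity sigma, the extracellular potential at an electrode is the sum of each compartment's transmembrane current weighted by 1/(4*pi*sigma*distance) (the point-source approximation). The compartment positions and currents below are made up for illustration.

    import numpy as np

    # Point-source forward model for the extracellular potential.
    sigma = 0.3                                  # extracellular conductivity (S/m)
    comp_pos = np.array([[0, 0, z] for z in range(0, 1000, 100)], dtype=float)  # compartment positions (µm)
    I_m = np.array([1.0, 0.5, 0.2, 0.1, 0.0, 0.0, -0.2, -0.4, -0.5, -0.7])      # transmembrane currents (nA), sum to zero

    electrode = np.array([50.0, 0.0, 200.0])     # recording site (µm)
    dist = np.linalg.norm(comp_pos - electrode, axis=1) * 1e-6   # µm -> m
    phi = np.sum(I_m * 1e-9 / (4 * np.pi * sigma * dist))        # potential in volts

    print(f"extracellular potential: {phi * 1e6:.2f} µV")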
The most prominent visual characteristic of neurons is their dendrites. Even more than 100 years after their first observation by Cajal, their function is not fully understood. Biophysical modeling based on cable theory is a key research tool for exploring putative functions, and today's guest is one of the leading researchers in this field. We talk about passive and active dendrites, the kind of filtering of synaptic inputs they support, the key role of synapse placement, and how the inclusion of dendrites may facilitate AI.
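For reference, the passive cable equation that this kind of modeling starts from, in its standard form (V is the membrane potential, x the distance along the cable, r_m and c_m the membrane resistance and capacitance per unit length, and r_a the axial resistance per unit length):

    \lambda^{2}\,\frac{\partial^{2} V}{\partial x^{2}}
      = \tau_{m}\,\frac{\partial V}{\partial t} + V,
    \qquad \lambda = \sqrt{r_{m}/r_{a}},
    \qquad \tau_{m} = r_{m} c_{m}

The length constant lambda and time constant tau_m set how strongly a synaptic input is attenuated and low-pass filtered on its way from the dendrite to the soma.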
The greatest mystery of all is why a group of atoms, like the ones constituting me, can feel anything. The mind-brain problem has puzzled philosophers for millennia. Thanks to pioneers like Christof Koch, consciousness studies have recently become a legitimate field of scientific inquiry. In this vintage episode, recorded in February 2021, we discuss many aspects of the phenomenon, including an intriguing candidate theory: Integrated Information Theory.
Computational neuroscientists use many software tools, and NEURON has become the leading tool for biophysical modeling of neurons and neural networks. Today's guest has been the leading developer of NEURON since its infancy almost 50 years ago. We talk about how the tool got started and its development up until today's modern version of the software, including CoreNEURON, optimized for parallel execution of large-scale network models on multicore supercomputers.
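As a flavor of what working with the tool looks like, here is a minimal, generic usage sketch of NEURON's Python interface (not code from the episode); the section name, geometry and stimulus parameters are arbitrary choices.

    from neuron import h

    # Single-compartment soma with Hodgkin-Huxley channels, driven by a current pulse.
    h.load_file("stdrun.hoc")            # load NEURON's standard run system

    soma = h.Section(name="soma")
    soma.L = soma.diam = 20              # geometry (µm)
    soma.insert("hh")                    # Hodgkin-Huxley sodium, potassium and leak channels

    stim = h.IClamp(soma(0.5))           # current-clamp electrode at the middle of the soma
    stim.delay, stim.dur, stim.amp = 5, 20, 0.1   # ms, ms, nA

    t = h.Vector().record(h._ref_t)      # record time and membrane potential
    v = h.Vector().record(soma(0.5)._ref_v)

    h.finitialize(-65)                   # initialize membrane potential (mV)
    h.continuerun(40)                    # run for 40 ms

    print(f"simulated {len(t)} time points; peak Vm = {max(v):.1f} mV")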



