BI 183 Dan Goodman: Neural Reckoning

Update: 2024-02-06

Description

Support the show to get full episodes and join the Discord community.

You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute.

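For a flavor of what Brian code looks like, here is a minimal leaky integrate-and-fire simulation in the style of Brian 2's introductory tutorials; the parameter values (time constant, threshold, population size) are illustrative choices, not anything discussed in the episode:

from brian2 import *

# Leaky integrate-and-fire neurons, each driven toward v = 1
tau = 10*ms
eqs = 'dv/dt = (1 - v) / tau : 1'  # dimensionless membrane variable

# 10 neurons: fire a spike when v crosses 0.8, then reset to 0
G = NeuronGroup(10, eqs, threshold='v > 0.8', reset='v = 0', method='exact')
G.v = 'rand()'  # random initial conditions so the neurons desynchronize

spikes = SpikeMonitor(G)  # records every spike time and neuron index
run(100*ms)
print(spikes.count)  # number of spikes emitted by each neuron

Model equations in Brian are written as plain strings with physical units, which is a large part of what makes sketching spiking models like this quick.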

Essentially all of the current AI we use to do impressive things is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still notice, in your drunken stupor, among the thousand ways ANNs differ from brains, is that they don't use action potentials, or spikes. From the perspective of neuroscience, that can seem mighty curious, because for decades now neuroscience has focused on spikes as the things that make our cognition tick.

We count spikes, compare them across experimental conditions, and generally put a lot of stock in their importance for how brains work.

So what does it mean that modern neural networks disregard spiking altogether?

Maybe spiking really isn't necessary to process and transmit information as well as our brains do. Or maybe spiking is just one among many ways for intelligent systems to function well. Dan shares some of what he's learned and how he thinks about spiking, SNNs, and a host of other topics.

0:00 - Intro
3:47 - Why spiking neural networks, and a mathematical background
13:16 - Efficiency
17:36 - Machine learning for neuroscience
19:38 - Why not jump ship from SNNs?
23:35 - Hard and easy tasks
29:20 - How brains and nets learn
32:50 - Exploratory vs. theory-driven science
37:32 - Static vs. dynamic
39:06 - Heterogeneity
46:01 - Unifying principles vs. a hodgepodge
50:37 - Sparsity
58:05 - Specialization and modularity
1:00:51 - Naturalistic experiments
1:03:41 - Projects for SNN research
1:05:09 - The right level of abstraction
1:07:58 - Obstacles to progress
1:12:30 - Levels of explanation
1:14:51 - What has AI taught neuroscience?
1:22:06 - How has neuroscience helped AI?

Paul Middlebrooks