The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644

Update: 2023-08-28

Description

Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how shared principles of efficiency lead to consistent features emerging across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which builds on the Fourier transform and its relation to group theory; how the bispectrum is used to achieve invariance in deep neural networks; how geometric deep learning extends the concept of CNNs to other domains; and the similarities in the fundamental structure of artificial and biological neural networks, where applying similar constraints leads to convergence on similar solutions.


The complete show notes for this episode can be found at twimlai.com/go/644.
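
As a rough illustration of the invariance idea mentioned in the description (an illustrative sketch only, not the architecture from the paper; the function and variable names are ours): for a 1D signal x with discrete Fourier transform X, the classical bispectrum B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2)) is unchanged by cyclic shifts of x, because the phase factors a shift introduces cancel exactly in the triple product. A minimal NumPy sketch:

import numpy as np

def bispectrum(x):
    # B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2)), frequencies taken mod n.
    # A cyclic shift of x multiplies X(f) by exp(-2j*pi*f*t/n); those phase
    # factors cancel in the triple product, so B is shift-invariant.
    X = np.fft.fft(x)
    n = len(x)
    f = np.arange(n)
    return X[f[:, None]] * X[f[None, :]] * np.conj(X[(f[:, None] + f[None, :]) % n])

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
print(np.allclose(bispectrum(x), bispectrum(np.roll(x, 17))))  # True: invariant to a cyclic shift

Bispectral Neural Networks, as discussed in the episode, generalize this kind of invariant beyond translation to other group actions; the snippet only shows the classical, translation-invariant case.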

Sam Charrington