Variational Autoencoders: the cognitive scientist's favorite deep learning tool (realraum)

Update: 2025-10-24

Description

Variational Autoencoders (VAEs) were first introduced as early concept learners in the vision domain. Since then, they have become a staple tool in generative modeling, representation learning, and unsupervised learning more broadly. Using them as analogues of human cognition is an early step towards more complex cognitive models, and ultimately towards models of human brain function and behavior. As part of a series of talks on cognitive science and deep learning at the realraum in Graz, this presentation will focus on the role of VAEs in cognitive science research.

Topics:
- Supervised vs. unsupervised learning
- Deep Learning basics: classifiers and backpropagation
- Autoencoders: architecture, training, embedding, and generative modeling
- Variational Autoencoders: statistical latent space and the reparametrization trick
- Training VAEs: loss functions, optimization, and the KL divergence (a minimal sketch follows this list)
- Concept learning: VAEs in cognitive science
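
The last three topics can be made concrete with a small code example. Below is a minimal VAE sketch in PyTorch, assuming flattened inputs scaled to [0, 1]; the 784-dimensional input, layer sizes, and activations are illustrative assumptions, not details taken from the talk. The encoder outputs the mean and log-variance of a Gaussian latent distribution, the reparametrization trick makes sampling differentiable, and the loss combines a reconstruction term with the KL divergence to a standard normal prior.

```python
# Minimal VAE sketch, assuming PyTorch; sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: maps an input vector to the parameters of a Gaussian latent.
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.enc_mu = nn.Linear(hidden_dim, latent_dim)
        self.enc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: maps a latent sample back to the input space.
        self.dec = nn.Linear(latent_dim, hidden_dim)
        self.dec_out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.enc_mu(h), self.enc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Reparametrization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so gradients can flow through mu and logvar despite the sampling step.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        h = F.relu(self.dec(z))
        return torch.sigmoid(self.dec_out(h))  # outputs in [0, 1]

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # ELBO-style loss: reconstruction error plus the KL divergence between
    # the approximate posterior N(mu, sigma^2) and the standard normal prior.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Because both the approximate posterior and the prior are Gaussian, the KL term has the closed form used in vae_loss; a training step would simply score a minibatch with this loss and backpropagate as with any other network.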

https://creativecommons.org/licenses/by-sa/4.0/
about this event: https://cfp.realraum.at/realraum-october/talk/LHH3M9/