Variational Autoencoders: the cognitive scientist's favorite deep learning tool (realraum)
Update: 2025-10-24
Description
Variational Autoencoders (VAEs) were first introduced as early concept learners in the vision domain. Since then, they have become a staple tool in generative modeling, representation learning, and unsupervised learning more broadly. Their use as analogues of human cognition is an early step toward richer cognitive models, and ultimately toward models of human brain function and behavior. As part of a series of talks on cognitive science and deep learning at the realraum in Graz, this presentation will focus on the role of VAEs in cognitive science research.
Topics:
- Supervised vs. unsupervised learning
- Deep Learning basics: classifiers and backpropagation
- Autoencoders: architecture, training, embedding, and generative modeling
- Variational Autoencoders: statistical latent spaces and the reparametrization trick (see the sketch after this list)
- Training VAEs: loss functions, optimization, and the KL divergence
- Concept learning: VAEs in cognitive science
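
For readers who want to see the moving parts before the talk, here is a minimal sketch of a VAE, written in PyTorch as an assumed framework (the talk does not prescribe one; all names, layer sizes, and hyperparameters are illustrative). It shows the encoder producing the parameters of a statistical latent space, the reparametrization trick that keeps sampling differentiable, and a loss combining reconstruction error with the KL divergence to a standard normal prior:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal fully-connected VAE (illustrative sizes for 28x28 images)."""

    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: input -> hidden -> (mean, log-variance) of a diagonal Gaussian
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: latent sample -> hidden -> reconstruction
        self.dec = nn.Linear(latent_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Reparametrization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so gradients flow through mu and sigma despite the sampling step.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        return torch.sigmoid(self.out(F.relu(self.dec(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Negative ELBO = reconstruction error
    #               + KL( N(mu, sigma^2) || N(0, I) ), in closed form.
    recon_err = F.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

# Usage sketch: one gradient step on a random stand-in batch.
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)          # stand-in for a batch of flattened images
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
opt.zero_grad()
loss.backward()
opt.step()
```

Without the reparametrization trick, the sampling step would block backpropagation into the encoder; rewriting the sample as a deterministic function of (mu, sigma) plus external noise is what makes end-to-end training possible.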
https://creativecommons.org/licenses/by-sa/4.0/
about this event: https://cfp.realraum.at/realraum-october/talk/LHH3M9/