#144 Why is Bayesian Deep Learning so Powerful, with Maurizio Filippone - Learning Bayesian Statistics
Description
- Sign up for Alex's first live cohort, on hierarchical model building!
- Get 25% off "Building AI Applications for Data Scientists and Software Engineers"
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
- Why GPs still matter: Gaussian Processes remain a go-to for function estimation, active learning, and experimental design – especially when calibrated uncertainty is non-negotiable.
- Scaling GP inference: Variational methods with inducing points (as in GPflow) make GPs practical on larger datasets without abandoning principled Bayesian inference (see the SVGP sketch after this list).
- MCMC in practice: Clever parameterizations and gradient-based samplers improve mixing and sampling efficiency; reach for MCMC when you need gold-standard posteriors (see the non-centered PyMC sketch after this list).
- Bayesian deep learning, pragmatically: Stochastic-gradient training and approximate posteriors bring Bayesian ideas to neural networks at scale.
- Uncertainty that ships: Monte Carlo dropout and related tricks provide fast, usable uncertainty estimates – even if they're approximations (see the PyTorch sketch after this list).
- Model complexity ≠ model quality: Understanding capacity, priors, and inductive bias is key to getting trustworthy predictions.
- Deep Gaussian Processes: Layered GPs offer flexibility for complex functions, with clear trade-offs in interpretability and compute (see the two-layer prior sample after this list).
- Generative models through a Bayesian lens: GANs and friends benefit from explicit priors and uncertainty – useful for safety and downstream decisions.
- Tooling that matters: Frameworks like GPflow lower the friction from idea to implementation, encouraging reproducible, well-tested modeling.
- Where we’re headed: The future of ML is uncertainty-aware by default – integrating UQ tightly into optimization, design, and deployment.
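
The sketches below make four of these takeaways concrete. First, sparse variational GPs with inducing points: a minimal sketch assuming GPflow 2.x on TensorFlow 2. The toy data, kernel choice, and number of inducing points are illustrative, not anything prescribed in the episode.

```python
import numpy as np
import gpflow

# Toy 1-D regression data (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (1000, 1))
Y = np.sin(6 * X) + 0.1 * rng.standard_normal((1000, 1))

# A small set of M inducing points summarizes the full dataset,
# cutting the O(N^3) exact-GP cost down to O(N M^2) for M << N
Z = np.linspace(0, 1, 20)[:, None]

model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
    num_data=len(X),
)

# Maximize the ELBO (GPflow exposes it as a training-loss closure)
opt = gpflow.optimizers.Scipy()
opt.minimize(model.training_loss_closure((X, Y)), model.trainable_variables)

# Predictions come with variances, not just point estimates
X_test = np.linspace(0, 1, 100)[:, None]
mean, var = model.predict_y(X_test)
```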
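Next, the parameterization point from the MCMC takeaway. A standard illustration (not from the episode) is the eight-schools model in PyMC: sampling standardized offsets and rescaling them – the non-centered parameterization – typically mixes much better under NUTS than sampling the group effects directly.

```python
import numpy as np
import pymc as pm

# Classic eight-schools data: estimated effects and standard errors
y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])
sigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])

with pm.Model():
    mu = pm.Normal("mu", 0.0, 5.0)
    tau = pm.HalfNormal("tau", 5.0)
    # Non-centered parameterization: sample standardized offsets z,
    # then scale and shift, instead of sampling theta directly.
    # This removes the funnel geometry that stalls gradient-based samplers.
    z = pm.Normal("z", 0.0, 1.0, shape=8)
    theta = pm.Deterministic("theta", mu + tau * z)
    pm.Normal("obs", mu=theta, sigma=sigma, observed=y)
    # pm.sample defaults to NUTS, a gradient-based sampler
    idata = pm.sample(1000, tune=1000)
```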
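Monte Carlo dropout needs only a few lines in any framework; here is a minimal PyTorch sketch. The architecture and dropout rate are arbitrary – the essential move is keeping dropout active at prediction time and averaging several stochastic forward passes.

```python
import torch
import torch.nn as nn

# Any network with dropout layers will do; this one is arbitrary
net = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 100):
    """Approximate predictive mean and std from stochastic forward passes."""
    model.train()  # keep dropout ON at prediction time, unlike model.eval()
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x_test = torch.linspace(-2, 2, 50).unsqueeze(-1)
mean, std = mc_dropout_predict(net, x_test)
```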
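Finally, one way to see what "layered GPs" means: a plain NumPy sketch that draws a sample from a hypothetical two-layer deep GP prior by feeding the first layer's sample in as the second layer's inputs. The kernel and lengthscale are arbitrary choices made here for illustration.

```python
import numpy as np

def rbf(a, b, lengthscale=0.3):
    """Squared-exponential kernel matrix between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(-2.0, 2.0, 200)
jitter = 1e-8 * np.eye(len(x))  # numerical stability

# Layer 1: an ordinary GP sample evaluated at the inputs
f1 = rng.multivariate_normal(np.zeros_like(x), rbf(x, x) + jitter)

# Layer 2: the first layer's output becomes the second layer's input,
# warping the input space and yielding a non-stationary sample path
f2 = rng.multivariate_normal(np.zeros_like(x), rbf(f1, f1) + jitter)
```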
Chapters:
08:44 Function Estimation and Bayesian Deep Learning
10:41 Understanding Deep Gaussian Processes
25:17 Choosing Between Deep GPs and Neural Networks
32:01 Interpretability and Practical Tools for GPs
43:52 Variational Methods in Gaussian Processes
54:44 Deep Neural Networks and Bayesian Inference
01:06:13 The Future of Bayesian Deep Learning
01:12:28 Advice for Aspiring Researchers
01:22:09 Tackling Global Issues with AI
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Adam Tilmar Jakobsen.
Links from the show:
- Maurizio's website: https://mauriziofilippone.github.io
- Maurizio on Google Scholar: https://scholar.google.com/citations?user=ILUeAloAAAAJ&hl=en
- GANs Secretly Perform Approximate Bayesian Model Selection: https://www.youtube.com/watch?v=pnfQ2_6jGl4
- Videos of a couple of presentations on Bayesian Deep Learning:
- Aalto University 2023: https://www.youtube.com/watch?v=R2T3Z-Y3LXM
- AI Center in Prague 2023: https://www.youtube.com/watch?v=xE7TaQeLAXE
- UC Irvine 2022: https://www.youtube.com/watch?v=oZAuh686ipw
- Discussion video for “Deep Gaussian Processes for Calibration of Computer Models”, published as a discussion paper in Bayesian Analysis: https://www.youtube.com/watch?v=K_hPbvoo0_M
- Lecture on Deep Gaussian Processes at DeepBayes 2019: https://www.youtube.com/watch?v=750fRY9-uq8
- Lecture on Gaussian Processes at DeepBayes 2018: https://www.youtube.com/watch?v=zBEV5ezyYmI
- A tutorial on GPs with E. V. Bonilla at IJCAI in 2021: https://ebonilla.github.io/gaussianprocesses/
- PyData Tutorial, Mastering Gaussian Processes with PyMC: https://github.com/AlexAndorra/advanced-gp-pydata#
- LBS #136 Bayesian Inference at Scale: Unveiling INLA, with Haavard Rue & Janet van Niekerk: https://learnbayesstats.com/episode/136-bayesian-inference-at-scale-unveiling-inla-haavard-rue-janet-van-niekerk
- LBS #129 Bayesian Deep Learning & AI for Science with Vincent Fortuin: https://learnbayesstats.com/episode/129-bayesian-deep-learning-ai-for-science-vincent-fortuin
- LBS #107 Amortized Bayesian Inference with Deep Neural Networks, with Marvin Schmitt: https://learnbayesstats.com/episode/107-amortized-bayesian-inference-deep-neural-networks-marvin-schmitt
- GPflow documentation: https://www.gpflow.org/
- PyTorch docs: https://pytorch.org/
- Pyro docs: https://pyro.ai/
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.