The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Author: Sam Charrington

Subscribed: 27,910 · Played: 316,588


Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders.

Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator and thought leader.

Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science and more.
356 Episodes
Today we’re joined by Jannis Born, Ph.D. student at ETH & IBM Research Zurich. We caught up with Jannis a few weeks back at NeurIPS, to discuss:  His research paper “PaccMann^RL: Designing anticancer drugs from transcriptomic data via reinforcement learning,” a framework built to accelerate new anticancer drug discovery.  How his background in cognitive science and computational neuroscience applies to his current ML research. How reinforcement learning fits into the goal of cancer drug discovery, and how deep learning has changed this research. Jannis describes a few interesting observations made during the training of their DRL learner.  And of course, Jannis offers us a step-by-step walkthrough of how the framework works to predict the sensitivity of cancer drugs on a cell and subsequently discover new anticancer drugs.  Check out the complete show notes for this episode at 
Today we’re joined by Blaise Aguera y Arcas, a distinguished scientist at Google. We had the pleasure of catching up with Blaise at NeurIPS last month, where he was invited to speak on “Social Intelligence.” In our conversation, we discuss: Blaise’s role at Google, where he leads the Cerebra team.  Their approach to machine learning at the company, and how they differ from the more forward-facing Google Brain team.  Blaise gives us a look into his presentation, discussing today’s ML landscape. The gap between AI and ML/DS research, what it means and why it exists. The difference between intelligent systems and what we would deem to be “actual intelligence.”  What does optimizing truly mean when training models? Check out the complete show notes for this episode at
Today we’re joined by Pablo Samuel Castro, Staff Research Software Developer at Google. Pablo, whose research mainly focuses on reinforcement learning, caught up with us at NeurIPS last month. We cover a lot of ground in our conversation, including his love for music and how it has guided his work on the Lyric AI project, as well as a few of his other NeurIPS submissions, including “A Geometric Perspective on Optimal Representations for Reinforcement Learning” and “Estimating Policy Functions in Payments Systems using Deep Reinforcement Learning.” Check out the complete show notes at
Today we close out AI Rewind 2019 joined by Amir Zamir, who recently began his tenure as an Assistant Professor of Computer Science at the Swiss Federal Institute of Technology. Amir joined us back in 2018 to discuss his CVPR Best Paper winner, and in today’s conversation, we continue with the thread of Computer Vision. In our conversation, we discuss quite a few topics, including Vision-for-Robotics, the expansion of the field of 3D Vision, Self-Supervised Learning for CV Tasks, and much more! Check out the rest of the series at The complete show notes for this episode can be found at  
Today we continue the AI Rewind 2019 joined by friend-of-the-show Nasrin Mostafazadeh, Senior AI Research Scientist at Elemental Cognition. We caught up with Nasrin to discuss the latest and greatest developments and trends in Natural Language Processing, including Interpretability, Ethics, and Bias in NLP, how large pre-trained models have transformed NLP research, and top tools and frameworks in the space. The complete show notes can be found at Check out the rest of the series at!
Today we keep the 2019 AI Rewind series rolling with friend-of-the-show Timnit Gebru, a research scientist on the Ethical AI team at Google. A few weeks ago at NeurIPS, Timnit joined us to discuss the ethics and fairness landscape in 2019. In our conversation, we discuss diversification of NeurIPS, with groups like Black in AI, WiML and others taking huge steps forward, trends in the fairness community, quite a few papers, and much more. We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via twitter @samcharrington or @twimlai. The complete show notes for this episode can be found at Check out the rest of the series at!
Today we continue to review the year that was 2019 via our AI Rewind series, and do so with friend of the show Chelsea Finn, Assistant Professor in the Computer Science Department at Stanford University. Chelsea’s research focuses on Reinforcement Learning, so we couldn’t think of a better person to join us to discuss the topic. In this conversation, we cover topics like Model-based RL, solving hard exploration problems, along with RL libraries and environments that Chelsea thought moved the needle last year.  We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via twitter @samcharrington or @twimlai. The complete show notes for this episode can be found at Check out the rest of the series at!
Today we kick off our 2019 AI Rewind Series joined by Zack Lipton, a jointly appointed Professor in the Tepper School of Business and the Machine Learning Department at CMU. You might remember Zack from our conversation earlier this year, “Fairwashing” and the Folly of ML Solutionism, which you can find at In our conversation, Zack recaps advancements across the vast fields of Machine Learning and Deep Learning, including trends, tools, research papers and more. We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via Twitter @samcharrington or @twimlai. To get the complete show notes for this episode, head over to   
Today we close out our 2019 NeurIPS series with Mohamed Sidahmed, Machine Learning and Artificial Intelligence R&D Manager at Shell. In our conversation, we discuss the papers Mohamed and his team submitted to the conference this year, in particular: Accelerating Least Squares Imaging Using Deep Learning Techniques, which details how researchers can efficiently reconstruct images using a deep learning framework, and FaciesNet: Machine Learning Applications for Facies Classification in Well Logs, which Mohamed describes as “A novel way of designing a new architecture for how we use sequence modeling and recurrent networks to be able to break out of the benchmark for classifying the different types of rock.” The full show notes for this episode can be found at Make sure you head over to to follow along with this series!
Today we continue our 2019 NeurIPS coverage joined by Daphne Koller, co-Founder and former co-CEO of Coursera and Founder and CEO of Insitro. We caught up with Daphne to discuss: Her background in machine learning, beginning in ‘93, her work with the Stanford online machine learning courses, and eventually her work at Coursera. The current landscape of pharmaceutical drug discovery, including the current pricing of drugs and misconceptions about why drugs are so expensive. Her work at Insitro, a company looking to advance drug discovery and development with machine learning. An overview of Insitro’s goal of using ML as a “compass” in drug discovery. How Insitro functions as a company in this space, including their focus on the biology of drug discovery and the landscape of ML techniques being used. Daphne’s thoughts on AutoML, and much more! The full show notes for this episode can be found at Make sure you head over to to follow along with this series!
Today we continue our 2019 NeurIPS coverage, this time around joined by Blake Richards, Assistant Professor at McGill University and a Core Faculty Member at Mila. In our conversation, we discuss: His invited talk at the Neuro-AI Workshop “Sensory Prediction Error Signals in the Neocortex.”  His recent studies on two-photon calcium imaging, predictive coding, and hierarchical inference. Blake’s recent work on memory systems for reinforcement learning.  The complete show notes for this episode can be found at Make sure you head over to to follow along with this series!
Today we begin our coverage of the 2019 NeurIPS conference with Celeste Kidd, Assistant Professor of Psychology at UC Berkeley. In our conversation, we discuss: The research at the Kidd Lab, which is focused on understanding “how people come to know what they know.” Her invited talk “How to Know,” which details the core cognitive systems people use to guide their learning about the world. Why people are curious about some things but not others. How our past experiences and existing knowledge shape our future interests. Why people believe what they believe, and how these beliefs are influenced in one direction or another. How machine learning figures into this equation. Check out the complete show notes for this episode at You can also follow along with this series at
Today we’re joined by Feng Yan, Assistant Professor at the University of Nevada, Reno. In our conversation, we discuss: ALERTWildfire, a camera-based network infrastructure that captures imagery of wildfires. The many purposes of ALERTWildfire, including the discovery of wildfires, the ability to scale resources accordingly, and a few others. The development of the machine learning models and surrounding infrastructure used in ALERTWildfire. Problem formulation and challenges with using camera and satellite data in this use case. How they have combined the use of Infra-as-a-Service and Function-as-a-Service tools for cost-effectiveness and scalability. Check out the complete show notes at
Today we’re joined by Dave Castillo, Managing Vice President for ML at Capital One and head of their Center for Machine Learning. We caught up with Dave at re:Invent to discuss the aforementioned Center for Machine Learning, and what has changed since our last discussion with Capital One, which you can find at In our conversation we explore: Capital One’s transition from “lab-based” machine learning to “enterprise-wide” adoption and support of ML. Surprising machine learning use cases, like granting employee access privileges via an automated system. Their current platform ecosystem, including their design vision in building this into a larger, all-encompassing platform, pain points in building this platform, and more. Check out the complete show notes for this episode at
Today we’re joined by Bryton Shang, Founder & CEO at Aquabyte. We caught up with Bryton after his talk at re:Invent’s ML Summit to discuss: Aquabyte, a company focused on the application of computer vision to fish farming. How Bryton identified the various problems associated with mass fish farming and how he eventually moved to Norway to develop the solution. The challenges of developing machine learning solutions that can measure the height and weight of fish. How they use computer vision algorithms to assess issues like sea lice, which can account for up to 25% of the cost of running farms. Cool new features currently in the works, like facial recognition for fish! The complete show notes for this episode can be found at
Today we kick off our re:Invent 2019 series with Ville Tuulos, Machine Learning Infrastructure Manager at Netflix. At re:Invent, Netflix announced the open-sourcing of Metaflow, their “human-centric framework for data science.” In our conversation, we discuss all things Metaflow, including: The problem Metaflow is trying to solve. Why it was important for Netflix to open-source Metaflow. Core features. The user experience accessing and managing data, experimentation, training and model development. The various supported tools and libraries. If you’re interested in checking out a Metaflow democast with Ville, reach out at!
Today we’re joined by Stephen Merity, startup founder and independent researcher, with a focus on NLP and Deep Learning. In our conversation, we discuss: Stephen’s newest paper, Single Headed Attention RNN: Stop Thinking With Your Head. His motivations for writing the paper: NLP research has recently been dominated by transformer models, which are not the most accessible or trainable for broad use. The architecture of transformer models. How Stephen decided to use SHA-RNNs for this research. How Stephen built and trained the model, for which the code is available on Github. His approach to benchmarking this project. Stephen’s goals for this research in the broader NLP research community. The complete show notes for this episode can be found at There you’ll find links to both the paper referenced in this interview, and the code. Enjoy!
In this TWIML Democast, we're joined by SigOpt Co-Founder and CEO Scott Clark. Scott details the SigOpt platform, and gives us a live demo! This episode is best consumed by watching the corresponding video demo, which you can find at     
In the final episode of our Azure ML series, we’re joined by Erez Barak, Partner Group Manager of Azure ML at Microsoft. In our conversation, we discuss: Erez’s AutoML philosophy, including how he defines “true AutoML” and his take on the AutoML space, its role and its importance. We also discuss in great detail the application of AutoML as a contributor to the end-to-end data science process, which Erez breaks down into three key areas: Featurization, Learner/Model Selection, and Tuning/Optimizing Hyperparameters. Finally, we discuss post-deployment AutoML use cases and other areas under the AutoML umbrella that are currently generating excitement. Check out the complete show notes at!
Today we continue our Azure ML at Microsoft Ignite series joined by Sarah Bird, Principal Program Manager at Microsoft. In our conversation, we discuss: Sarah’s work in machine learning systems, with a focus on bringing machine learning research into production through Azure ML, with an emphasis on responsible AI. A set of newly released tools focused on responsible machine learning, including the Azure Machine Learning Interpretability Toolkit. Moving from “black-box” models to “glass-box” models. Sarah’s recent work in differential privacy, including its risks and benefits. Her work in the broader ML community, including being a founding member of the MLSys conference and its workshops. Check out the complete show notes at
Comments (15)

Özgür Yüksel

Thanks a lot for introducing us to the genius of our age. Tremendously inspiring.

Dec 11th

Glory Dey

A very good insightful episode, Maki Moussavi explains the various points in a lucid manner. Truly, we are the captain of our life's ship. We are responsible for our own emotions and actions. Being proactive rather than reactive is the key to success and happiness! I will be reading this book! Thanks for sharing this interesting podcast. Have a great day!

Oct 15th

Glory Dey

I love this channel and all the great podcasts. The topics are very relevant and the speakers are well-informed experts, so the episodes are very educative. Only one request: please change the opening music of the podcast. The tune is very unpleasant and sets a jarring tone right at the beginning. Otherwise, all these episodes are very interesting in the field of innovations in Artificial Intelligence and Machine Learning! Regards!

Jun 25th

Billy Bloomer

so smart you can smell it

Jun 14th

raqueeb shaikh

great podcast

May 31st

Loza Boza

Phenomenal discussion. Thank you! Particularly enjoyed the parts on generative models and the link to Daniel Kahneman.

May 20th

simon abdou

Horrible Audio

May 9th

Özgür Yüksel

This is a very realistic and proper episode, which explains quantum computing well even on its own.

Apr 9th


Hello all, thanks for the podcast. Can we combine the learnings of two agents from the same environment to find the best actions? Thanks

Mar 14th

Bhavul Gauri

Notes:
* Data scientists are not trained to think about cost optimization. Plotting CPU usage vs. accuracy gives an idea of it: if you use 4x as much data just to gain a 1% increase in accuracy, that may not be worthwhile, because you're using 4x as much CPU power.
* A team dedicated just to monitoring:
  i. Monitor inputs: each feature should stay within its expected range, and the ratio of nulls shouldn't change by a lot.
  ii. Monitor both business and model metrics. Sometimes model metrics improve while business metrics decline, e.g. better autocompletion leading to lower spell-check performance; or the change could be due to other factors, or seasonality.
* Pair data scientists with ML engineers. ML engineers get to learn about the model while data scientists come up with it; both use the same language. ML engineers make sure it gets scaled up and deployed to production.
* Identify which parameters stay roughly stable no matter how many times you retrain vs. which are volatile. The volatile ones could cause drastic changes, so you can reverse-engineer issues this way.
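The input-monitoring idea in the notes above (per-feature range checks plus null-ratio drift) can be sketched roughly as follows; the function name, thresholds, and data shapes are illustrative assumptions, not anything described in the episode:

```python
def check_inputs(batch, feature_ranges, baseline_null_ratio, null_tolerance=0.05):
    """Flag a batch whose features drift outside their expected ranges,
    or whose null ratio shifts versus a baseline measured on training data."""
    alerts = []
    for name, values in batch.items():
        lo, hi = feature_ranges[name]
        non_null = [v for v in values if v is not None]
        # Range check: observed values should stay inside the range
        # each feature is "supposed to have".
        out_of_range = [v for v in non_null if not (lo <= v <= hi)]
        if out_of_range:
            alerts.append(f"{name}: {len(out_of_range)} values outside [{lo}, {hi}]")
        # Null-ratio check: the share of missing values "shouldn't
        # change by a lot" relative to the baseline.
        null_ratio = 1 - len(non_null) / len(values)
        if abs(null_ratio - baseline_null_ratio[name]) > null_tolerance:
            alerts.append(
                f"{name}: null ratio {null_ratio:.2f} "
                f"vs baseline {baseline_null_ratio[name]:.2f}"
            )
    return alerts
```

In practice a monitoring team would run checks like these on every scoring batch and alert on the returned messages, alongside the business- and model-metric dashboards mentioned above.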

Mar 11th

Abhijeet Gulati

Great podcast. Do we have references to the papers that were discussed by Ganju? Good job

Jan 22nd

Khaled Zamer

Super.. very informative. Thanks

Aug 26th

Printing Printing

there is no content lol. Host, please invite real scientists

Jan 1st

James Flint

This is an incredible interview. Dopamine as a correlate of prediction error makes so much sense. Best Twiml talk to date!

Dec 30th

Qanit Al-Syed

conversations drag too much. gets boring. stop the marketing and get to the content

Dec 30th