This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Author: Sam Charrington



This Week in Machine Learning & AI is the most popular podcast of its kind, catering to a highly targeted audience of machine learning & AI enthusiasts. They are data scientists, developers, founders, CTOs, engineers, architects, IT & product leaders, as well as tech-savvy business leaders. These creators, builders, makers and influencers value TWiML as an authentic, trusted and insightful guide to all that’s interesting and important in the world of machine learning and AI.
Technologies covered include: machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, and more.
306 Episodes
Today we are joined by Anubhav Jain, Staff Scientist & Chemist at Lawrence Berkeley National Lab. Anubhav leads the Hacker Materials Research Group, where his research focuses on applying computing to accelerate the process of finding new materials for functional applications. With the immense amount of published scientific research out there, it can be difficult to understand how that information can be applied to future studies, let alone find a way to read it all. In this episode we discuss:
- His latest paper, ‘Unsupervised word embeddings capture latent knowledge from materials science literature’
- The design of a system that uses natural language processing to analyze, synthesize, and conceptualize complex materials science concepts from the literature
- How the method is shown to recommend materials for future functional applications - scientific literature mining at its best
Check out the complete show notes at
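The core idea - ranking candidate materials by how close their embeddings sit to a property word - can be sketched in a few lines. This is a minimal toy, not the paper's system: the vectors below are invented, whereas the real work learns word2vec-style embeddings from millions of abstracts.

```python
from math import sqrt

# Toy vectors standing in for embeddings learned from the literature.
# Values are invented for illustration only.
EMBEDDINGS = {
    "thermoelectric": (0.9, 0.1, 0.2),
    "Bi2Te3": (0.8, 0.2, 0.1),
    "PbTe": (0.7, 0.3, 0.2),
    "NaCl": (0.1, 0.9, 0.4),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def rank_materials(property_word, candidates):
    """Rank candidate materials by embedding similarity to a property word."""
    query = EMBEDDINGS[property_word]
    return sorted(candidates, key=lambda m: cosine(query, EMBEDDINGS[m]), reverse=True)

print(rank_materials("thermoelectric", ["NaCl", "Bi2Te3", "PbTe"]))
# → ['Bi2Te3', 'PbTe', 'NaCl']
```

With real literature-trained embeddings, the same similarity query is what lets the method surface thermoelectric candidates years before they were reported as such.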
You asked, we listened! Today, by listener request, we are joined by Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University. Cynthia is passionate about machine learning and social justice, with extensive work and leadership in both areas. In this episode we discuss:
- Her paper, ‘Please Stop Explaining Black Box Models for High Stakes Decisions’
- How interpretable models make for less error-prone and more comprehensible decisions - and why we should care
- A breakdown of black box and interpretable models, including their development, sample use cases, and more!
Check out the complete show notes at
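What an interpretable model looks like in practice can be sketched with a toy point-based risk score, in the spirit of the scoring systems Rudin favors over black boxes. The features, thresholds, and point values here are invented for illustration, not taken from the paper.

```python
# Each rule is a named condition with a visible point contribution,
# so every decision can be audited rule by rule (all values hypothetical).
RULES = [
    ("age over 65",         lambda p: p["age"] > 65,          2),
    ("two or more priors",  lambda p: p["prior_events"] >= 2, 3),
    ("currently medicated", lambda p: p["on_medication"],    -1),
]

def risk_score(patient):
    # Sum the points of every rule that applies; the breakdown by rule
    # is exactly the explanation - nothing post-hoc is needed.
    return sum(points for _, applies, points in RULES if applies(patient))

patient = {"age": 70, "prior_events": 1, "on_medication": True}
print(risk_score(patient))  # 2 (age) - 1 (medication) = 1
```

The contrast with explaining a black box is that here the explanation *is* the model: there is no gap between what is reported and what was computed.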
Today we’re joined by Dr. Kate Darling, Research Specialist at the MIT Media Lab. Kate’s focus is on robot ethics and interaction, namely the social implications of how people treat robots and the purposeful design of robots in our daily lives. This episode is a fascinating look into the intersection of psychology and how we use technology. We cover topics like:
- How to measure empathy
- The impact of robot treatment on kids’ behavior
- The correlation between animals and robots
- Why ‘successful’ robots aren’t always humanoid, and so much more!
Today we’re joined by Danny Stoll, Research Assistant at the University of Freiburg. Since high school, Danny has been fascinated by deep learning, a fascination that has grown into a desire to make machine learning available to anyone with interest. Danny’s current research is encapsulated in his latest paper, ‘Learning to Design RNA’. Designing RNA molecules has become increasingly popular, as RNA is responsible for regulating biological processes and is even connected to diseases like Alzheimer’s and epilepsy. In this episode, Danny discusses:
- The RNA design process through reverse engineering
- How his team’s deep learning algorithm is applied to train and design sequences
- Transfer learning and multitask learning
- Ablation studies, hyperparameter optimization, the difference between chemical and statistical approaches, and more!
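The "reverse engineering" framing can be illustrated with a deliberately naive baseline: walk a target secondary structure in dot-bracket notation and assign complementary bases to paired positions. This toy is only to show the problem shape - the paper's approach instead learns a design policy with deep reinforcement learning.

```python
def design_sequence(structure):
    """Naive RNA design: given a dot-bracket target structure, assign a
    G-C pair to each matched bracket pair and a placeholder base elsewhere."""
    sequence = ["A"] * len(structure)  # unpaired sites get a placeholder base
    open_sites = []
    for i, symbol in enumerate(structure):
        if symbol == "(":
            open_sites.append(i)
        elif symbol == ")":
            j = open_sites.pop()  # match with the most recent open bracket
            sequence[j], sequence[i] = "G", "C"  # Watson-Crick complement
    return "".join(sequence)

print(design_sequence("((..))"))  # → GGAACC
```

Real design is much harder because a sequence can fold into structures other than the intended one, which is why learned methods that account for the folding model outperform rule-based assignment.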
Today we’re joined by Theofanis Karayannis, Assistant Professor at the Brain Research Institute of the University of Zurich. Theo’s research is currently focused on understanding how circuits in the brain are formed during development and modified by experiences. Working with animal models, Theo segments and classifies the brain regions, then detects cellular signals that make connections throughout and between each region. How? The answer is (relatively) simple: deep learning. In this episode we discuss:
- Adapting deep learning methods to fit the biological scope of work
- The distribution of connections that drives neurological decisions in both animals and humans every day
- The way images of the brain are collected
- Genetic trackability, and more!
Today we’re joined by Emma Strubell, currently a visiting scientist at Facebook AI Research. Emma’s focus is on NLP and bringing state-of-the-art NLP systems to practitioners by developing efficient and robust machine learning models. Her paper, ‘Energy and Policy Considerations for Deep Learning in NLP’, hones in on one of the biggest topics of the generation: environmental impact. In this episode we discuss:
- How training neural networks has driven increases in accuracy, while the computational resources required to train these models are staggering - and carbon footprints are only getting bigger
- Emma’s research methods for determining carbon emissions
- How companies are reacting to environmental concerns
- What we, as an industry, can be doing better
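The accounting behind such estimates is straightforward: energy drawn by the hardware, scaled by datacenter overhead, multiplied by the grid's carbon intensity. The sketch below follows that general recipe; the default constants are illustrative stand-ins, not the paper's exact figures.

```python
def training_co2_kg(gpu_count, gpu_watts, hours, pue=1.58, kg_co2_per_kwh=0.433):
    """Rough CO2 estimate (kg) for a training run.

    pue: power usage effectiveness, the datacenter overhead multiplier.
    kg_co2_per_kwh: grid carbon intensity. Both defaults are illustrative.
    """
    kwh = gpu_count * gpu_watts * hours * pue / 1000.0
    return kwh * kg_co2_per_kwh

# e.g., a hypothetical 24-hour run on 8 GPUs drawing 300 W each
print(round(training_co2_kg(8, 300, 24), 1))  # → 39.4
```

The point of the paper is that for large models the `hours` term explodes - hyperparameter searches multiply a single run by hundreds or thousands - so the same simple arithmetic yields startling totals.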
Today we’re joined by Zachary Lipton, Assistant Professor in the Tepper School of Business. With an overarching theme of data quality and interpretation, Zachary's research is focused on machine learning in healthcare, with the goal not of replacing doctors but of assisting them through an understanding of the diagnosis and treatment process. Zachary is also working on the broader question of fairness and ethics in machine learning systems across multiple industries. We delve into these topics today, discussing:
- Supervised learning in the medical field
- Guaranteed robustness under distribution shifts
- The concept of ‘fairwashing’
- How machine learning still lacks sufficient language to encompass abstract ethical behavior, and much, much more
Today we’re joined by Dr. Stephen Odaibo, Founder and CEO of RETINA-AI Health Inc. Stephen’s unique journey to machine learning and AI includes degrees in math, medicine, and computer science, which led him to an ophthalmology practice before taking on the ultimate challenge as an entrepreneur. In this episode we discuss:
- How RETINA-AI Health harnesses the power of machine learning to build autonomous systems that diagnose and treat retinal diseases
- The importance of domain experience, and how Stephen’s expertise in ophthalmology and engineering, along with the current state of both industries, led to the founding of his company
- His work with GANs to create artificial retinal images, and why more data isn’t always better!
Today we’re joined by Rayid Ghani, Director of the Center for Data Science and Public Policy at the University of Chicago. Rayid’s goal is to combine his skills in machine learning and data with his desire to improve public policy and the social sector. Drawing on his range of experience from the corporate world to Chief Scientist for the 2012 Obama Campaign, we delve into the world of automated predictions and explainability methods. Here we discuss:
- How automated predictions can be helpful, but don’t always paint a full picture
- Why, in public policy and the social sector, the key to an effective explainability method is the correct context
- Machine feedback loops that help humans override wrong predictions and reinforce right ones
- Supporting proactive intervention through complex explainability tools
Today we’re joined by Michael Levin, Director of the Allen Discovery Center at Tufts University. Michael joined us back at NeurIPS to discuss his invited talk “What Bodies Think About: Bioelectric Computation Beyond the Nervous System as Inspiration for New Machine Learning Platforms.” In our conversation, we talk about:
- Synthetic living machines, novel AI architectures, and brain-body plasticity
- How our DNA doesn’t control everything the way we thought, and how the behavior of cells in living organisms can be modified and adapted
- Dynamic remodeling of biological systems in the future of developmental biology and regenerative medicine... and more!
The complete show notes for this episode can be found at
Register for TWIMLcon: AI Platforms now at
Comments (9)

Glory Dey

I love this channel and all the great podcasts. The topics are very relevant and the speakers are well-informed experts, so the episodes are very educative. My only request: please change the opening music of the podcast - the current tune is very unpleasant and sets a jarring tone right at the beginning. Otherwise, all these episodes are very interesting for the field of innovations in Artificial Intelligence and Machine Learning! Regards!

Jun 25th

Billy Bloomer

so smart you can smell it

Jun 14th

raqueeb shaikh

great podcast

May 31st

Loza Boza

Phenomenal discussion. Thank you! Particularly enjoyed the parts on generative models and the link to Daniel Kahneman.

May 20th

simon abdou

Horrible Audio

May 9th

Özgür Yüksel

This is a very realistic and proper episode, which explains quantum computing well even on its own.

Apr 9th


Hello all, thanks for the podcast. Can we combine what two agents learn from the same environment to find the best actions? Thanks

Mar 14th

Bhavul Gauri

Notes:
- Data scientists are not trained to think about cost optimization. Plotting CPU usage vs. accuracy gives an idea of it: if you use 4x as much data just to gain a 1% increase in accuracy, that may not be great, because you're using 4x as much CPU power.
- Have a team dedicated just to monitoring:
  i. Monitor inputs: each feature should not go beyond its expected range, and the nulls ratio shouldn't change by a lot.
  ii. Monitor both business and model metrics. Sometimes even if model metrics get better, your business metrics could go down - as when better autocompletion makes for lower spell-check performance - or it could depend on other things that have changed, or on seasonality.
- Pair data scientists with ML engineers. ML engineers get to learn about the model while data scientists come up with it; both use the same language, and the ML engineers make sure it gets scaled up and deployed to production.
- Check which parameters are somewhat stable no matter how many times you retrain, versus which are volatile. The volatile ones could cause drastic changes, so you can reverse-engineer problems this way.
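The input-monitoring point above can be sketched as a simple check: compare each feature's live null ratio against its training-time baseline and alert on large drifts. The feature values and tolerance below are invented for illustration.

```python
def null_ratio(values):
    """Fraction of missing values in a batch."""
    return sum(v is None for v in values) / len(values)

def input_drift_alert(baseline_ratio, live_values, tolerance=0.05):
    """True when the live null ratio drifts beyond tolerance of the baseline."""
    return abs(null_ratio(live_values) - baseline_ratio) > tolerance

# Training baseline: ~1% nulls. This live batch suddenly has 30% nulls,
# which usually signals an upstream pipeline or logging change.
live_batch = [42, None, 37, 55, None, None, 29, 61, 44, 50]
print(input_drift_alert(0.01, live_batch))  # → True
```

The same pattern extends to per-feature range checks, and pairing these input alerts with business-metric dashboards covers both halves of the monitoring advice in the notes.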

Mar 11th

Khaled Zamer

Super.. very informative. Thanks

Aug 26th