The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Author: Sam Charrington

Subscribed: 28,075 · Played: 349,935

Description

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders.

Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader.

Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science and more.
379 Episodes
Today we’re joined by David Duvenaud, Assistant Professor at the University of Toronto. David, who joined us on episode #96 back in January ‘18, returns to talk about the various papers that have come out of his lab over the last year and change, focused on Neural Ordinary Differential Equations (ODEs), a type of continuous-depth neural network. In our conversation, we talk through quite a few of David’s papers on the topic, which you can find on the show notes page. We discuss the problem that David is trying to solve with this research, the potential that ODEs have to replace “the backbone” of the neural networks in use today, and David’s approach to engineering. The complete show notes for this episode can be found at twimlai.com/talk/364.
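To make the core idea concrete, here is a minimal sketch of a continuous-depth block using torchdiffeq, the solver library released alongside the original Neural ODEs paper. The dynamics network shown is an illustrative assumption, not the architecture from David's papers:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # ODE solvers with backprop support

class ODEFunc(nn.Module):
    """Parameterizes the hidden-state dynamics dh/dt = f(h, t)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)

class ODEBlock(nn.Module):
    """Replaces a stack of discrete residual layers with a single ODE solve."""
    def __init__(self, dim):
        super().__init__()
        self.func = ODEFunc(dim)
        self.times = torch.tensor([0.0, 1.0])  # integrate "depth" from 0 to 1

    def forward(self, h0):
        # odeint returns the state at each requested time; keep the endpoint.
        return odeint(self.func, h0, self.times)[-1]

h0 = torch.randn(32, 16)   # a batch of 32 hidden states of width 16
out = ODEBlock(16)(h0)     # same shape as h0, computed by an adaptive solver
```

The appeal is that depth becomes an integration interval: the solver decides how many function evaluations to spend, rather than the architecture fixing a layer count.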
Today we’re joined by Sharad Goel, Assistant Professor in the management science & engineering department at Stanford. Sharad, who also has appointments in the computer science, sociology, and law departments, has spent recent years focused on applying machine learning to better understand and improve public policy. In our conversation, we dive into Sharad’s non-traditional path to academia, including his extensive work on discriminatory policing practices like stop-and-frisk, leading up to his work on The Stanford Open Policing Project, which uses data from over 200 million traffic stops nationwide to “help researchers, journalists, and policymakers investigate and improve interactions between police and the public.” Finally, we discuss Sharad’s paper “The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning,” which identifies three formal definitions of fairness in algorithms and the statistical limitations of each, and details how mathematical formalizations of fairness could be introduced into algorithms. Check out the complete show notes for this episode at twimlai.com/talk/363.
Today we’re joined by Cathy Wu, Gilbert W. Winslow Career Development Assistant Professor in the department of Civil and Environmental Engineering at MIT. We had the pleasure of catching up with Cathy at NeurIPS to discuss her talk “Mixed Autonomy Traffic: A Reinforcement Learning Perspective.” In our conversation, we discuss Cathy’s transition to applying machine learning to civil engineering, specifically, understanding the potential impact autonomous vehicles would have on traffic once deployed. To better understand this, Cathy built multiple reinforcement learning simulations, including track and intersection scenarios. We talk through how each scenario is set up, how human drivers are modeled in the simulation, and the results of the experiments. Check out the complete show notes for this episode at twimlai.com/talk/362.
Today we’re joined by one of the most cited computer scientists in the world, Yoshua Bengio. Yoshua is a Professor in the Department of Computer Science and Operations Research at the University of Montreal and the Founder and Scientific Director of MILA. We caught up with Yoshua just a few weeks into the coronavirus pandemic, so we spend a bit of time discussing the impact of AI on society broadly, as well as his current endeavors: building a COVID-19 tracing application and using ML to propose experimental candidate drugs. We also explore his work on consciousness, including how Yoshua defines consciousness, his paper “The Consciousness Prior,” the relationship between consciousness and intelligence, how attention could be used to train consciousness, the current state of consciousness research, and how he sees it evolving. Check out the complete show notes page at twimlai.com/talk/361.
Today we’re joined by Josh Tobin, Co-Organizer of the machine learning training program Full Stack Deep Learning and, more recently, founder of a stealth startup. We had the pleasure of sitting down with Josh prior to his presentation of the paper Geometry-Aware Neural Rendering at NeurIPS. This work builds upon DeepMind’s “Neural scene representation and rendering,” with the goal of developing implicit scene understanding. We discuss the challenges involved, the various datasets used to train his model, and the similarities between variational autoencoder training and his process. The complete show notes for this episode can be found at twimlai.com/talk/360.
Today we’re joined by Ken Goldberg, professor of engineering and William S. Floyd Jr. distinguished chair in engineering at UC Berkeley. Ken, who is also an accomplished artist and a collaborator on projects such as DexNet and The Telegarden, has recently been focusing on robotic learning for grasping. In our conversation with Ken, we chat about some of the challenges that arise when working on robotic grasping, including uncertainty in perception, control, and physics. We also discuss his view on the role of physics in robotic learning, citing co-contributors Sergey Levine and Pieter Abbeel along the way. Finally, we discuss some of his thoughts on potential robot use cases, from assisting in telemedicine and agriculture to robotic COVID-19 testing. The complete show notes for this episode can be found at twimlai.com/talk/359.
Today we’re joined by Stefan Lee, assistant professor at the school of electrical engineering and computer science at Oregon State University. Stefan, who we sat down with at NeurIPS this past winter, is focused on developing agents that can perceive their environment and communicate their understanding with humans in order to coordinate their actions toward mutual goals. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks, a model for learning joint representations of image content and natural language. We talk through the development and training process for this model, the adaptation of the training process to incorporate additional visual information into BERT models, and where this research leads in terms of integrating visual and language tasks. Finally, we discuss the importance of visual grounding. Check out the complete show notes page at twimlai.com/talk/358.
Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland. Jürgen’s lab is well known for creating the Long Short-Term Memory (LSTM) network, which has become a prevalent neural network, commonly used in devices such as smartphones, and which we discussed in detail in our first conversation with Jürgen back in 2017. In this conversation, we dive into some of Jürgen’s more recent work, including his recent paper, Reinforcement Learning Upside Down: Don’t Predict Rewards — Just Map Them to Actions. Check out the show notes page at twimlai.com/talk/357.
Today we're joined by Beidi Chen, PhD student at Rice University. Beidi is part of the team that developed a cheaper, algorithmic, CPU alternative to state-of-the-art GPU machines. They presented their findings at NeurIPS 2019 and have since gained a lot of attention for their paper, SLIDE: In Defense of Smart Algorithms Over Hardware Acceleration for Large-Scale Deep Learning Systems. In this interview, Beidi shares how the team took a new look at deep learning with the case of extreme classification by turning it into a search problem and using locality-sensitive hashing.   Check out the complete show notes at twimlai.com/talk/356. 
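To give a flavor of how hashing turns extreme classification into a search problem, here is a minimal sketch using random-hyperplane LSH (SimHash) to retrieve a small set of candidate neurons instead of evaluating a full output layer. The sizes and the single hash table are illustrative assumptions; SLIDE itself uses a different hash family and multiple tables:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_neurons, n_bits = 128, 100_000, 16

# Random hyperplanes: similar vectors tend to fall on the same side of each one.
planes = rng.standard_normal((n_bits, d))

def simhash(x):
    """Build an n_bits-bit signature from the sides of the hyperplanes x falls on."""
    bits = (planes @ x) > 0
    return int(bits.astype(np.int64) @ (1 << np.arange(n_bits)))

# Index each output neuron's weight vector under its signature.
weights = rng.standard_normal((n_neurons, d))
buckets = {}
for i, w in enumerate(weights):
    buckets.setdefault(simhash(w), []).append(i)

# At inference time, only neurons in the input's bucket are evaluated.
x = rng.standard_normal(d)
candidates = buckets.get(simhash(x), [])
activations = weights[candidates] @ x  # a tiny fraction of the full layer
```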
Today we're joined by Sergey Levine, an Assistant Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. We last heard from Sergey back in 2017, when we explored Deep Robotic Learning. We caught up with Sergey at NeurIPS 2019, where he and his team presented 12 different papers, which means a lot of ground to cover! Sergey and his lab’s recent efforts have been focused on contributing to a future where machines can be “out there in the real world, learning continuously through their own experience.” Sergey shares how many of the papers presented at the most recent NeurIPS conference are working to make that happen. Some of the major developments have been in the research fields of model-free reinforcement learning, causality and imitation learning, and offline reinforcement learning. Check out the complete show notes page at twimlai.com/talk/355.
Imagine spending years learning ML from the ground up, starting with its theoretical foundations, but still feeling like you didn’t really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s solution was Kaggle, a popular platform for data science competitions. Fast forward four years, and David is now a Kaggle Grandmaster, the highest designation, with particular accomplishment in computer vision competitions. Having completed his degree last year, he is currently co-founder and CTO of Analytical AI, a company that grew out of one of his recent Kaggle successes. David has a background in deep learning and medical imaging, something he shares with his brother, Stephen Odaibo, whom we interviewed last year about his work in Retinal Image Generation for Disease Discovery. Check out the full article and interview at twimlai.com/talk/354.
Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University, focused on in his paper Mapping the Physics Research Space: a Machine Learning Approach, along with co-authors including former TWIML AI Podcast guest Bruno Gonçalves. In addition to predicting the trajectory of physics research, Matteo is also active in computational epidemiology. His work in that area involves building simulators that can model the spread of diseases like Zika or the seasonal flu at a global scale. Check out our full article on this episode at twimlai.com/talk/353.
The unfortunate reality is that many of the most commonly used machine learning metrics don't account for the complex trade-offs that come with real-world decision making. This is one of the challenges that today’s guest, Sanmi Koyejo, has dedicated his research to addressing. Sanmi is an assistant professor in the Department of Computer Science at the University of Illinois, where he applies his background in cognitive science, probabilistic modeling, and Bayesian inference to pursue research focused broadly on “adaptive and robust machine learning.” Check out the full episode write-up at twimlai.com/talk/352.
Today we’re joined by Ilias Diakonikolas, faculty in the CS department at the University of Wisconsin-Madison, and author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, which was the recipient of the NeurIPS 2019 Outstanding Paper award. The paper, which focuses on high-dimensional robust learning, is regarded as the first progress made around distribution-independent learning with noise since the 80s. In our conversation, we explore robustness in machine learning, problems with corrupt data in high-dimensional settings, and of course, a deep dive into the paper.  Check out our full write up on the paper and the interview at twimlai.com/talk/351.
Today we’re joined by Kamran Khan, founder & CEO of BlueDot, and professor of medicine and public health at the University of Toronto. BlueDot, a digital health company with a focus on surveilling global infectious disease outbreaks, has been the recipient of a lot of attention for being the first to publicly warn about the coronavirus that started in Wuhan. How did the company’s system of algorithms and data processing techniques help flag the potential dangers of the disease? In this interview, Kamran talks us through how the technology works, its limits, and the motivation behind the work.  Check out our new and improved show notes article at twimlai.com/talk/350.
Today we’re joined by Emmanuel Ameisen, machine learning engineer at Stripe and author of the recently published book “Building Machine Learning Powered Applications: Going from Idea to Product.” In our conversation, we discuss structuring end-to-end machine learning projects, debugging and explainability in the context of models, the various types of models covered in the book, and the importance of post-deployment monitoring. Check out our full show notes article at twimlai.com/talk/349.
Today we’re joined by Abeba Birhane, PhD student at University College Dublin and author of the recent paper Algorithmic Injustices: Towards a Relational Ethics. We caught up with Abeba, whose aforementioned paper received the Best Paper award at the most recent Black in AI Workshop at NeurIPS, to go in-depth on the paper and her thinking around AI ethics. In our conversation, we discuss the “harm of categorization” and how the thinking around such categorizations should be approached, how ML generally doesn’t account for the ethics of various scenarios and how relational ethics could address this issue, her most recent paper “Robot Rights? Let’s Talk about Human Welfare Instead,” and much more. Check out our complete write-up and resource page at twimlai.com/talk/348.
Today we’re excited to kick off our annual Black in AI Series, joined by Nemo Semret, CTO of Gro Intelligence. Gro provides an agricultural data platform dedicated to improving global food security, focused on applying AI at macro scale. In our conversation with Nemo, we discuss Gro’s approach to data acquisition, how they apply machine learning to various problems, and their approach to modeling. Check out the full interview and show notes at twimlai.com/talk/347.
Today we’re joined by Ryan Rogers, Senior Software Engineer at LinkedIn. We caught up with Ryan at NeurIPS, where he presented the paper “Practical Differentially Private Top-k Selection with Pay-what-you-get Composition” as a spotlight talk. In our conversation, we discuss how LinkedIn allows its data scientists to access aggregate user data for exploratory analytics while maintaining its users’ privacy with differential privacy, and the major components of the paper. We also talk through one of the big innovations in the paper, which is discovering the connection between a common algorithm for implementing differential privacy, the exponential mechanism, and Gumbel noise, which is commonly used in machine learning.   The complete show notes for this episode can be found at twimlai.com/talk/346. 
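The connection referenced above is the Gumbel-max trick: adding independent Gumbel noise to each candidate's scaled utility and taking the argmax samples from exactly the distribution the exponential mechanism prescribes. Here is a minimal sketch with an assumed utility vector and sensitivity 1:

```python
import numpy as np

rng = np.random.default_rng(0)
utilities = np.array([3.0, 1.0, 0.5, 2.5])  # assumed utility scores, sensitivity 1
epsilon = 1.0

# Exponential mechanism: pick item i with probability proportional to exp(eps * u_i / 2).
target = np.exp(epsilon * utilities / 2)
target /= target.sum()

# Gumbel formulation: perturb the scaled utilities with Gumbel noise, take the argmax.
def gumbel_select():
    noise = rng.gumbel(size=len(utilities))
    return int(np.argmax(epsilon * utilities / 2 + noise))

samples = [gumbel_select() for _ in range(100_000)]
empirical = np.bincount(samples, minlength=len(utilities)) / len(samples)
print(np.round(target, 3), np.round(empirical, 3))  # the two should closely agree
```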
Today we conclude our KubeCon ‘19 Series joined by Erez Cohen, VP of CloudX & AI at Mellanox. In our conversation, we discuss:
- Erez’s talk “Networking Optimizations for Multi-Node Deep Learning on Kubernetes,” in which he discusses problems and solutions related to networking discovered during the journey to reduce training time.
- NVIDIA’s recent acquisition of Mellanox, and what fruit that relationship hopes to bear.
- The evolution of technologies like RDMA, GPUDirect, and SHARP, Mellanox’s solution for improving the performance of MPI operations, which can be found in NVIDIA’s NCCL collective communications library.
- How Mellanox is enabling Kubernetes and other platforms to take advantage of the technologies mentioned above.
- Why we should care about networking in deep learning, which is inherently a compute-bound process.
The complete show notes for this episode can be found at twimlai.com/talk/345.
Comments (15)

Özgür Yüksel

Thanks a lot for introducing us to the genius of our age. Tremendously inspiring.

Dec 11th
Reply

Glory Dey

A very good insightful episode, Maki Moussavi explains the various points in a lucid manner. Truly, we are the captain of our life's ship. We are responsible for our own emotions and actions. Being proactive rather than reactive is the key to success and happiness! I will be reading this book! Thanks for sharing this interesting podcast. Have a great day!

Oct 15th
Reply

Glory Dey

I love this channel and all the great podcasts. The topics are very relevant and the speakers are well-informed experts, so the episodes are very educative. My only request: please change the opening music of the podcast. It is a very unpleasant tune that sets a jarring tone right at the beginning. Otherwise, all these episodes are very interesting in the field of innovations in Artificial Intelligence and Machine Learning! Regards!

Jun 25th
Reply

Billy Bloomer

So smart you can smell it.

Jun 14th
Reply

raqueeb shaikh

Great podcast.

May 31st
Reply

Loza Boza

Phenomenal discussion. Thank you! Particularly enjoyed the parts on generative models and the link to Daniel Kahneman.

May 20th
Reply

simon abdou

Horrible Audio

May 9th
Reply

Özgür Yüksel

This is a very realistic and proper episode, which explains quantum computing well even on its own.

Apr 9th
Reply

Naadodi

Hello all, thanks for the podcast. Can we combine the learnings of two agents from the same environment to find the best actions? Thanks.

Mar 14th
Reply

Bhavul Gauri

Notes:
* Data scientists are not trained to think about cost optimization. Plotting CPU usage vs. accuracy gives an idea of it: if you use 4x as much data just to gain a 1% increase in accuracy, that may not be great, because you're using 4x as much CPU power.
* Have a team dedicated to monitoring. (i) Monitor inputs: each feature should stay within its expected range, and the ratio of nulls shouldn't change by a lot. (ii) Monitor both business and model metrics: sometimes even if model metrics get better, business metrics can go down (for example, better autocompletion can mean lower spell-check performance), or it could depend on other things that have changed, or on seasonality.
* Pair data scientists with ML engineers. ML engineers get to learn about the model while data scientists come up with it; both use the same language, and ML engineers make sure it gets scaled up and deployed to production.
* Track which parameters stay stable across retrainings vs. which are volatile; the volatile ones can cause drastic changes, so you can reverse-engineer problems this way.

Mar 11th
Reply
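A minimal sketch of the input monitoring described in the notes above; the feature names, expected ranges, and alert threshold are hypothetical, for illustration only:

```python
import pandas as pd

# Assumed reference profile captured at training time.
expected_ranges = {"age": (18, 100), "income": (0, 1e7)}
expected_null_ratio = {"age": 0.01, "income": 0.05}
null_ratio_tolerance = 0.02  # assumed alert threshold

def check_inputs(batch: pd.DataFrame) -> list:
    """Flag features that drift outside their training-time profile."""
    alerts = []
    for col, (lo, hi) in expected_ranges.items():
        values = batch[col].dropna()
        if ((values < lo) | (values > hi)).any():
            alerts.append(f"{col}: values outside expected range [{lo}, {hi}]")
    for col, baseline in expected_null_ratio.items():
        ratio = batch[col].isna().mean()
        if abs(ratio - baseline) > null_ratio_tolerance:
            alerts.append(f"{col}: null ratio {ratio:.2f} vs baseline {baseline:.2f}")
    return alerts

batch = pd.DataFrame({"age": [25, 42, None, 130], "income": [50_000, None, 72_000, 64_000]})
print(check_inputs(batch))  # flags the out-of-range age and both null-ratio drifts
```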

Abhijeet Gulati

Great podcast. Do we have references to the papers that were discussed by Ganju? Good job.

Jan 22nd
Reply

Khaled Zamer

Super! Very informative. Thanks.

Aug 26th
Reply

Printing Printing

There is no content, lol. Host, please invite real scientists.

Jan 1st
Reply

James Flint

This is an incredible interview. Dopamine as a correlate of prediction error makes so much sense. Best Twiml talk to date!

Dec 30th
Reply

Qanit Al-Syed

Conversations drag too much and get boring. Stop the marketing and get to the content.

Dec 30th
Reply