The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Author: Sam Charrington

Subscribed: 28,437 · Played: 405,262

Description

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders.

Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader.

Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science and more.
428 Episodes
Today we’re joined by Burr Settles, Research Director at Duolingo. Most would acknowledge that one of the most effective ways to learn is one-on-one with a tutor, and Duolingo’s main goal is to replicate that at scale. In our conversation with Burr, we dig into how the business model has changed over time, the properties that make a good tutor, and how those features translate to the AI tutor they’ve built. We also discuss the Duolingo English Test, and the challenges they’ve faced with maintaining the platform while adding languages and courses. Check out the complete show notes for this episode at twimlai.com/go/412.
Today we’re joined by Artur Yakimovich, Co-Founder at Artificial Intelligence for Life Sciences and a visiting scientist in the Lab for Molecular Cell Biology at University College London. In our conversation with Artur, we explore the gulf that exists between life science researchers and the tools and applications used by computer scientists. While Artur’s background is in viral chemistry, he has since transitioned to a career in computational biology to “see where chemistry stopped, and biology started.” We discuss his work in that middle ground, looking at several of his recent projects applying deep learning and advanced neural networks like capsule networks to his research problems. Finally, we discuss his efforts building the Artificial Intelligence for Life Sciences community, a non-profit organization he founded to bring scientists from different fields together to share ideas and solve interdisciplinary problems. Check out the complete show notes at twimlai.com/go/411.
Today we’re joined by Kavita Bala, the Dean of Computing and Information Science at Cornell University.  Kavita, whose research explores the overlap of computer vision and computer graphics, joined us to discuss a few of her projects, including GrokStyle, a startup that was recently acquired by Facebook and is currently being deployed across their Marketplace features. We also talk about StreetStyle/GeoStyle, projects focused on using social media data to find style clusters across the globe.  Kavita shares her thoughts on the privacy and security implications, progress with integrating privacy-preserving techniques into vision projects like the ones she works on, and what’s next for Kavita’s research. The complete show notes for this episode can be found at twimlai.com/go/410.
Today we’re joined by Nikos Athanasiou and Muhammed Kocabas, Ph.D. students, and Michael Black, Director of the Max Planck Institute for Intelligent Systems. We caught up with the group to explore their paper VIBE: Video Inference for Human Body Pose and Shape Estimation, which they submitted to CVPR 2020. In our conversation, we explore the problem they’re trying to solve through an adversarial learning framework, the datasets (AMASS) they’re building upon, the core elements that separate this work from its predecessors in this area of research, and the results they’ve seen through their experiments and testing. The complete show notes for this episode can be found at https://twimlai.com/go/409. Register for TWIMLfest today!
Today we’re joined by Georgia Gkioxari, a research scientist at Facebook AI Research.  Georgia was hand-picked by the TWIML community to discuss her work on the recently released open-source library PyTorch3D. In our conversation, Georgia describes her experiences as a computer vision researcher prior to the 2012 deep learning explosion, and how the entire landscape has changed since then.  Georgia walks us through the user experience of PyTorch3D, while also detailing who the target audience is, why the library is useful, and how it fits in the broad goal of giving computers better means of perception. Finally, Georgia gives us a look at what it’s like to be a co-chair for CVPR 2021 and the challenges with updating the peer review process for the larger academic conferences.  The complete show notes for this episode can be found at twimlai.com/go/408.
Today we’re joined by the legendary Michael I. Jordan, Distinguished Professor in the Departments of EECS and Statistics at UC Berkeley. Michael was gracious enough to connect us all the way from Italy after being named IEEE’s 2020 John von Neumann Medal recipient. In our conversation with Michael, we explore his career path, and how influences from other fields like philosophy shaped it. We spend quite a bit of time discussing his current exploration into the intersection of economics and AI, and how machine learning systems could be used to create value and empowerment across many industries through “markets.” We also touch on the potential of “interacting learning systems” at scale, the valuation of data, the commoditization of human knowledge into computational systems, and much, much more. The complete show notes for this episode can be found at twimlai.com/go/407.
Today we’re joined by Sameer Singh, an assistant professor in the department of computer science at UC Irvine.  Sameer’s work centers on large-scale and interpretable machine learning applied to information extraction and natural language processing. We caught up with Sameer right after he was awarded the best paper award at ACL 2020 for his work on Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. In our conversation, we explore CheckLists, the task-agnostic methodology for testing NLP models introduced in the paper. We also discuss how well we understand the cause of pitfalls or failure modes in deep learning models, Sameer’s thoughts on embodied AI, and his work on the now famous LIME paper, which he co-authored alongside Carlos Guestrin.  The complete show notes for this episode can be found at twimlai.com/go/406.
Today we’re joined by Gary Ren, a machine learning engineer for the logistics team at DoorDash. In our conversation, we explore how machine learning powers the entire logistics ecosystem. We discuss the stages of their “marketplace,” and how using ML for optimized route planning and matching affects consumers, dashers, and merchants. We also talk through how they use traditional mathematics and classical machine learning, potential use cases for reinforcement learning frameworks, and the challenges of implementing these explorations. The complete show notes for this episode can be found at twimlai.com/go/405! Check out our upcoming event at twimlai.com/twimlfest
Today we’re joined by Dillon Erb, Co-founder & CEO of Paperspace. We’ve followed Paperspace since their origins offering GPU-enabled compute resources to data scientists and machine learning developers, to the release of their Jupyter-based Gradient service. Our conversation with Dillon centered on the challenges that organizations face building and scaling repeatable machine learning workflows, and how they’ve done this in their own platform by applying time-tested software engineering practices.  We also discuss the importance of reproducibility in production machine learning pipelines, how the processes and tools of software engineering map to the machine learning workflow, and technical issues that ML teams run into when trying to scale the ML workflow. The complete show notes for this episode can be found at twimlai.com/go/404.
Today we’re joined by Professor of Computer Science at UC Berkeley, Dawn Song. Dawn’s research is centered at the intersection of AI, deep learning, security, and privacy. She’s currently focused on bringing these disciplines together with her startup, Oasis Labs. In our conversation, we explore their goals of building a ‘platform for a responsible data economy,’ which would combine techniques like differential privacy, blockchain, and homomorphic encryption. The platform would give consumers more control of their data, and enable businesses to better utilize data in a privacy-preserving and responsible way. We also discuss how to privatize and anonymize data in language models like GPT-3, real-world examples of adversarial attacks and how to train against them, her work on program synthesis to get towards AGI, and her work on privatizing coronavirus contact tracing data. The complete show notes for this episode can be found at twimlai.com/go/403.
Today we’re joined by Wilka Carvalho, a PhD student at the University of Michigan, Ann Arbor. We first met Wilka at the Black in AI workshop at last year’s NeurIPS conference, and finally got a chance to catch up about his latest research, ‘ROMA: A Relational, Object-Model Learning Agent for Sample-Efficient Reinforcement Learning.’ In the paper, Wilka explores the challenge of object interaction tasks, focusing on everyday, in-home functions like filling a cup of water in a sink. In our conversation, we discuss his interest in understanding the foundational building blocks of intelligence, how he’s addressing the challenge of ‘object-interaction’ tasks, and the biggest obstacles he’s run into along the way. The complete show notes for this episode can be found at twimlai.com/go/402.
Today we’re bringing you the latest TWIML Discussion Series panel on Model Explainability. The use of machine learning in business, government, and other settings that require users to understand the model’s predictions has exploded in recent years. This growth, combined with the increased popularity of opaque ML models like deep learning, has led to the development of a thriving field of model explainability research and practice.  In this panel discussion, we bring together experts and researchers to explore the current state of explainability and some of the key emerging ideas shaping the field. Each guest will share their unique perspective and contributions to thinking about model explainability in a practical way. We explore concepts like stakeholder-driven explainability, adversarial attacks on explainability methods, counterfactual explanations, legal and policy implications, and more. We round out the session with an audience Q&A! Check out the list of resources below! The complete show notes for this episode can be found at twimlai.com/go/401.
Today we’re joined by Johannes Eichstaedt, an Assistant Professor of Psychology at Stanford University.  Johannes joined us at the outset of the coronavirus pandemic to discuss his use of Facebook and Twitter data to measure the psychological states of large populations and individuals. In our conversation, we explore how Johannes applies his physics background to a career as a computational social scientist, the differences in communication on social media vs the real world, and what language indicators point to changes in mental health.  We also discuss some of the major patterns in the data that emerged over the first few months of lockdown, including mental health, social norms, and political patterns. We also explore how Johannes built the process, and the techniques he’s using to collect, sift through, and understand the data. The complete show notes for this episode can be found at twimlai.com/go/400.
Today we’re joined by Devi Parikh, Associate Professor at the School of Interactive Computing at Georgia Tech, and research scientist at Facebook AI Research (FAIR).  While Devi’s work is more broadly focused on computer vision applications, we caught up to discuss her presentation on AI and Creativity at the CV for Fashion, Art and Design workshop at CVPR 2020. In our conversation, we touch on Devi’s definition of creativity,  explore multiple ways that AI could impact the creative process for artists, and help humans become more creative. We investigate tools like casual creator for preference prediction, neuro-symbolic generative art, and visual journaling.  The complete show notes for this episode can be found at twimlai.com/talk/399. A quick reminder that this is your last chance to register for tomorrow’s Model Explainability Forum! For more information, visit https://twimlai.com/explainabilityforum.
Today we’re joined by Max Welling, Vice President of Technologies at Qualcomm Netherlands, and Professor at the University of Amsterdam. In case you missed it, Max joined us last year to discuss his work on  Gauge Equivariant CNNs and Generative Models - the 2nd most popular episode of 2019.  In this conversation, we explore the concept and Max’s work in neural augmentation, and how it’s being deployed for channel tracking and other applications. We also discuss their current work on federated learning and incorporating the technology on devices to give users more control over the privacy of their personal data. Max also shares his thoughts on quantum mechanics and the future of quantum neural networks for chip design. The complete show notes for this episode can be found at twimlai.com/talk/398. This episode is sponsored by Qualcomm Technologies.
Today we conclude our 2020 ICML coverage joined by Iordanis Kerenidis, Research Director at Centre National de la Recherche Scientifique (CNRS) in Paris, and Head of Quantum Algorithms at QC Ware. Iordanis’ research centers around quantum algorithms for machine learning, and he was an ICML main conference keynote speaker on the topic! We focus our conversation on his presentation, exploring the prospects and challenges of quantum machine learning, as well as the field’s history, evolution, and future. We also discuss the foundations of quantum computing, and some of the challenges to consider for breaking into the field. The complete show notes for this episode can be found at twimlai.com/talk/397. For complete ICML series details, visit twimlai.com/icml20.
Today we continue our ICML series with Elaine Nsoesie, assistant professor at Boston University. Elaine presented a keynote talk at the ML for Global Health workshop at ICML 2020, where she shared her research centered around data-driven epidemiology. In our conversation, we discuss the different ways that machine learning applications can be used to address global health issues, including use cases like infectious disease surveillance via hospital parking lot capacity, and tracking search data for changes in health behavior in African countries. We also discuss COVID-19 epidemiology, focusing on the importance of recognizing how the disease is affecting people of different races, ethnicities, and economic backgrounds. To follow along with our 2020 ICML Series, visit twimlai.com/icml20. The complete show notes for this episode can be found at twimlai.com/talk/396.
Today we’re joined by Hal Daume III, professor at the University of Maryland, Senior Principal Researcher at Microsoft Research, and Co-Chair of the 2020 ICML Conference.  We had the pleasure of catching up with Hal ahead of this year's ICML to discuss his research at the intersection of bias, fairness, NLP, and the effects language has on machine learning models.  We explore language in two categories as they appear in machine learning models and systems: (1) How we use language to interact with the world, and (2) how we “do” language. We also discuss ways to better incorporate domain experts into ML system development, and Hal’s experience as ICML Co-Chair. Follow along with our ICML coverage at twimlai.com/icml20. The complete show notes for this episode can be found at twimlai.com/talk/395.
Today we’re excited to be joined by return guest Michael Bronstein, Professor at Imperial College London, and Head of Graph Machine Learning at Twitter. We last spoke with Michael at NeurIPS in 2017 about Geometric Deep Learning. Since then, his research focus has slightly shifted to exploring graph neural networks. In our conversation, we discuss the evolution of the graph machine learning space, contextualizing Michael’s work on geometric deep learning and research on non-Euclidean unstructured data. We also talk about his new role at Twitter and some of the research challenges he’s faced, including scalability and working with dynamic graphs. Michael also dives into his work on differential graph modules for graph CNNs, and the various applications of this work. The complete show notes for this episode can be found at twimlai.com/talk/394.
Today we’re excited to bring ‘The Great ML Language (Un)Debate’ to the podcast! In the latest edition of our series of live discussions, we brought together experts and enthusiasts representing an array of both popular and emerging programming languages for machine learning. In the discussion, we explored the strengths, weaknesses, and approaches offered by Clojure, JavaScript, Julia, Probabilistic Programming, Python, R, Scala, and Swift. We round out the session with an audience Q&A (58:28), covering topics including favorite secondary languages, what languages pair well, quite a few questions about C++, and much more.  Head over to twimlai.com/talk/393 for more information about our panelists!
Comments (16)

Daniel Sierra

Best podcast on machine learning and AI.

May 27th
Reply

Özgür Yüksel

Thanks a lot for introducing us to the genius of our age. Tremendously inspiring.

Dec 11th
Reply

Glory Dey

A very good insightful episode, Maki Moussavi explains the various points in a lucid manner. Truly, we are the captain of our life's ship. We are responsible for our own emotions and actions. Being proactive rather than reactive is the key to success and happiness! I will be reading this book! Thanks for sharing this interesting podcast. Have a great day!

Oct 15th
Reply

Glory Dey

I love this channel and all the great podcasts. The topics are very relevant and the speakers are well-informed experts, so the episodes are very educative. My only request: please change the opening music of the podcast. It is a very unpleasant tune and sets a jarring tone right at the beginning. Otherwise, all these episodes are very interesting in the field of innovations in Artificial Intelligence and Machine Learning! Regards!

Jun 25th
Reply

Billy Bloomer

so smart you can smell it

Jun 14th
Reply

raqueeb shaikh

great podcast

May 31st
Reply

Loza Boza

Phenomenal discussion. Thank you! Particularly enjoyed the parts on generative models and the link to Daniel Kahneman.

May 20th
Reply

simon abdou

Horrible Audio

May 9th
Reply

Özgür Yüksel

This is a very realistic and proper episode, which explains quantum computing well even on its own.

Apr 9th
Reply

Naadodi

Hello all, thanks for the podcast. Can we combine the learnings of two agents from the same environment to find the best actions? Thanks

Mar 14th
Reply

Bhavul Gauri

Notes:
* Data scientists are not trained to think about cost optimization. Plotting CPU usage vs. accuracy gives an idea of it: if you increase your data 4x just to gain a 1% increase in accuracy, that may not be great, because you're using 4 times as much CPU power.
* Have a team dedicated just to monitoring: (i) monitor inputs, so that each feature stays within the range it is supposed to have and the nulls ratio doesn't change by a lot; (ii) monitor both business and model metrics, since sometimes even when model metrics get better, your business metrics could go down (for example, better autocompletion can make for lower spell-check performance), or the drop could come from other things that have changed, or from seasonality.
* Pair data scientists with ML engineers. ML engineers get to learn about the model while data scientists come up with it, both use the same language, and ML engineers make sure it gets scaled up and deployed to production.
* Note which parameters are somewhat stable no matter how many times you retrain vs. which are volatile. The volatile ones could cause drastic changes, so you can reverse-engineer issues this way.
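To make the input-monitoring point in these notes concrete, here is a minimal sketch (not from the episode) of the kind of check such a team might run: compare each feature's observed range and null ratio in an incoming batch against reference statistics captured at training time. All feature names, reference values, and thresholds below are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical reference statistics captured at training time:
# per-feature expected min/max and null ratio. Names and values are
# illustrative only.
REFERENCE_STATS = {
    "session_duration_sec": {"min": 0.0, "max": 3600.0, "null_ratio": 0.01},
    "num_clicks": {"min": 0.0, "max": 200.0, "null_ratio": 0.00},
}
NULL_RATIO_DRIFT_TOLERANCE = 0.05  # allowed absolute change in null ratio


def check_inputs(batch: pd.DataFrame) -> list:
    """Flag features whose values leave the expected range or whose
    null ratio drifts beyond the tolerance."""
    alerts = []
    for feature, stats in REFERENCE_STATS.items():
        col = batch[feature]
        null_ratio = col.isna().mean()
        if abs(null_ratio - stats["null_ratio"]) > NULL_RATIO_DRIFT_TOLERANCE:
            alerts.append(f"{feature}: null ratio drifted to {null_ratio:.2%}")
        if col.min() < stats["min"] or col.max() > stats["max"]:
            alerts.append(
                f"{feature}: values outside expected range "
                f"[{stats['min']}, {stats['max']}]"
            )
    return alerts


# Example: run the checks on a small batch of incoming feature rows.
batch = pd.DataFrame({
    "session_duration_sec": [32.0, 780.5, np.nan, 5000.0],  # 5000 s is out of range
    "num_clicks": [2.0, 14.0, 7.0, 3.0],
})
for alert in check_inputs(batch):
    print("ALERT:", alert)
```

In practice the reference statistics would be derived from the training data and alerts routed through whatever monitoring stack is already in place; the drift tolerance here is just a placeholder.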

Mar 11th
Reply

Abhijeet Gulati

Great podcast. Do we have references to the papers that were discussed by Ganju? Good job.

Jan 22nd
Reply

Khaled Zamer

Super.. very informative. Thanks

Aug 26th
Reply

Printing Printing

there is no content lol. Host, please invite real scientists

Jan 1st
Reply

James Flint

This is an incredible interview. Dopamine as a correlate of prediction error makes so much sense. Best Twiml talk to date!

Dec 30th
Reply

Qanit Al-Syed

Conversations drag too much and it gets boring. Stop the marketing and get to the content.

Dec 30th
Reply