The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Author: Sam Charrington

Subscribed: 14,693 · Played: 276,353

Description

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders.

Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader.

Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.
332 Episodes
On today’s episode, we’re joined by Terrence Sejnowski, Francis Crick Chair and head of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies, and faculty member at UC San Diego. In our conversation with Terry, we discuss his role as a founding researcher in the field of computational neuroscience and as a founder of the annual Telluride Neuromorphic Cognition Engineering Workshop. We dive deep into the world of spiking neural networks and brain architecture, the relationship of neuroscience to machine learning, and ways to make neural networks more efficient through spiking. Terry also gives us some insight into the hardware used in this field, characterizes the major research problems currently being undertaken, and shares his thoughts on the future of spiking networks. Check out the complete show notes at twimlai.com/talk/317.
Today we’re joined by return guest Xavier Amatriain, Co-founder and CTO of Curai. In our conversation, we discuss Curai’s goal of providing the world’s best primary care to patients via their smartphones, and how ML and AI can bring down costs and make healthcare more accessible and scalable. We also cover the shortcomings of traditional primary care and how Curai fills that role; some of the unique challenges his team faces in applying this use case in the healthcare space; their use of expert systems, and how they develop and train their models on synthetic data through noise injection; and how NLP projects like BERT, Transformer, and GPT-2 fit into what Curai is building. Check out the complete show notes page at twimlai.com/talk/316.
Today we’re joined by Tom Dietterich, Distinguished Professor Emeritus at Oregon State University. We had the pleasure of discussing Tom’s recent blog post, “What does it mean for a machine to ‘understand’?”, in which he lays out his position on what qualifies as machine “understanding,” including a few examples of systems that he believes exhibit understanding. We also discuss the role of deep learning in achieving artificial general intelligence, the current “hype engine” around AI research, and much more. Make sure you check out the show notes at twimlai.com/talk/315, where you’ll find links to Tom’s blog post, as well as a ton of other references.
Today we’re joined by Jonathan Hung, Sr. Software Engineer at LinkedIn, who we caught up with at TensorFlow World last week. In our conversation, we discuss Jonathan’s presentation at the event, which focused on LinkedIn’s efforts to scale TensorFlow; his work as part of the Hadoop infrastructure team, including experimenting on Hadoop with various frameworks, and the team’s motivation for running TensorFlow on their pre-existing Hadoop cluster infrastructure; and TonY, or TensorFlow on YARN, LinkedIn’s framework that natively runs deep learning jobs on Hadoop, and its relationship to Pro-ML, LinkedIn’s internal AI platform, which we’ve discussed on earlier episodes of the podcast. Finally, we discuss how far LinkedIn’s Hadoop infrastructure has come since 2017, and their foray into using Kubernetes for research. The complete show notes can be found at twimlai.com/talk/314.
Today we’re joined by Omoju Miller, a senior machine learning engineer at GitHub. In our conversation, we discuss her dissertation, “Hiphopathy, A Socio-Curricular Study of Introductory Computer Science”; her work as an inaugural member of the GitHub machine learning team; and her two presentations at TensorFlow World, “Why is machine learning seeing exponential growth in its communities” and “Automating your developer workflow on GitHub with TensorFlow.” The complete show notes for this episode can be found at twimlai.com/talk/313.
Today we’re joined by Archana Venkataraman, John C. Malone Assistant Professor of Electrical and Computer Engineering at Johns Hopkins University, and an MIT 35 Innovators Under 35 recipient. Archana’s research at the Neural Systems Analysis Laboratory focuses on developing tools, frameworks, and algorithms to better understand and treat neurological and psychiatric disorders, including autism, epilepsy, and others. In our conversation, we explore her lab’s work in applying machine learning to these problems, including biomarker discovery and disorder severity prediction, as well as some of the various techniques and frameworks used. The complete show notes for this episode can be found at twimlai.com/talk/312.
Today we are joined by Phoebe DeVries, Postdoctoral Fellow in the Department of Earth and Planetary Sciences at Harvard and assistant faculty at the University of Connecticut, and Brendan Meade, Professor of Earth and Planetary Sciences and affiliate faculty in computer science at Harvard. In this episode, we discuss: Phoebe and Brendan’s work on discovering as much as possible about earthquakes before they happen, and on predicting where future movement will occur by measuring how the earth’s surface moves; their recent paper, “Deep learning of aftershock patterns following large earthquakes”; the preliminary steps that guided them to using machine learning in the earth sciences; their current research on calculating stress changes in the crust and upper mantle after a large earthquake, and using a neural network to map those changes and predict aftershock locations; and the complex systems that earth science studies encompass, including the approaches, challenges, surprises, and results that come with bringing machine learning models and datasets into a new field of study. The complete show notes for this episode can be found at twimlai.com/talk/311.
An often-overlooked topic garnered high praise at TWIMLcon this month: operationalizing responsible and ethical AI. This important topic was taken on by an impressive panel of speakers: Rachel Thomas, Director of the Center for Applied Data Ethics at the USF Data Institute; Guillaume Saint-Jacques, Head of Computational Science at LinkedIn; and Parinaz Sobhani, Director of Machine Learning at Georgian Partners; moderated by Khari Johnson, Senior AI Staff Writer at VentureBeat. This episode covers: the basics of operationalizing AI ethics in a range of organizations, with insight into an array of tools, approaches, and methods that teams have found useful; the biggest concerns, like focusing on harm rather than only on algorithmic bias, and assigning specific responsibility for systems; why educating the general public on the realities and misconceptions of probabilistic methods, and putting preventative guardrails in place, has become imperative for any operation; the long-term benefits of ethical decision-making, and the differing challenges facing established companies versus startups; and questions from the TWIMLcon audience, common examples of power dynamics in AI ethics, and what we as a community can do to push the needle in the very powerful world of responsible AI. The complete show notes can be found at twimlai.com/talk/310.
In this episode, from a stellar TWIMLcon panel, the state and future of ML/AI in larger, more established brands is analyzed and discussed. Hear from Amr Awadallah, Founder and Global CTO of Cloudera; Pallav Agrawal, Director of Data Science at Levi Strauss & Co.; and Jürgen Weichenberger, Data Science Senior Principal & Global AI Lead at Accenture; moderated by Josh Bloom, Professor at UC Berkeley. In this episode we discuss: how a successful ML/AI initiative requires a conscious, noticeable shift away from how things used to be managed, along with educating cross-functional teams in data science terms and methodologies; why, tempting and exciting as it is to constantly try out the latest technologies, brand consistency and sustainability are imperative to success; how the real business value, the money, is found by aiming your big ML/AI goals and projects at the core competencies of the company; whether traditional enterprises are fundamentally changing their businesses through ML/AI, and if so, why; and real-world examples and thought-provoking ideas for scaling ML/AI in the traditional enterprise. The complete show notes can be found at twimlai.com/talk/309.
TWIMLcon brought together many in the ML/AI community to discuss the unique challenges of building and scaling machine learning platforms. In this episode, hear from a diverse set of panelists: Pardis Noorzad, Data Science Manager at Twitter; Eric Colson, Chief Algorithms Officer Emeritus at Stitch Fix; and Jennifer Prendki, Founder & CEO at Alectio; moderated by Maribel Lopez, Founder & Principal Analyst at Lopez Research. Topics include: how to approach changing the way companies think about machine learning; engaging different groups (c-suite, marketing, sales, engineering, etc.) to work together effectively; the importance of clear communication about ML lifecycle management; how full-stack roles can provide immense value; and tips and tricks for working faster and more efficiently, and for creating an org-wide culture that treats machine learning as a valued priority. The complete show notes can be found at twimlai.com/talk/308.
Comments (10)

Glory Dey

A very good insightful episode, Maki Moussavi explains the various points in a lucid manner. Truly, we are the captain of our life's ship. We are responsible for our own emotions and actions. Being proactive rather than reactive is the key to success and happiness! I will be reading this book! Thanks for sharing this interesting podcast. Have a great day!

Oct 15th
Reply

Glory Dey

I love this channel and all the great podcasts. The topics are very relevant and the speakers are well-informed experts, so the episodes are very educational. My only request: please change the opening music of the podcast. It is a very unpleasant tune and sets a jarring tone right at the beginning. Otherwise, all these episodes are very interesting in the field of innovations in Artificial Intelligence and Machine Learning! Regards!

Jun 25th
Reply

Billy Bloomer

so smart you can smell it

Jun 14th
Reply

raqueeb shaikh

great podcast

May 31st
Reply

Loza Boza

Phenomenal discussion. Thank you! Particularly enjoyed the parts on generative models and the link to Daniel Kahneman.

May 20th
Reply

simon abdou

Horrible Audio

May 9th
Reply

Özgür Yüksel

This is a very realistic and proper episode, which explains quantum computing well even on its own.

Apr 9th
Reply

Naadodi

Hello all, thanks for the podcast. Can we combine the learnings of two agents from the same environment to find the best actions? Thanks.

Mar 14th
Reply

Bhavul Gauri

Notes:
* Data scientists are not trained to think about cost optimization. Plotting CPU usage vs. accuracy gives an idea of it: if you increase data 4x just to gain a 1% increase in accuracy, that may not be great, because you're using 4 times as much CPU power.
* A team dedicated just to monitoring: (i) monitor inputs — values should not go beyond the expected range for each feature, and the ratio of nulls shouldn't change by a lot; (ii) monitor both business and model metrics — sometimes even when model metrics get better, your business metrics can go down (for example, better autocompletion making for a lower-performing spell check), or the change could come from other things that have changed, or from seasonality.
* Pair data scientists with ML engineers: data scientists come up with the model, ML engineers get to learn about it and make sure it gets scaled up and deployed to production, and both use the same language.
* Track which parameters stay stable no matter how many times you retrain versus which are volatile; the volatile ones can cause drastic changes, so you can reverse-engineer problems this way.
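The input-monitoring idea in these notes (per-feature range checks plus null-ratio drift) can be sketched in a few lines. This is a minimal illustration, not from the episode; the feature names, expected ranges, and the 5% drift threshold are all hypothetical:

```python
# Minimal input-monitoring sketch: flag incoming batches whose feature
# values fall outside expected ranges or whose null ratios drift from
# a baseline. Feature names, ranges, and thresholds are hypothetical.

EXPECTED_RANGES = {"age": (0, 120), "income": (0, 1_000_000)}
MAX_NULL_RATIO_SHIFT = 0.05  # alert if null ratio moves more than 5 points

def check_batch(rows, baseline_null_ratios):
    """Return a list of human-readable alerts for a batch of input rows."""
    alerts = []
    for feature, (lo, hi) in EXPECTED_RANGES.items():
        values = [row.get(feature) for row in rows]
        null_ratio = sum(v is None for v in values) / len(values)
        # Null-ratio drift check against the training-time baseline
        if abs(null_ratio - baseline_null_ratios.get(feature, 0.0)) > MAX_NULL_RATIO_SHIFT:
            alerts.append(f"{feature}: null ratio drifted to {null_ratio:.2%}")
        # Range check on the non-null values
        out_of_range = [v for v in values if v is not None and not (lo <= v <= hi)]
        if out_of_range:
            alerts.append(f"{feature}: {len(out_of_range)} value(s) outside [{lo}, {hi}]")
    return alerts
```

In practice a check like this would run on every scoring batch, with the baseline null ratios recorded when the model was trained, so that alerts fire before degraded inputs show up as degraded business metrics.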

Mar 11th
Reply

Khaled Zamer

Super.. very informative. Thanks

Aug 26th
Reply