Practical AI: Machine Learning, Data Science

Author: Changelog Media

Subscribed: 2,365 · Played: 66,935

Description

Making artificial intelligence practical, productive, and accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, etc.). The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!
139 Episodes
William Falcon wants AI practitioners to spend more time on model development and less time on engineering. PyTorch Lightning is a lightweight PyTorch wrapper for high-performance AI research that lets you train on multiple GPUs, TPUs, or CPUs, and even in 16-bit precision, without changing your code! In this episode, we dig deep into Lightning, how it works, and what it is enabling. William also discusses the Grid AI platform (built on top of PyTorch Lightning), which lets you seamlessly train hundreds of machine learning models on the cloud from your laptop.
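To give a feel for what "without changing your code" means in practice, here is a minimal, hypothetical sketch of the Lightning pattern; the toy model and random data are placeholders (not anything from the episode), and the point is that hardware and precision choices live in the Trainer rather than in the model code.

```python
# Minimal PyTorch Lightning sketch (illustrative model and data, not from the episode).
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# Dummy data just to make the example runnable.
X = torch.randn(256, 32)
y = torch.randint(0, 2, (256,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=32)

# Hardware/precision choices go in the Trainer, not the model:
# e.g. pl.Trainer(accelerator="gpu", devices=2, precision="16-mixed") in recent
# versions would switch to two GPUs with 16-bit precision, with no model changes.
trainer = pl.Trainer(max_epochs=1, accelerator="cpu")
trainer.fit(LitClassifier(), train_loader)
```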
Chris and Daniel sit down to chat about some exciting new AI developments including wav2vec-u (an unsupervised speech recognition model) and meta-learning (a new book about “How To Learn Deep Learning And Thrive In The Digital World”). Along the way they discuss engineering skills for AI developers and strategies for launching AI initiatives in established companies.
Tuhin Srivastava tells Daniel and Chris why BaseTen is the application development toolkit for data scientists. BaseTen’s goal is to make it simple to serve machine learning models, write custom business logic around them, and expose those through API endpoints without configuring any infrastructure.
Today we’re sharing a special crossover episode from The Changelog podcast here on Practical AI. Recently, Daniel Whitenack joined Jerod Santo to talk with José Valim, Elixir creator, about Numerical Elixir. This is José’s newest project that’s bringing Elixir into the world of machine learning. They discuss why José chose this as his next direction, the team’s layered approach, influences and collaborators on this effort, and their awesome collaborative notebook that’s built on Phoenix LiveView.
Apache TVM and OctoML (2021-05-18, 49:06)
90% of AI/ML applications never make it to market, because fine-tuning models for maximum performance across disparate ML software solutions and hardware backends requires a ton of manual labor and is cost-prohibitive. Luis Ceze and his team created Apache TVM at the University of Washington, then left to found OctoML to bring the project to market.
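As a rough illustration of the kind of workflow TVM enables (a hedged sketch, not code from the episode), the snippet below imports an ONNX model, compiles it for a chosen backend, and runs it with TVM's graph executor. The file name, input name, and input shape are placeholder assumptions.

```python
# Hedged Apache TVM sketch: compile a trained model for a target backend and run it.
import onnx
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")              # placeholder model file
shape_dict = {"input": (1, 3, 224, 224)}          # assumed input name and shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

target = "llvm"  # swap for e.g. "cuda" to retarget different hardware
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

dev = tvm.device(target, 0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()  # .asnumpy() in older TVM releases
```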
To say that Jeff Adams is a trailblazer when it comes to speech technology is an understatement. Along with many other notable accomplishments, his team at Amazon developed the Echo, Dash, and Fire TV, changing our perception of how we could interact with devices in our homes. Jeff now leads Cobalt Speech and Language, and he was kind enough to join us for a discussion about human-computer interaction, multimodal AI tasks, the history of language modeling, and AI for social good.
Smart home data is complicated. There are all kinds of devices, and they are in many different combinations, geographies, configurations, etc. This complicated data situation is further exacerbated during a pandemic when time series data seems to be filled with anomalies. Evan Welbourne joins us to discuss how Amazon is synthesizing this disparate data into functionality for the next generation of smart homes. He discusses the challenges of working with smart home technology, and he describes how they developed their latest feature called “hunches.”
Mapping the world (2021-04-27, 53:10)
Ro Gupta from CARMERA teaches Daniel and Chris all about road intelligence. CARMERA maintains the maps that move the world, from HD maps for automated driving to consumer maps for human navigation.
Nhung Ho joins Daniel and Chris to discuss how data science creates insights into financial operations and economic conditions. They delve into topics ranging from predictive forecasting to aid small businesses, to learning about the economic fallout from the COVID-19 pandemic.
Dave Lacey takes Daniel and Chris on a journey that connects the user interfaces that we already know - TensorFlow and PyTorch - with the layers that connect to the underlying hardware. Along the way, we learn about Poplar Graph Framework Software. If you are the type of practitioner who values ‘under the hood’ knowledge, then this is the episode for you.
Nikola Mrkšić, CEO & Co-Founder of PolyAI, takes Daniel and Chris on a deep dive into conversational AI, describing the underlying technologies, and teaching them about the next generation of voice assistants that will be capable of handling true human-level conversations. It’s an episode you’ll be talking about for a long time!
Chris has the privilege of talking with Stanford Professor Margot Gerritsen, who co-leads the Women in Data Science (WiDS) Worldwide Initiative. This is a conversation that everyone should listen to. Professor Gerritsen shares profound insights into how we can all help the women in our lives succeed - in data science and in life - making this a 'must listen' episode for everyone, regardless of gender.
David Sweet, author of “Tuning Up: From A/B testing to Bayesian optimization”, introduces Dan and Chris to system tuning, taking them from A/B testing to response surface methodology, contextual bandits, and finally Bayesian optimization. Along the way, we get fascinating insights into recommender systems and high-frequency trading!
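To make that last step concrete, here is a small, hypothetical Bayesian optimization sketch using scikit-optimize's Gaussian-process minimizer; the toy objective stands in for whatever expensive system metric (latency, revenue, fill rate) you would actually be tuning.

```python
# Minimal Bayesian optimization sketch with scikit-optimize (skopt).
from skopt import gp_minimize
from skopt.space import Real

# Stand-in for an expensive-to-evaluate system metric; minimum is at (0.3, -0.7).
def objective(params):
    x, y = params
    return (x - 0.3) ** 2 + (y + 0.7) ** 2

search_space = [Real(-2.0, 2.0, name="x"), Real(-2.0, 2.0, name="y")]

result = gp_minimize(
    objective,
    search_space,
    n_calls=30,           # total number of (expensive) evaluations
    n_initial_points=10,  # random exploration before the GP surrogate takes over
    random_state=0,
)

print("best params:", result.x)
print("best value:", result.fun)
```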
Our Slack community wanted to hear about AI-driven drug discovery, and we listened. Abraham Heifets from Atomwise joins us for a fascinating deep dive into the intersection of deep learning models and molecule binding. He describes how these methods work and how they are beginning to help create drugs for “undruggable” diseases!
Green AI 🌲 (2021-03-02, 1:00:12)
Empirical analysis from Roy Schwartz (Hebrew University of Jerusalem) and Jesse Dodge (AI2) suggests the AI research community has paid relatively little attention to computational efficiency. A focus on accuracy rather than efficiency increases the carbon footprint of AI research and increases research inequality. In this episode, Jesse and Roy advocate for increased research activity in Green AI (AI research that is more environmentally friendly and inclusive). They highlight success stories and help us understand the practicalities of making our workflows more efficient.
In this Fully-Connected episode, Chris and Daniel discuss low code / no code development, GPU jargon, plus more data leakage issues. They also share some really cool new learning opportunities for leveling up your AI/ML game!
Elad Walach of Aidoc joins Chris to talk about the use of AI for medical imaging interpretation. Starting with the world’s largest annotated training data set of medical images, Aidoc is the radiologist’s best friend, helping the doctor interpret imagery faster and more accurately, and improving the imaging workflow along the way. Elad’s vision for the transformative future of AI in medicine clearly soothes Chris’s concern about managing his aging body in the years to come. ;-)
John Myers of Gretel puts on his apron and rolls up his sleeves to show Dan and Chris how to cook up some synthetic data for automated data labeling, differential privacy, and other purposes. His military and intelligence community background gives him an interesting perspective that piqued the interest of our intrepid hosts.
The nose knows (2021-01-26, 54:58)
Daniel and Chris sniff out the secret ingredients for collecting, displaying, and analyzing odor data with Terri Jordan and Yanis Caritu of Aryballe. It certainly smells like a good time, so join them for this scent-illating episode!
MLCommons launched in December 2020 as an open engineering consortium that seeks to accelerate machine learning innovation and broaden access to this critical technology for the public good. David Kanter, the executive director of MLCommons, joins us to discuss the launch and the ambitions of the organization. In particular we discuss the three pillars of the organization: Benchmarks and Metrics (e.g. MLPerf), Datasets and Models (e.g. People’s Speech), and Best Practices (e.g. MLCube).
Comments (1)

Mark Cund (@AluminumBlonde), Aug 9th: Great introduction to what's going on in AI. Already started on getting MachineBox up and running. Looking forward to my commutes so I can learn some more!