Gradient Dissent - A Machine Learning Podcast by W&B

Author: Lukas Biewald


Description

Brought to you by the folks at Weights & Biases, Gradient Dissent is a weekly machine learning podcast that takes you behind the scenes to learn how industry leaders are putting deep learning models in production at Facebook, Google, Lyft, OpenAI, Salesforce, iRobot, Stanford and more.
47 Episodes
From Apache TVM to OctoML, Luis gives direct insight into the world of ML hardware optimization and where systems optimization is heading. --- Luis Ceze is co-founder and CEO of OctoML, co-author of the Apache TVM Project, and Professor of Computer Science and Engineering at the University of Washington. His research focuses on the intersection of computer architecture, programming languages, machine learning, and molecular biology. Connect with Luis: 📍 Twitter: https://twitter.com/luisceze 📍 University of Washington profile: https://homes.cs.washington.edu/~luisceze/ --- ⏳ Timestamps: 0:00 Intro and sneak peek 0:59 What is TVM? 8:57 Freedom of choice in software and hardware stacks 15:53 How new libraries can improve system performance 20:10 Trade-offs between efficiency and complexity 24:35 Specialized instructions 26:34 The future of hardware design and research 30:03 Where do architecture and research go from here? 30:56 The environmental impact of efficiency 32:49 Optimizing and trade-offs 37:54 What is OctoML and the Octomizer? 42:31 Automating systems design with and for ML 44:18 ML and molecular biology 46:09 The challenges of deployment and post-deployment 🌟 Transcript: http://wandb.me/gd-luis-ceze 🌟 Links: 1. OctoML: https://octoml.ai/ 2. Apache TVM: https://tvm.apache.org/ 3. "Scalable and Intelligent Learning Systems" (Chen, 2019): https://digital.lib.washington.edu/researchworks/handle/1773/44766 4. "Principled Optimization Of Dynamic Neural Networks" (Roesch, 2020): https://digital.lib.washington.edu/researchworks/handle/1773/46765 5. "Cross-Stack Co-Design for Efficient and Adaptable Hardware Acceleration" (Moreau, 2018): https://digital.lib.washington.edu/researchworks/handle/1773/43349 6. "TVM: An Automated End-to-End Optimizing Compiler for Deep Learning" (Chen et al., 2018): https://www.usenix.org/system/files/osdi18-chen.pdf 7. Porcupine is a molecular tagging system introduced in "Rapid and robust assembly and decoding of molecular tags with DNA-based nanopore signatures" (Doroschak et al., 2020): https://www.nature.com/articles/s41467-020-19151-8 --- Get our podcast on these platforms: 👉 Apple Podcasts: http://wandb.me/apple-podcasts 👉 Spotify: http://wandb.me/spotify 👉 Google Podcasts: http://wandb.me/google-podcasts 👉 YouTube: http://wandb.me/youtube 👉 Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
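To make the compiler discussion concrete, here is a minimal sketch of compiling a model with TVM, assuming a hypothetical ONNX file model.onnx with a single input named "input" (API names follow the TVM docs of this era and may have shifted in later releases):

    import onnx
    import tvm
    from tvm import relay

    # Import a trained model into TVM's Relay intermediate representation.
    onnx_model = onnx.load("model.onnx")  # hypothetical model file
    mod, params = relay.frontend.from_onnx(onnx_model, {"input": (1, 3, 224, 224)})

    # Compile with aggressive optimizations for a chosen backend (plain CPU here).
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)

The episode is about everything hidden behind that relay.build call: operator fusion, layout transformations, and auto-tuned kernels for the target hardware.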
Matthew explains how combining machine learning and computational biology can provide mainstream medicine with better diagnostics and insights. --- Matthew Davis is Head of AI at Invitae, the largest and fastest-growing genetic testing company in the world. His research includes bioinformatics, computational biology, NLP, reinforcement learning, and information retrieval. Matthew was previously at IBM Research AI, where he led a research team focused on improving AI systems. Connect with Matthew: 📍 LinkedIn: https://www.linkedin.com/in/matthew-davis-51233386/ 📍 Twitter: https://twitter.com/deadsmiths --- ⏳ Timestamps: 0:00 Sneak peek, intro 1:02 What is Invitae? 2:58 Why genetic testing can help everyone 7:51 How Invitae uses ML techniques 14:02 Modeling molecules and deciding which genes to look at 22:22 NLP applications in bioinformatics 27:10 Team structure at Invitae 36:50 Why reasoning is an underrated topic in ML 40:25 Why having a clear buy-in is important 🌟 Transcript: http://wandb.me/gd-matthew-davis 🌟 Links: 📍 Invitae: https://www.invitae.com/en 📍 Careers at Invitae: https://www.invitae.com/en/careers/ --- Get our podcast on these platforms: 👉 Apple Podcasts: http://wandb.me/apple-podcasts 👉 Spotify: http://wandb.me/spotify 👉 Google Podcasts: http://wandb.me/google-podcasts 👉 YouTube: http://wandb.me/youtube 👉 Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
Clem explains the virtuous cycles behind the creation and success of Hugging Face, and shares his thoughts on where NLP is heading. --- Clément Delangue is co-founder and CEO of Hugging Face, the AI community building the future. Hugging Face started as an open source NLP library and has quickly grown into a commercial product used by over 5,000 companies. Connect with Clem: 📍 Twitter: https://twitter.com/ClementDelangue 📍 LinkedIn: https://www.linkedin.com/in/clementdelangue/ --- 🌟 Transcript: http://wandb.me/gd-clement-delangue 🌟 ⏳ Timestamps: 0:00 Sneak peek and intro 0:56 What is Hugging Face? 4:15 The success of Hugging Face Transformers 7:53 Open source and virtuous cycles 10:37 Working with both TensorFlow and PyTorch 13:20 The "Write With Transformer" project 14:36 Transfer learning in NLP 16:43 BERT and DistilBERT 22:33 GPT 26:32 The power of the open source community 29:40 Current applications of NLP 35:15 The Turing Test and conversational AI 41:19 Why speech is an upcoming field within NLP 43:44 The human challenges of machine learning Links Discussed: 📍 Write With Transformer, Hugging Face Transformer's text generation demo: https://transformer.huggingface.co/ 📍 "Attention Is All You Need" (Vaswani et al., 2017): https://arxiv.org/abs/1706.03762 📍 EleutherAI and GPT-Neo: https://github.com/EleutherAI/gpt-neo 📍 Rasa, open source conversational AI: https://rasa.com/ 📍 Roblox article on BERT: https://blog.roblox.com/2020/05/scaled-bert-serve-1-billion-daily-requests-cpus/ --- Get our podcast on these platforms: 👉 Apple Podcasts: http://wandb.me/apple-podcasts 👉 Spotify: http://wandb.me/spotify 👉 Google Podcasts: http://wandb.me/google-podcasts 👉 YouTube: http://wandb.me/youtube 👉 Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
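For listeners who want to try the Transformers library Clem talks about, a minimal sketch using its pipeline API (the first call downloads and caches a default pretrained model, so expect a short delay and some disk usage):

    from transformers import pipeline

    # Downloads a default pretrained sentiment model on first use.
    classifier = pipeline("sentiment-analysis")
    print(classifier("Gradient Dissent is a great listen."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.999...}]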
Wojciech joins us to talk about the principles behind OpenAI, the Fermi Paradox, and the future stages of developments in AGI. --- Wojciech Zaremba is a co-founder of OpenAI, a research company dedicated to discovering and enacting the path to safe artificial general intelligence. He was also Head of Robotics at OpenAI, where his team developed general-purpose robots through new approaches to transfer learning and taught robots complex behaviors. Connect with Wojciech: Personal website: https://wojzaremba.com/ Twitter: https://twitter.com/woj_zaremba --- Topics Discussed: 0:00 Sneak peek and intro 1:03 The people and principles behind OpenAI 6:31 The stages of future AI developments 13:42 The Fermi paradox 16:18 What drives Wojciech? 19:17 Thoughts on robotics 24:58 Dota and other projects at OpenAI 33:42 What would make an AI conscious? 41:31 How to succeed in robotics Transcript: http://wandb.me/gd-wojciech-zaremba Links: Fermi paradox: https://en.wikipedia.org/wiki/Fermi_paradox OpenAI and Dota: https://openai.com/projects/five/ --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
Phil shares some of the approaches, like sparsity and low precision, behind the breakthrough performance of Graphcore's Intelligence Processing Units (IPUs). --- Phil Brown leads the Applications team at Graphcore, where they're building high-performance machine learning applications for their Intelligence Processing Units (IPUs), new processors specifically designed for AI compute. Connect with Phil: LinkedIn: https://www.linkedin.com/in/philipsbrown/ Twitter: https://twitter.com/phil_s_brown --- 0:00 Sneak peek, intro 1:44 From computational chemistry to Graphcore 5:16 The simulations behind weather prediction 10:54 Measuring improvement in weather prediction systems 15:35 How high performance computing and ML have different needs 19:00 The potential of sparse training 31:08 IPUs and computer architecture for machine learning 39:10 On performance improvements 44:43 The impacts of increasing computing capability 50:24 The ML chicken and egg problem 52:00 The challenges of converging at scale and bringing hardware to market Links Discussed: Rigging the Lottery: Making All Tickets Winners (Evci et al., 2019): https://arxiv.org/abs/1911.11134 Graphcore MK2 Benchmarks: https://www.graphcore.ai/mk2-benchmarks Check out the transcription and discover more awesome ML projects: http://wandb.me/gd-phil-brown --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts​​​ Spotify: http://wandb.me/spotify​​ Google Podcasts: http://wandb.me/google-podcasts​​​ YouTube: http://wandb.me/youtube​​​ Soundcloud: http://wandb.me/soundcloud​​ Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack​​​ Check out our Gallery, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/gallery
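Sparsity, one of the approaches Phil mentions, means zeroing out most of a network's weights so hardware can skip the corresponding work. As a loose, framework-level illustration (not Graphcore-specific), PyTorch's pruning utilities can induce magnitude-based sparsity:

    import torch
    import torch.nn.utils.prune as prune

    layer = torch.nn.Linear(256, 256)
    # Zero out the 90% of weights with the smallest absolute values.
    prune.l1_unstructured(layer, name="weight", amount=0.9)
    sparsity = (layer.weight == 0).float().mean().item()
    print(f"weight sparsity: {sparsity:.0%}")  # roughly 90%

Papers like Rigging the Lottery (linked above) go further and learn which weights to keep during training rather than pruning after the fact.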
From working on COVID-19 vaccine rollout to writing a book on responsible ML, Alyssa shares her thoughts on meaningful projects and the importance of teamwork. --- Alyssa Simpson Rochwerger is a Director of Product at Blue Shield of California, pursuing her dream of using technology to improve healthcare. She has over a decade of experience in building technical data-driven products and has held numerous leadership roles for machine learning organizations, including VP of AI and Data at Appen and Director of Product at IBM Watson. --- Topics Discussed: 0:00 Sneak peek, intro 1:17 Working on COVID-19 vaccine rollout in California 6:50 Real World AI 12:26 Diagnosing bias in models 17:43 Common challenges in ML 21:56 Finding meaningful projects 24:28 ML applications in health insurance 31:21 Longitudinal health records and data cleaning 38:24 Following your interests 40:21 Why teamwork is crucial Transcript: http://wandb.me/gd-alyssa-s-rochwerger Links Discussed: My Turn: https://myturn.ca.gov/ "Turn the Ship Around!": https://www.penguinrandomhouse.com/books/314163/turn-the-ship-around-by-l-david-marquet/ --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
Sean joins us to chat about ML models and tools at Lyft Rideshare Labs, Python vs R, time series forecasting with Prophet, and election forecasting. --- Sean Taylor is a Data Scientist at Lyft Rideshare Labs (and its former Head), specializing in methods for solving causal inference and business decision problems. Previously, he was a Research Scientist on Facebook's Core Data Science team. His interests include experiments, causal inference, statistics, machine learning, and economics. Connect with Sean: Personal website: https://seanjtaylor.com/ Twitter: https://twitter.com/seanjtaylor LinkedIn: https://www.linkedin.com/in/seanjtaylor/ --- Topics Discussed: 0:00 Sneak peek, intro 0:50 Pricing algorithms at Lyft 7:46 Loss functions and ETAs at Lyft 12:59 Models and tools at Lyft 20:46 Python vs R 25:30 Forecasting time series data with Prophet 33:06 Election forecasting and prediction markets 40:55 Comparing and evaluating models 43:22 Bottlenecks in going from research to production Transcript: http://wandb.me/gd-sean-taylor Links Discussed: "How Lyft predicts a rider’s destination for better in-app experience": https://eng.lyft.com/how-lyft-predicts-your-destination-with-attention-791146b0a439 Prophet: https://facebook.github.io/prophet/ Andrew Gelman's blog post "Facebook's Prophet uses Stan": https://statmodeling.stat.columbia.edu/2017/03/01/facebooks-prophet-uses-stan/ Twitter thread "Election forecasting using prediction markets": https://twitter.com/seanjtaylor/status/1270899371706466304 "An Updated Dynamic Bayesian Forecasting Model for the 2020 Election": https://hdsr.mitpress.mit.edu/pub/nw1dzd02/release/1 --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
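A minimal sketch of Prophet, the forecasting library Sean co-created at Facebook (it expects a dataframe with a ds date column and a y value column; the package has been published under both the fbprophet and prophet names):

    import pandas as pd
    from prophet import Prophet  # older releases: from fbprophet import Prophet

    # Toy series; in practice ds/y come from real historical data.
    df = pd.DataFrame({
        "ds": pd.date_range("2020-01-01", periods=100),
        "y": range(100),
    })
    m = Prophet()
    m.fit(df)
    future = m.make_future_dataframe(periods=30)  # extend 30 days past the data
    forecast = m.predict(future)
    print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())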
Polly explains how microfluidics allow bioengineering researchers to create high-throughput data, and shares her experiences with biology and machine learning. --- Polly Fordyce is an Assistant Professor of Genetics and Bioengineering and fellow of the ChEM-H Institute at Stanford. She is the Principal Investigator of The Fordyce Lab, which focuses on developing and applying new microfluidic platforms for quantitative, high-throughput biophysics and biochemistry. Twitter: https://twitter.com/fordycelab Website: http://www.fordycelab.com/ --- Topics Discussed: 0:00 Sneak peek, intro 2:11 Background on protein sequencing 7:38 How changes to a protein's sequence alter its structure and function 11:07 Microfluidics and machine learning 19:25 Why protein folding is important 25:17 Collaborating with ML practitioners 31:46 Transfer learning and big data sets in biology 38:42 Where Polly hopes bioengineering research will go 42:43 Advice for students Transcript: http://wandb.me/gd-polly-fordyce Links Discussed: "The Weather Makers": https://en.wikipedia.org/wiki/The_Wea... --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
Adrien Gaidon shares his approach to building teams and taking state-of-the-art research from conception to production at Toyota Research Institute. --- Adrien Gaidon is the Head of Machine Learning Research at the Toyota Research Institute (TRI). His research focuses on scaling up ML for robot autonomy, spanning Scene and Behavior Understanding, Simulation for Deep Learning, 3D Computer Vision, and Self-Supervised Learning. Connect with Adrien: Twitter: https://twitter.com/adnothing LinkedIn: https://www.linkedin.com/in/adrien-gaidon-63ab2358/ Personal website: https://adriengaidon.com/ --- Topics Discussed: 0:00 Sneak peek, intro 0:48 Guitars and other favorite tools 3:55 Why is PyTorch so popular? 11:40 Autonomous vehicle research in the long term 15:10 Game-changing academic advances 20:53 The challenges of bringing autonomous vehicles to market 26:05 Perception and prediction 35:01 Fleet learning and meta learning 41:20 The human aspects of machine learning 44:25 The scalability bottleneck Transcript: http://wandb.me/gd-adrien-gaidon Links Discussed: TRI Global Research: https://www.tri.global/research/ Todoist: https://todoist.com/ Contrastive Learning of Structured World Models: https://arxiv.org/abs/1911.12247 SimCLR: https://arxiv.org/abs/2002.05709 --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
A look at how Nimrod and the team at Nanit are building smart baby monitor systems, from data collection to model deployment and production monitoring. --- Nimrod Shabtay is a Senior Computer Vision Algorithm Developer at Nanit, a New York-based company that's developing better baby monitoring devices. Connect with Nimrod: LinkedIn: https://www.linkedin.com/in/nimrod-shabtay-76072840/ --- Links Discussed: Guidelines for building an accurate and robust ML/DL model in production: https://engineering.nanit.com/guideli...​ Careers at Nanit: https://www.nanit.com/jobs​ --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts​​ Spotify: http://wandb.me/spotify​ Google: http://wandb.me/google-podcasts​​ YouTube: http://wandb.me/youtube​​ Soundcloud: http://wandb.me/soundcloud​ --- Join our community of ML practitioners where we host AMAs, share interesting projects, and more: http://wandb.me/slack​​ Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
Chris shares some of the incredible work and innovations behind deep space exploration at NASA JPL and reflects on the past, present, and future of machine learning. --- Chris Mattmann is the Chief Technology and Innovation Officer at NASA Jet Propulsion Laboratory, where he focuses on organizational innovation through technology. He's worked on space missions such as the Orbiting Carbon Observatory 2 and Soil Moisture Active Passive satellites. Chris is also a co-creator of Apache Tika, a content detection and analysis framework that was one of the key technologies used to uncover the Panama Papers, and is the author of "Machine Learning with TensorFlow, Second Edition" and "Tika in Action". Connect with Chris: Personal website: https://www.mattmann.ai/ Twitter: https://twitter.com/chrismattmann --- Topics Discussed: 0:00 Sneak peek, intro 0:52 On Perseverance and Ingenuity 8:40 Machine learning applications at NASA JPL 11:51 Innovation in scientific instruments and data formats 18:26 Data processing levels: Level 1 vs Level 2 vs Level 3 22:20 Competitive data processing 27:38 Kerbal Space Program 30:19 The ideas behind "Machine Learning with TensorFlow, Second Edition" 35:37 The future of MLOps and AutoML 38:51 Machine learning at the edge Transcript: http://wandb.me/gd-chris-mattmann Links Discussed: Perseverance and Ingenuity: https://mars.nasa.gov/mars2020/ Data processing levels at NASA: https://earthdata.nasa.gov/collaborate/open-data-services-and-software/data-information-policy/data-levels OCO-2: https://www.jpl.nasa.gov/missions/orbiting-carbon-observatory-2-oco-2 "Machine Learning with TensorFlow, Second Edition" (2020): https://www.manning.com/books/machine-learning-with-tensorflow-second-edition "Tika in Action" (2011): https://www.manning.com/books/tika-in-action --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
From legged locomotion to autonomous driving, Vladlen explains how simulation and abstraction help us understand embodied intelligence. --- Vladlen Koltun is the Chief Scientist for Intelligent Systems at Intel, where he leads an international lab of researchers working in machine learning, robotics, computer vision, computational science, and related areas. Connect with Vladlen: Personal website: http://vladlen.info/ LinkedIn: https://www.linkedin.com/in/vladlenkoltun/ --- 0:00 Sneak peek and intro 1:20 "Intelligent Systems" vs "AI" 3:02 Legged locomotion 9:26 The power of simulation 14:32 Privileged learning 18:19 Drone acrobatics 20:19 Using abstraction to transfer simulations to reality 25:35 Sample Factory for reinforcement learning 34:30 What inspired CARLA and what keeps it going 41:43 The challenges of and for robotics Links Discussed: Learning quadrupedal locomotion over challenging terrain (Lee et al., 2020): https://robotics.sciencemag.org/content/5/47/eabc5986.abstract Deep Drone Acrobatics (Kaufmann et al., 2020): https://arxiv.org/abs/2006.05768 Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning (Petrenko et al., 2020): https://arxiv.org/abs/2006.11751 CARLA: https://carla.org/ --- Check out the transcription and discover more awesome ML projects: http://wandb.me/vladlen-koltun-podcast Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud --- Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
Dominik shares the story and principles behind Vega and Vega-Lite, and explains how visualization and machine learning help each other. --- Dominik Moritz is a co-author of Vega-Lite, a high-level visualization grammar for building interactive plots. He's also a professor at the Human-Computer Interaction Institute at Carnegie Mellon University and an ML researcher at Apple. Connect with Dominik: Twitter: https://twitter.com/domoritz GitHub: https://github.com/domoritz Personal website: https://www.domoritz.de/ --- 0:00 Sneak peek, intro 1:15 What is Vega-Lite? 5:39 The grammar of graphics 9:00 Using visualizations creatively 11:36 Vega vs Vega-Lite 16:03 ggplot2 and machine learning 18:39 Voyager and the challenges of scale 24:54 Model explainability and visualizations 31:24 Underrated topics: constraints and visualization theory 34:38 The challenge of metrics in deployment 36:54 In between aggregate statistics and individual examples Links Discussed: Vega-Lite: https://vega.github.io/vega-lite/ Data analysis and statistics: an expository overview (Tukey and Wilk, 1966): https://dl.acm.org/doi/10.1145/1464291.1464366 Slope chart / slope graph: https://vega.github.io/vega-lite/examples/line_slope.html Voyager: https://github.com/vega/voyager Draco: https://github.com/uwdata/draco Check out the transcription and discover more awesome ML projects: http://wandb.me/gd-domink-moritz --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud --- Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
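Vega-Lite specs are plain JSON, but the grammar-of-graphics idea Dominik describes is easiest to see from Python via Altair, a binding that compiles to Vega-Lite; a minimal sketch:

    import altair as alt
    import pandas as pd

    df = pd.DataFrame({
        "day": pd.date_range("2021-01-01", periods=10),
        "value": [3, 1, 4, 1, 5, 9, 2, 6, 5, 3],
    })

    # Marks set the geometry; encodings map data fields to visual channels.
    chart = alt.Chart(df).mark_line(point=True).encode(
        x="day:T",    # T = temporal
        y="value:Q",  # Q = quantitative
    )
    print(chart.to_json())  # the underlying Vega-Lite spec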
How Cade got access to the stories behind some of the biggest advancements in AI, and the dynamic playing out between leaders at companies like Google, Microsoft, and Facebook. Cade Metz is a New York Times reporter covering artificial intelligence, driverless cars, robotics, virtual reality, and other emerging areas. Previously, he was a senior staff writer with Wired magazine and the U.S. editor of The Register, one of Britain’s leading science and technology news sites. His first book, "Genius Makers", tells the stories of the pioneers behind AI. Get the book: http://bit.ly/GeniusMakers Follow Cade on Twitter: https://twitter.com/CadeMetz/ And on LinkedIn: https://www.linkedin.com/in/cademetz/ Topics discussed: 0:00 sneak peek, intro 3:25 audience and characters 7:18 *spoiler alert* AGI 11:01 book ends, but story goes on 17:31 overinflated claims in AI 23:12 DeepMind, OpenAI, building AGI 29:02 neuroscience and psychology, outsiders 34:35 Early adopters of ML 38:34 WojNet, where is credit due? 42:45 press covering AI 46:38 Aligning technology and need Read the transcript and discover awesome ML projects: http://wandb.me/cade-metz Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
Learn why traditional home security systems tend to fail and how Dave’s love of tinkering and deep learning are helping him and the team at Deep Sentinel avoid those same pitfalls. He also discusses the importance of combatting racial bias by designing race-agnostic systems and what their approach is to solving that problem. Dave Selinger is the co-founder and CEO of Deep Sentinel, an intelligent crime prediction and prevention system that stops crime before it happens using deep learning vision techniques. Prior to founding Deep Sentinel, Dave co-founded RichRelevance, an AI recommendation company. https://www.deepsentinel.com/ https://www.meetup.com/East-Bay-Tri-Valley-Machine-Learning-Meetup/ https://twitter.com/daveselinger Topics covered: 0:00 Sneak peek, smart vs dumb cameras, intro 0:59 What is Deep Sentinel, how does it work? 6:00 Hardware, edge devices 10:40 OpenCV Fork, tinkering 16:18 ML Meetup, Climbing the AI research ladder 20:36 Challenge of Safety critical applications 27:03 New models, re-training, exhibitionists and voyeurs 31:17 How do you prove your cameras are better? 34:24 Angel investing in AI companies 38:00 Social responsibility with data 43:33 Combatting bias with data systems 52:22 Biggest bottlenecks in production Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Read the transcript and discover more awesome machine learning material here: http://wandb.me/Dave-selinger-podcast Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
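As a toy illustration of the "dumb camera" baseline Dave contrasts with deep learning, classic motion detection is just frame differencing; a sketch with OpenCV (camera index and thresholds are arbitrary placeholder values):

    import cv2

    cap = cv2.VideoCapture(0)  # default webcam
    _, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev_gray)  # pixel-wise change between frames
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > 5000:  # arbitrary sensitivity threshold
            print("motion detected")
        prev_gray = gray

A system like Deep Sentinel layers learned models on top precisely because this kind of pixel-level trigger cannot tell a burglar from a house cat.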
Since reinforcement learning requires hefty compute resources, it can be tough to keep up without a serious budget of your own. Find out how the team at Facebook AI Research (FAIR) is looking to increase access and level the playing field with the help of NetHack, an archaic rogue-like video game from the late 80s. Links discussed: The NetHack Learning Environment: https://ai.facebook.com/blog/nethack-learning-environment-to-advance-deep-reinforcement-learning/ Reinforcement learning, intrinsic motivation: https://arxiv.org/abs/2002.12292 Knowledge transfer: https://arxiv.org/abs/1910.08210 Tim Rocktäschel is a Research Scientist at Facebook AI Research (FAIR) London and a Lecturer in the Department of Computer Science at University College London (UCL). At UCL, he is a member of the UCL Centre for Artificial Intelligence and the UCL Natural Language Processing group. Prior to that, he was a Postdoctoral Researcher in the Whiteson Research Lab, a Stipendiary Lecturer in Computer Science at Hertford College, and a Junior Research Fellow in Computer Science at Jesus College, at the University of Oxford. https://twitter.com/_rockt Heinrich Kuttler is an AI and machine learning researcher at Facebook AI Research (FAIR) and before that was a research engineer and team lead at DeepMind. https://twitter.com/HeinrichKuttler https://www.linkedin.com/in/heinrich-kuttler/ Topics covered: 0:00 a lack of reproducibility in RL 1:05 What is NetHack and how did the idea come to be? 5:46 RL in Go vs NetHack 11:04 performance of vanilla agents, what do you optimize for 18:36 transferring domain knowledge, source diving 22:27 human vs machine intrinsic learning 28:19 ICLR paper - exploration and RL strategies 35:48 the future of reinforcement learning 43:18 going from supervised to reinforcement learning 45:07 reproducibility in RL 50:05 most underrated aspect of ML, biggest challenges? Get our podcast on these other platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
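The NetHack Learning Environment registers as a standard Gym environment, so a random-agent loop takes only a few lines (a sketch following the project's README; environment names may have changed in later releases):

    import gym
    import nle  # registers the NetHack environments with Gym

    env = gym.make("NetHackScore-v0")
    obs = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        # Random actions; the whole research challenge is replacing this line.
        obs, reward, done, info = env.step(env.action_space.sample())
        total_reward += reward
    print("episode return:", total_reward)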
From teaching at Stanford to co-founding Coursera, insitro, and Engageli, Daphne Koller reflects on the importance of education, giving back, and cross-functional research. Daphne Koller is the founder and CEO of insitro, a company using machine learning to rethink drug discovery and development. She is a MacArthur Fellowship recipient, member of the National Academy of Engineering, member of the American Academy of Arts and Sciences, and has been a Professor in the Department of Computer Science at Stanford University. In 2012, Daphne co-founded Coursera, one of the world's largest online education platforms. She is also a co-founder of Engageli, a digital platform designed to optimize student success. https://www.insitro.com/ https://www.insitro.com/jobs https://www.engageli.com/ https://www.coursera.org/ Follow Daphne on Twitter: https://twitter.com/DaphneKoller https://www.linkedin.com/in/daphne-koller-4053a820/ Topics covered: 0:00 Giving back and intro 2:10 insitro's mission statement and Eroom's Law 3:21 The drug discovery process and how ML helps 10:05 Protein folding 15:48 From 2004 to now, what's changed? 22:09 On the availability of biology and vision datasets 26:17 Cross-functional collaboration at insitro 28:18 On teaching and founding Coursera 31:56 The origins of Engageli 36:38 Probabilistic graphical models 39:33 Most underrated topic in ML 43:43 Biggest day-to-day challenges Get our podcast on these other platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
Piero shares the story of how Ludwig was created, as well as the ins and outs of how Ludwig works and the future of machine learning with no code. Piero Molino is a Staff Research Scientist in the Hazy Research group at Stanford University. He is a former founding member of Uber AI, where he created Ludwig, worked on applied projects (COTA, Graph Learning for Uber Eats, Uber’s Dialogue System), and published research on NLP, Dialogue, Visualization, Graph Learning, Reinforcement Learning, and Computer Vision. Topics covered: 0:00 Sneak peek and intro 1:24 What is Ludwig, at a high level? 4:42 What is Ludwig doing under the hood? 7:11 No-code machine learning and data types 14:15 How Ludwig started 17:33 Model performance and underlying architecture 21:52 On Python in ML 24:44 Defaults and W&B integration 28:26 Perspective on NLP after 10 years in the field 31:49 Most underrated aspect of ML 33:30 Hardest part of deploying ML models in the real world Learn more about Ludwig: https://ludwig-ai.github.io/ludwig-docs/ Piero's Twitter: https://twitter.com/w4nderlus7 Follow Piero on LinkedIn: https://www.linkedin.com/in/pieromolino/?locale=en_US Get our podcast on these other platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
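Ludwig is declarative: you describe what goes in and what comes out, and it assembles the model. A minimal sketch (the column names and CSV file are hypothetical; API shape per the Ludwig docs around the time of this episode):

    from ludwig.api import LudwigModel

    # Declare inputs and outputs by name and type; Ludwig picks the architecture.
    config = {
        "input_features": [{"name": "review_text", "type": "text"}],
        "output_features": [{"name": "sentiment", "type": "category"}],
    }
    model = LudwigModel(config)
    results = model.train(dataset="reviews.csv")  # hypothetical CSV with those columns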
How Rosanne is working to democratize AI research and improve diversity and fairness in the field by starting a nonprofit, after being a founding member of Uber AI Labs, doing lots of amazing research, and publishing papers at top conferences. Rosanne is a machine learning researcher, and co-founder of ML Collective, a nonprofit organization for open collaboration and mentorship. Before that, she was a founding member of Uber AI. She has published research at NeurIPS, ICLR, ICML, Science, and other top venues. While at school she used neural networks to help discover novel materials and to optimize fuel efficiency in hybrid vehicles. ML Collective: http://mlcollective.org/ Controlling Text Generation with Plug and Play Language Models: https://eng.uber.com/pplm/ LCA: Loss Change Allocation for Neural Network Training: https://eng.uber.com/research/lca-loss-change-allocation-for-neural-network-training/ Topics covered: 0:00 Sneak peek, Intro 1:53 The origin of ML Collective 5:31 Why a nonprofit and who is MLC for? 14:30 LCA, Loss Change Allocation 18:20 Running an org, research vs admin work 20:10 Advice for people trying to get published 24:15 On reading papers and the Intrinsic Dimension paper 36:25 NeurIPS - Open Collaboration 40:20 What is your reward function? 44:44 Underrated aspect of ML 47:22 How to get involved with MLC Get our podcast on these other platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
In this episode of Gradient Dissent, Primer CEO Sean Gourley and Lukas Biewald sit down to talk about NLP, working with vast amounts of information, and how crucially it relates to national defense. They also chat about their experience of being second-time founders coming from a data science background and how it affects the way they run their companies. We hope you enjoy this episode! Sean Gourley is the founder and CEO of Primer, a natural language processing startup in San Francisco. Previously, he was CTO of Quid, an augmented intelligence company that he co-founded back in 2009. And prior to that, he worked on self-repairing nano circuits at NASA Ames. Sean has a PhD in physics from Oxford, where his research as a Rhodes Scholar focused on graph theory, complex systems, and the mathematical patterns underlying modern war. Primer: https://primer.ai/ Follow Sean on Twitter: https://twitter.com/sgourley Topics Covered: 0:00 Sneak peek, intro 1:42 Primer's mission and purpose 4:29 The Diamond Age – How do we train machines to observe the world and help us understand it 7:44 A self-writing Wikipedia 9:30 Second-time founders 11:26 Being a founder as a data scientist 15:44 Commercializing algorithms 17:54 Is GPT-3 worth the hype? The mind-blowing scale of transformers 23:00 AI safety, military/defense 29:20 Disinformation: does ML play a role? 34:55 Establishing ground truth and informational provenance 39:10 COVID misinformation, masks, division 44:07 Most underrated aspect of ML 45:09 Biggest bottlenecks in ML? Visit our podcasts homepage for transcripts and more episodes! www.wandb.com/podcast Get our podcast on these other platforms: YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work: http://wandb.me/salon Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery