Jay Shah Podcast

Author: Jay Shah

Subscribed: 19 | Played: 118

Description

Interviews with scientists and engineers working in machine learning and AI about their journeys, insights, and discussions of the latest research topics.
86 Episodes
Dr. Petar Veličković is a Staff Research Scientist at Google DeepMind and an Affiliated Lecturer at the University of Cambridge. He is known for his research contributions in graph representation learning, particularly graph neural networks and graph attention networks. At DeepMind, he has been working on Neural Algorithmic Reasoning, which we discuss in this episode. Petar's research has been featured in numerous media articles and has had real-world impact, including improved predictions in Google Maps.
Time stamps
00:00:00 Highlights
00:01:00 Introduction
00:01:50 Entry point in AI
00:03:44 Idea of Graph Attention Networks
00:06:50 Towards AGI
00:09:58 Attention in deep learning
00:13:15 Attention vs Convolutions
00:20:20 Neural Algorithmic Reasoning (NAR)
00:25:40 End-to-end learning vs NAR
00:30:40 Improving Google Maps predictions
00:34:08 Interpretability
00:41:28 Working at Google DeepMind
00:47:25 Fundamental vs applied side of research
00:50:58 Industry vs academia in AI research
00:54:25 Tips to young researchers
01:05:55 Is a PhD required for AI research?
More about Petar: https://petar-v.com/
Graph Attention Networks: https://arxiv.org/abs/1710.10903
Neural Algorithmic Reasoning: https://www.cell.com/patterns/pdf/S2666-3899(21)00099-4.pdf
TacticAI paper: https://arxiv.org/abs/2310.10553
And his collection of invited talks: @petarvelickovic6033
About the Host: Jay is a PhD student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

Dr. Yezhou Yang is an Associate Professor at Arizona State University and director of the Active Perception Group at ASU. His research interests include cognitive robotics, computer vision, understanding human actions from visual input, and grounding them in natural language. Prior to joining ASU, he completed his Ph.D. at the University of Maryland and postdoctoral work at its Computer Vision Lab and Perception and Robotics Lab.
Timestamps of the conversation
00:01:02 Introduction
00:01:46 Interest in AI
00:17:04 Entry into robotics & AI perception
00:20:59 Combining vision & language to improve robot perception
00:23:30 End-to-end learning vs traditional knowledge graphs
00:28:28 What do LLMs learn?
00:30:30 Nature of AI research
00:36:00 Why vision & language in AI?
00:45:40 Learning vs reasoning in neural networks
00:53:05 Bringing AI to the general crowd
01:00:10 Transformers in vision
01:08:54 Democratization of AI
01:13:42 Motivation for research: theory or application?
01:18:50 Surpassing human intelligence
01:25:13 Open challenges in computer vision research
01:30:19 Doing research is a privilege
01:35:00 Rejections, tips to read & write good papers
01:43:37 Tips for AI enthusiasts
01:47:35 What is a good research problem?
01:50:30 Dos and don'ts in AI research
More about Dr. Yang: https://yezhouyang.engineering.asu.edu/
And his Twitter handle: https://twitter.com/Yezhou_Yang
About the Host: Jay is a PhD student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Check out Rora: https://teamrora.com/jayshah
Guide to STEM PhD AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

Dr. Hyrum Anderson is a Distinguished Machine Learning Engineer at Robust Intelligence. Prior to that, he was Principal Architect of Trustworthy Machine Learning at Microsoft, where he also founded Microsoft's AI Red Team; he has also led security research at MIT Lincoln Laboratory, Sandia National Laboratories, and Mandiant, and was Chief Scientist at Endgame (later acquired by Elastic). He is the co-author of the book "Not with a Bug, But with a Sticker," and his research interests include assessing the security and privacy of ML systems and building robust AI models.
Timestamps of the conversation
00:50 Introduction
01:40 Background in AI and ML security
04:45 Attacks on ML systems
08:20 Fraction of ML systems prone to attacks
10:38 Operational risks with security measures
13:40 Solutions from an algorithmic or policy perspective
15:46 AI regulation and policy making
22:40 Co-development of AI and security measures
24:06 Risks of generative AI and mitigation
27:45 Influencing an AI model
30:08 Prompt stealing on ChatGPT
33:50 Microsoft AI Red Team
38:46 Managing risks
39:41 Government regulations
43:04 What to expect from the book
46:40 Black in AI & Bountiful Children's Foundation
Check out Rora: https://teamrora.com/jayshah
Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023
Rora's negotiation philosophy:
https://www.teamrora.com/post/the-biggest-misconception-about-negotiating-salary
https://www.teamrora.com/post/job-offer-negotiation-lies
Hyrum's LinkedIn: https://www.linkedin.com/in/hyrumanderson/
And research: https://scholar.google.com/citations?user=pP6yo9EAAAAJ&hl=en
Book - Not with a Bug, But with a Sticker: https://www.amazon.com/Not-Bug-But-Sticker-Learning/dp/1119883989/
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

Meredith Broussard is an associate professor at New York University and research director at the NYU Alliance for Public Interest Technology. Her research interests include using data analysis for good and ethical AI. She is also the author of the book "More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech," which we discuss in this podcast.
Time stamps of the conversation
00:42 Introduction
01:17 Background
02:17 Meaning of "it is not a glitch" in the book title
04:40 How are biases coded into AI systems?
08:45 AI is not the solution to every problem
09:55 Algorithm auditing
11:57 Why don't organizations use algorithmic auditing more often?
15:12 Techno-chauvinism and drawing boundaries
23:18 Bias issues with ChatGPT and auditing the model
27:55 Using AI for public good - AI in context
31:52 Advice to young researchers in AI
Meredith's homepage: https://meredithbroussard.com/
And her book: https://mitpress.mit.edu/9780262047654/more-than-a-glitch/
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

Part 2 of my podcast with David Stutz. (Part 1: https://youtu.be/J7hzMYUcfto) David is a research scientist at DeepMind working on building robust and safe deep learning models. Prior to joining DeepMind, he was a PhD student at the Max Planck Institute for Informatics. He also maintains a fantastic blog on various topics related to machine learning and graduate life, which offers insights for young researchers.
00:00:00 Working at DeepMind
00:08:20 Importance of abstraction and collaboration in research
00:13:08 DeepMind internship project
00:19:39 What drives research projects at DeepMind
00:27:45 Research in industry vs academia
00:30:45 Interview tips for research roles, at DeepMind or other companies
00:44:38 Finding the right advisor & institute for a PhD
01:02:12 Do you really need a Ph.D. to do AI/ML research?
01:08:28 Academia vs industry: making the choice
01:10:49 Pressure to publish more papers
01:21:35 Artificial General Intelligence (AGI)
01:33:24 Advice to young enthusiasts on getting started
David's homepage: https://davidstutz.de/
And his blog: https://davidstutz.de/category/blog/
Research work: https://scholar.google.com/citations?user=TxEy3cwAAAAJ&hl=en
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

Rora helps top AI researchers and professionals negotiate their pay, often as they transition from academia into industry. Moving into tech is a huge transition for many PhDs and postdocs: the pay is much higher and the terms of employment are often quite different. In the past 5 years, the Rora team has helped over 1000 STEM professionals negotiate more than $10M in additional earnings from companies like DeepMind, OpenAI, Google Brain, and Anthropic, and has helped them advocate for better roles, more alignment with their managers, and more flexible work.
Referral link: https://teamrora.com/jayshah
Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023 (the majority of the STEM PhDs we support are going into tech roles)
Rora's negotiation philosophy:
https://www.teamrora.com/post/the-biggest-misconception-about-negotiating-salary
https://www.teamrora.com/post/job-offer-negotiation-lies
https://www.teamrora.com/post/roras-3-keys-to-negotiating-a-new-job-offer
00:00 Highlights
00:55 Introduction
01:42 About Rora
05:40 Myths in job negotiations
08:58 Fear of losing job offers
12:36 30-60-90 day roadmap for negotiation
15:28 Knowing if you should negotiate
20:46 Negotiating with only one offer
24:40 What to negotiate?
29:00 Knowing if you're low-balled in offers
31:31 When negotiations don't work out
35:00 When & how to negotiate?
43:00 Negotiating promotions
46:45 Is there always room for negotiation?
49:42 Quick advice to people who have offers in hand
55:32 Wrong assumptions
Learn more about Jordan: https://www.linkedin.com/in/jordansale
And Rora: https://teamrora.com/jayshah
Also check out these talks on all available podcast platforms: https://jayshah.buzzsprout.com
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

Part 1 of my podcast with David Stutz. (Part 2: https://youtu.be/IumJcB7bE20) David is a research scientist at DeepMind working on building robust and safe deep learning models. Prior to joining DeepMind, he was a Ph.D. student at the Max Planck Institute for Informatics. He also maintains a fantastic blog on various topics related to machine learning and graduate life, which offers insights for young researchers.
Check out Rora: https://teamrora.com/jayshah
Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023
00:00:00 Highlights and Sponsors
00:01:22 Intro
00:02:14 Interest in AI
00:12:26 Finding research interests
00:22:41 Robustness vs generalization in deep neural networks
00:28:03 Generalization vs model performance trade-off
00:37:30 On-manifold adversarial examples for better generalization
00:48:20 Vision transformers
00:49:45 Confidence-calibrated adversarial training
00:59:25 Improving hardware architecture for deep neural networks
01:08:45 What's the tradeoff in quantization?
01:19:07 Amazing aspects of working at DeepMind
01:27:38 Learning the skill of abstraction when collaborating
David's homepage: https://davidstutz.de/
And his blog: https://davidstutz.de/category/blog/
Research work: https://scholar.google.com/citations?user=TxEy3cwAAAAJ&hl=en
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

Dr. Subbarao Kambhampati is a Professor of Computer Science at Arizona State University and the director of the Yochan lab, where his research focuses on decision-making and planning, specifically in the context of human-aware AI systems. He has been named a fellow of AAAI, AAAS, and ACM in recognition of his research contributions and has also received distinguished alumnus awards from the University of Maryland and IIT Madras.
Check out Rora: https://teamrora.com/jayshah
Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023
Rora's negotiation philosophy:
https://www.teamrora.com/post/the-biggest-misconception-about-negotiating-salary
https://www.teamrora.com/post/job-offer-negotiation-lies
00:00:00 Highlights and Intro
00:02:16 What is ChatGPT doing?
00:10:27 Does it really learn anything?
00:17:28 ChatGPT hallucinations & getting facts wrong
00:23:29 Generative vs predictive modeling in AI
00:41:51 Learning common patterns from language
00:57:00 Implications for society
01:03:28 Can we fix ChatGPT hallucinations?
01:26:24 RLHF is not enough
01:32:47 Existential risk of AI (or ChatGPT)
01:49:04 Open sourcing in AI
02:04:32 OpenAI is not "open" anymore
02:08:51 Can AI program itself in the future?
02:25:08 From deep & narrow AI to broad & shallow AI
02:30:03 AI as assistive technology - understanding its strengths & limitations
02:44:14 Summary
Article referred to in the conversation: https://thehill.com/opinion/technology/3861182-beauty-lies-chatgpt-welcome-to-the-post-truth-world/
More about Prof. Rao
Homepage: https://rakaposhi.eas.asu.edu/
Twitter: https://twitter.com/rao2z
Also check out these talks on all available podcast platforms: https://jayshah.buzzsprout.com
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Karyna Naminas is the CEO of Label Your Data, which provides data annotation services to organizations developing AI-based solutions.
Check out Rora: https://teamrora.com/jayshah
Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023
Rora's negotiation philosophy:
https://www.teamrora.com/post/the-biggest-misconception-about-negotiating-salary
https://www.teamrora.com/post/job-offer-negotiation-lies
00:00:00 Introduction and Sponsors
00:02:28 Background before being a CEO
00:06:38 Fascinating aspects of AI
00:09:10 Data annotation outside of AI
00:10:21 Effect of COVID, the Russia-Ukraine war, and the economic crisis on the business
00:18:47 Sourcing data annotators
00:22:40 Challenges in annotation
00:31:00 Data annotation for military applications in Ukraine
00:41:42 Tools used for annotation
00:44:56 Segment Anything and ChatGPT to facilitate annotation
00:51:00 Key responsibilities as a CEO
00:53:58 Metrics for performance evaluation
00:59:56 Building leadership
01:06:06 Advice to aspiring entrepreneurs
01:09:34 Dealing with failures as a CEO
Learn more about Karyna: https://www.linkedin.com/in/karyna-naminas-923908200
Label Your Data: https://labelyourdata.com/
LinkedIn: https://www.linkedin.com/company/label-your-data/
Also check out these talks on all available podcast platforms: https://jayshah.buzzsprout.com
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Amey Dharwadker works as a Machine Learning Tech Lead Manager at Meta, supporting Facebook's Video Recommendations Ranking team and building and deploying personalization models for billions of users. He has also been instrumental in driving a significant increase in user engagement and revenue for the company through his work on News Feed and Ads ranking ML models. As an experienced researcher, he has co-authored publications at various AI/ML conferences and holds patents in the fields of recommender systems and machine learning. He has undergraduate and graduate degrees from the National Institute of Technology Tiruchirappalli (India) and Columbia University.
Time stamps of the conversation
00:00:46 Introduction
00:01:46 Getting into recommendation systems
00:05:25 Projects currently working on at Facebook, Meta
00:06:55 User satisfaction to improve recommendations
00:08:25 Implicit metrics to improve engagement
00:11:34 Video vs product recommendations based on fixed attributes
00:13:20 Understanding video content
00:15:55 Working at scale
00:20:02 Cold start problem
00:22:41 Data privacy concerns
00:24:36 Challenges of deploying machine learning models
00:30:56 Trade-off in metrics to boost user engagement
00:33:47 Introspecting recommender systems - interpretability
00:37:14 Long video vs short video - how to adapt algorithms?
00:42:17 Being a Machine Learning Tech Lead Manager at Meta - work routine
00:45:00 Transitioning to leadership roles
00:50:55 Tips on interviewing for machine learning roles
00:57:23 Machine learning job interviews
01:02:30 Finding your interest in AI/machine learning
01:05:24 Transitioning to ML roles within the industry
01:08:36 Staying updated with research
01:12:00 Advice to young computer science students
More about Amey: https://research.facebook.com/people/dharwadker-amey-porobo/
LinkedIn: https://www.linkedin.com/in/ameydharwadker/
Also check out these talks on all available podcast platforms: https://jayshah.buzzsprout.com
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Dr. Aparna Taneja works at Google Research in India on innovative projects driving real-world social impact. Her team collaborates with an NGO called ARMMAN with the mission of improving maternal and child health outcomes in underserved communities in India. Prior to Google, she was a postdoc at Disney Research Zurich, and she holds a PhD from the Computer Vision and Geometry Group at ETH Zurich and a Bachelor's in Computer Science from the Indian Institute of Technology, Delhi.
Time stamps of the conversation
00:00:46 Introductions
00:01:20 Background and interest in AI
00:03:59 Satellite imaging and AI at Google
00:08:30 Multi-agent systems for social impact - part of AI for social good
00:10:30 Awareness of AI benefits in non-tech fields
00:13:42 Project SAHELI - improving maternal and child health using AI
00:20:05 Intuition for the methodology
00:22:07 Measuring impact on health
00:27:42 Challenges when working with real-world data
00:32:58 Problem scoping and defining research statements
00:38:16 Disconnect between tech and non-tech communities while collaborating
00:43:22 What motivates you, the theoretical or application side of research?
00:47:17 What research skills are a must when working on real-world challenges using AI?
00:50:33 Factors considered before doing a PhD
00:54:08 Significance of a Ph.D. for research roles in the industry
00:58:15 Choosing industry vs academia
01:02:38 Managing personal life with a research career
01:07:58 Advice to young students interested in AI on getting started
Learn more about Aparna here: https://research.google/people/106890/
Research: https://scholar.google.com/citations?user=XtMi1L0AAAAJ&hl=en
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Dr. Srijan Kumar is an Assistant Professor at Georgia Tech with research interests in combating misinformation and harmful content on online platforms, building AI models robust to adversarial attacks, and behavior modeling for more accurate recommender systems. Before joining Georgia Tech, he was a postdoctoral fellow at Stanford University and completed his Ph.D. in computer science at the University of Maryland. He has received multiple awards for his research work, including Forbes 30 Under 30 and being named a Kavli Fellow by the National Academy of Sciences.
Time stamps of the conversation
00:01:00 Introductions
00:01:45 Background and interest in AI
00:05:27 Current research interests
00:09:50 What is misinformation?
00:15:07 ChatGPT and misinformation
00:23:40 How can AI help detect misinformation?
00:39:15 Twitter's Birdwatch platform to detect fake/misleading news
00:56:38 Detecting fake bots on Twitter
01:03:39 Adversarial training to build robust AI models
01:05:31 Robustness vs generalizability in machine learning
01:11:40 Navigating your interest in the field of AI/machine learning
01:19:22 Doing a Ph.D. and working in industry vs academia
01:24:22 Focusing on quality of research rather than quantity
01:31:23 Advice to young people interested in AI
Dr. Kumar's homepage: https://cc.gatech.edu/~srijan/
Twitter: https://twitter.com/srijankedia
LinkedIn: https://www.linkedin.com/in/srijankr
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Emma is a final-year medical student at the University of Cambridge who is also pursuing her Ph.D. in machine learning. With her knowledge of clinical decision-making, she works on research projects that leverage machine learning techniques to improve clinical workflows. She will take up a role as an academic doctor after graduation.
Time stamps of the conversation
00:00:00 Introduction
00:02:08 From clinical science to learning AI
00:13:15 Learning the basics of artificial intelligence
00:20:12 Promise of AI in medicine
00:30:13 Do we really need interpretable AI models for clinical decision-making?
00:38:47 Using AI for more clinically useful problems
00:50:55 Facilitating interdisciplinary efforts
00:54:06 Predicting length of stay in ICUs using convolutional neural networks
01:03:04 AI for improving clinical workflows and biomarker discovery
01:07:55 Clustering disease trajectories in mechanically ventilated patients using machine learning
01:16:37 ChatGPT for medical research or clinical decision making
01:25:21 Quality over quantity of AI works published nowadays
01:31:07 Advice to researchers
Emma's homepage: https://emmarocheteau.com/
LinkedIn: https://www.linkedin.com/in/emma-rocheteau-125384132/
Also check out these talks on all available podcast platforms: https://jayshah.buzzsprout.com
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Understanding why and how transformers are so efficient in today's large language models such as #chatgpt and more.
Watch the full podcast with Dr. Surbhi Goel here: https://youtu.be/stB0cY_fffo
Find Dr. Goel on social media
Website: https://www.surbhigoel.com/
LinkedIn: https://www.linkedin.com/in/surbhi-goel-5455b25a
Twitter: https://twitter.com/surbhigoel_?lang=en
Learning Theory Alliance: https://let-all.com/index.html
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Anupam is the co-founder and President of TruEra and, prior to that, was a Professor at Carnegie Mellon University for 15 years. TruEra provides AI solutions that help enterprises use machine learning, improve and monitor model quality, and build trust. His research and other efforts focus on privacy, fairness, and building trustworthy machine learning models. He holds a Ph.D. in computer science from Stanford University and a Bachelor's degree in the same field from IIT Kharagpur in India.
Time stamps of the conversation
00:50 Introductions
01:45 Background and TruEra
05:30 Trustworthy AI
11:55 Validating large models in the real world
16:15 History of NLP and large language models
29:25 Opportunities and challenges with ChatGPT
36:52 Evaluating the reliability of ChatGPT
39:10 Existing tools that aid explainability
43:12 AI trends to look for in 2023
More about Dr. Datta
Website: https://www.andrew.cmu.edu/user/danupam/
LinkedIn: https://www.linkedin.com/in/anupamdatta
Research: https://scholar.google.com/citations?user=oK3QM1wAAAAJ&hl=en
About TruEra: https://truera.com/
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Surbhi is an Assistant Professor at the University of Pennsylvania. She received her Ph.D. in Computer Science from UT Austin, and prior to joining UPenn she was a postdoctoral researcher in the Machine Learning group at Microsoft Research NYC. Her research expertise is in theoretical computer science and machine learning, with a particular focus on developing theoretical foundations for modern deep learning paradigms. She also helps build the Learning Theory Alliance, a community that organizes several events useful to researchers and students in their careers.
Time stamps of the conversation
00:00:54 Introduction
00:01:54 Background and research interests
00:05:03 Interest in machine learning theory
00:13:02 Understanding how deep learning works
00:16:30 Transformer architecture
00:25:40 Scale of data and big models
00:31:28 Reasoning in deep learning
00:38:52 Theoretical perspective on AGI, consciousness, and sentience in AI
00:46:00 Staying updated with the latest research
00:53:38 Should one do a Ph.D.?
00:57:45 Is a Ph.D. mandatory for machine learning industry positions?
01:01:38 What makes a good research thesis?
01:05:30 Some best practices in research
01:12:20 Learning Theory Alliance group
01:14:25 Job interviews in academia for researchers
01:20:00 Advice to young researchers and students
01:25:02 Decision to become a professor
Find Dr. Goel on social media
Website: https://www.surbhigoel.com/
LinkedIn: https://www.linkedin.com/in/surbhi-goel-5455b25a
Twitter: https://twitter.com/surbhigoel_?lang=en
Learning Theory Alliance: https://let-all.com/index.html
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Sebastian Raschka is the Lead AI Educator at Grid.ai. He is the author of the book "Machine Learning with PyTorch and Scikit-Learn" as well as a few other books that cover the fundamentals of #machinelearning and #deeplearning techniques and how to implement them in Python. He is also an Assistant Professor of Statistics at the University of Wisconsin-Madison and has been actively involved in making ML more accessible to beginners through his blogs, video tutorials, tweets, and of course his books. He holds a doctorate in Computational and Quantitative Biology from Michigan State University.
Time stamps of the podcast
00:00:00 Introductions
00:02:40 Entry point in AI/ML that made you interested in it
00:05:30 How did you go about learning the basics and implementation of various methods?
00:11:45 What makes Python ideal for learning machine learning nowadays?
00:21:54 What is your book about and who is it for?
00:33:55 What goes into writing a good technical book?
00:40:50 Applying ML to toy datasets vs real-world research problems
00:47:40 Choosing between machine learning methods & deep learning methods
00:56:22 Large models vs architecture-efficient models
01:01:25 Interpretability & explainability in AI
01:08:45 Insights for people interested in machine learning research, academia, or a PhD
01:14:17 Keeping up with research in deep learning
Sebastian's homepage: https://sebastianraschka.com/
Twitter: https://mobile.twitter.com/rasbt
LinkedIn: https://www.linkedin.com/in/sebastianraschka/
His book: https://www.amazon.com/Machine-Learning-PyTorch-Scikit-Learn-scikit-learn-ebook-dp-B09NW48MR1/dp/B09NW48MR1/
Video tutorials: @SebastianRaschka
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Dr. Matthew Lungren is currently the Chief Medical Information Officer at Nuance Communications, a Microsoft company, and also holds part-time appointments at the University of California, San Francisco as an Associate Clinical Professor and as adjunct faculty at Stanford and Duke University. He is a radiologist by training and has led and contributed to multiple projects that use AI and deep learning for medical imaging and precision medicine.
Time stamps from the conversation
00:00:55 Introduction
00:01:46 Role as a Chief Medical Information Officer
00:05:25 Leading research projects in the industry
00:08:45 Is AI ready for primetime use cases in the real world?
00:12:40 Regulations on AI systems in healthcare
00:17:25 Interpretability vs a robust validation framework
00:25:22 Promising directions to mitigate data issues in medical research
00:32:24 Stable diffusion models
00:34:06 Making datasets public
00:39:00 Vision transformers for multi-modal models
00:44:35 Biomarker discovery
00:48:20 Sentiment of AI in medicine
00:53:26 Bridging the communication gap between computer scientists and medical experts
01:01:42 Advice to young researchers from medical and engineering schools
Find Dr. Lungren on social media
Twitter: https://twitter.com/mattlungrenmd
LinkedIn: https://www.linkedin.com/in/mattlungrenmd/
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Dr. Charles Fisher is the CEO and founder of Unlearn.AI, which helps speed up drug development and run more efficient clinical trials. This year they also raised a $50 million Series B. Charles holds a Ph.D. in biophysics from Harvard University; prior to founding Unlearn, he did postdoctoral work at Boston University, was a principal scientist at Pfizer, and worked as a machine learning engineer at a virtual reality company in Silicon Valley.
Time stamps of the conversation
00:00:30 Introduction
00:01:16 What got you into machine learning?
00:04:10 Learning the basics and implementation
00:07:55 Digital twins for clinical trials and drug development
00:13:06 Patient heterogeneity in medical research
00:16:05 Error quantification of models
00:17:17 ML models for drug development
00:22:45 Adoption of AI in medical applications
00:25:35 Building trust in AI systems
00:35:10 How to show AI models are safe in the real world?
00:38:38 Moving from academia to industry to entrepreneurship
00:45:08 Research projects in startups vs academia vs big companies
00:53:12 Routine as a CEO
00:57:50 Is a Ph.D. necessary for a research career in the industry?
01:01:20 Taking inspiration from biology to improve machine learning
01:05:25 Advice to young people
About Charles:
LinkedIn: https://www.linkedin.com/in/drckf/
More about Unlearn: https://www.unlearn.ai/
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Mina Ghashami is an Applied Scientist on the Alexa Video team at Amazon Science, alongside being a lecturer at Stanford University. Prior to joining Amazon, she was a Research Scientist at Visa Research, working on recommendation systems built on users' transactions and a few other projects. She completed her Ph.D. in Computer Science at the University of Utah, followed by a postdoctoral position at Rutgers University. At Amazon, she mainly focuses on video-based ranking and recommendation systems, something we talk about in detail in this conversation.
Time stamps of the conversation
00:00:50 Introductions
00:01:40 Alexa Video - ranking and recommendation research
00:05:25 Feature engineering for recommendation systems
00:08:30 Ground truth for training recommendation systems
00:12:46 What does an Applied Scientist do? (at Amazon)
00:19:17 What got you into AI? And specifically recommendation systems
00:24:30 Matrix approximation
00:27:15 Challenges in recommendation research
00:32:00 What's more interesting, the theoretical or applied side of research?
00:37:10 Overparameterization vs generalizability
00:39:55 Managing academic and industry positions at the same time
00:46:26 Should one do a Ph.D. for research roles in the industry?
00:50:00 Skills learned while pursuing a PhD
00:54:22 Deciding industry vs academia
00:56:20 Keeping up with research in deep learning
01:02:14 What makes a good research dissertation?
01:04:16 Advice to young students navigating their interest in machine learning
To learn more about Mina:
Homepage: https://mina-ghashami.github.io/
LinkedIn: https://www.linkedin.com/in/minaghashami
Research: https://scholar.google.com/citations?user=msJHsYcAAAAJ&hl=en
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/