The AI Fundamentalists

Author: Dr. Andrew Clark & Sid Mangalik


Description

A podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses. 

18 Episodes
Artificial Intelligence (AI) stands at a unique intersection of technology, ethics, and regulation. The complexities of responsible AI are brought into sharp focus in this episode featuring Anthony Habayeb, CEO and co-founder of Monitaur. As responsible AI is scrutinized for its role in profitability and innovation, Anthony and our hosts discuss the imperatives of safe and unbiased modeling systems, the role of regulations, and the importance of ethics in shaping AI. Show notes: Prologue: ...
Baseline modeling is a necessary part of model validation. In our expert opinion, it should be required before model deployment. There are many baseline modeling types, and in this episode we're discussing their use cases, strengths, and weaknesses. We're sure you'll appreciate a fresh take on how to improve your modeling practices. Show notes: Introductions and news: why reporting and visibility are a good thing for AI (0:03). Spoiler alert: providing visibility to AI bias audits does NOT mean expos...
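To make the baseline-modeling idea concrete, here is a minimal sketch (our own illustration, not code from the episode; the data and function names are hypothetical): a trivial majority-class predictor that any candidate model must beat before deployment.

```python
from collections import Counter

def majority_class_baseline(train_labels):
    """Return a predictor that always outputs the most common training label."""
    most_common = Counter(train_labels).most_common(1)[0][0]
    return lambda _features: most_common

def accuracy(predict, features, labels):
    """Fraction of examples the predictor gets right."""
    return sum(predict(x) == y for x, y in zip(features, labels)) / len(labels)

# Toy data: binary labels, two features per row (illustrative values only).
train_y = [0, 0, 0, 1, 0, 1, 0, 0]
test_X = [[1.2, 0.4], [0.3, 2.1], [2.2, 0.9]]
test_y = [0, 1, 0]

baseline = majority_class_baseline(train_y)
print(accuracy(baseline, test_X, test_y))  # the floor a deployed model must beat
```

Comparing a candidate model's test accuracy against this floor is one of the simplest baseline checks discussed in the episode.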
In this episode, we explore information theory and the not-so-obvious shortcomings of its popular metrics for model monitoring, and where non-parametric statistical methods can serve as the better option. Introduction and latest news (0:03): Gary Marcus has written an article questioning the hype around generative AI, suggesting it may not be as transformative as previously thought. This is in contrast to announcements out of the NVIDIA conference during the same week. Information theory and its ...
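As a rough illustration of the contrast the episode draws (our sketch, not from the show): information-theoretic metrics like KL divergence require binning and blow up on empty bins, while a non-parametric statistic such as the two-sample Kolmogorov-Smirnov distance compares empirical CDFs directly.

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) over discrete bins; undefined when q has a zero bin where p doesn't."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = sorted(a), sorted(b)
    cdf = lambda s, x: sum(v <= x for v in s) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in sorted(set(a) | set(b)))

baseline = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5]  # illustrative "training" scores
drifted  = [0.4, 0.5, 0.5, 0.6, 0.7, 0.9]  # illustrative "production" scores
print(ks_statistic(baseline, drifted))
```

The KS statistic needs no binning choices, which is one reason non-parametric methods can be the safer monitoring option.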
In this episode, the hosts focus on the basics of anomaly detection in machine learning and AI systems, including its importance and how it is implemented. They also touch on the topic of large language models, the (in)accuracy of data scraping, and the importance of high-quality data when employing various detection methods. You'll even gain some techniques you can use right away to improve your training data and your models. Intro and discussion (0:03): Questions about Information Theory from...
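One of the simplest anomaly-detection techniques in the spirit of this episode is a z-score filter (a minimal sketch of our own, with made-up sensor readings, not an implementation from the show): flag points that sit many standard deviations from the mean.

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag points whose distance from the mean exceeds `threshold` standard deviations."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [x for x in values if abs(x - mean) / stdev > threshold]

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 42.0]  # one obvious outlier
print(zscore_anomalies(readings, threshold=2.0))  # → [42.0]
```

Note the weakness the hosts would likely point out: the outlier itself inflates the mean and standard deviation, so extreme points can mask each other; robust statistics (median, MAD) handle this better.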
We're taking a slight detour from modeling best practices to explore questions about AI and consciousness. With special guest Michael Herman, co-founder of Monitaur and TestDriven.io, the team discusses different philosophical perspectives on consciousness and how these apply to AI. They also discuss the potential dangers of AI in its current state and why starting fresh instead of iterating can make all the difference in achieving characteristics of AI that might resemble consciousness....
Data scientists, researchers, engineers, marketers, and risk leaders find themselves at a crossroads: expand their skills or risk obsolescence. The hosts discuss how a growth mindset and "the fundamentals" of AI can help. Our episode shines a light on this vital shift, equipping listeners with strategies to elevate their skills and integrate multidisciplinary knowledge. We share stories from the trenches on how each role affects robust AI solutions that adhere to ethical standards, and how e...
Get ready for 2024 and a brand new episode! We discuss non-parametric statistics in data analysis and AI modeling. Learn more about applications in user research methods, as well as the importance of key assumptions in statistics and data modeling that must not be overlooked. After you listen to the episode, be sure to check out the supplemental material in Exploring non-parametric statistics. Welcome to 2024 (0:03): AI, privacy, and marketing in the tech industry; OpenAI's GPT store launch. (...
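A classic non-parametric method of the kind this episode covers is the Mann-Whitney U test, which compares two samples by rank rather than assuming a distribution. This is a bare-bones sketch of the U statistic only (our illustration with toy numbers; a real analysis would also compute a p-value):

```python
def mann_whitney_u(sample_a, sample_b):
    """U statistic for sample_a: count of (a, b) pairs where a beats b (ties count half)."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

control = [1.2, 1.4, 1.1, 1.3]  # hypothetical metric under variant A
treated = [1.8, 1.9, 1.7, 1.6]  # hypothetical metric under variant B
print(mann_whitney_u(treated, control))  # → 16.0, the maximum for 4 x 4 pairs
```

Because it relies only on orderings, the test makes none of the normality assumptions that parametric tests like the t-test require, which is exactly the trade-off the episode explores.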
It's the end of 2023 and our first season. The hosts reflect on what's happened with the fundamentals of AI regulation, data privacy, and ethics. Spoiler alert: a lot! And we're excited to share our outlook for AI in 2024. AI regulation and its impact in 2024: hosts reflect on AI regulation discussions from their first 10 episodes, discussing what went well and what didn't, and its potential impact on innovation (2:36). AI innovation, regulation, and best practices (7:05). AI, privacy, and data security i...
Joshua Pyle joins us in a discussion about managing bias in the actuarial sciences. Together with Andrew's and Sid's perspectives from the economic and data science fields, they deliver an interdisciplinary conversation about bias that you'll only find here. OpenAI news plus new developments in language models (0:03): the hosts discuss the aftermath of OpenAI and Sam Altman's return as CEO; tension between OpenAI's board and researchers on the push for slow, responsible AI develop...
Episode 9. Continuing our series run about model validation. In this episode, the hosts focus on aspects of performance: why we need to do statistics correctly and not use metrics without understanding how they work, so that models are evaluated in a meaningful way. AI regulations, red team testing, and physics-based modeling (0:03): the hosts discuss the Biden administration's executive order on AI and its implications for model validation and performance. Evaluating machine learning mode...
Episode 8. This is the first in a series of episodes dedicated to model validation. Today, we focus on model robustness and resilience. From complex financial systems to why your gym might be overcrowded at New Year's, you've been directly affected by these aspects of model validation. AI hype and consumer trust (0:03): an FTC article highlights consumer concerns about AI's impact on lives and businesses (Oct 3, FTC). Increased public awareness of AI and the masses of data needed to train it le...
Episode 7. To use or not to use? That is the question about digital twins that the fundamentalists explore. Many solutions continue to be proposed for making AI systems safer, but can digital twins really deliver for AI what we know they can do for physical systems? Tune in and find out. Show notes: Digital twins by definition (0:03): digital twins are one-to-one digital models of real-life products, systems, or processes, used for simulations, testing, monitoring, maintenance, or practice de...
Episode 6. What does systems engineering have to do with AI fundamentals? In this episode, the team discusses what data and computer science as professions can learn from systems engineering, and how the methods and mindset of the latter can boost the quality of AI-based innovations. Show notes: News and episode commentary (0:03): ChatGPT usage is down for the second straight month. The importance of understanding the data and how it affects the quality of synthetic data for non-tabular use cas...
Synthetic Data in AI

2023-08-08 (31:46)

Episode 5. This episode about synthetic data is very real. The fundamentalists uncover the pros and cons of synthetic data, as well as reliable use cases and the best techniques for safe and effective use in AI. When even SAG-AFTRA and OpenAI make synthetic data a household word, you know this is an episode you can't miss. Show notes: What is synthetic data? (0:03): The definition is not a succinct one-liner, which is one of the key issues with assessing synthetic data generation. Using general informati...
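To illustrate the naive end of the synthetic-data spectrum the episode weighs (a deliberately simplistic sketch of our own, with made-up numbers, not a technique endorsed by the show): fit a Gaussian to a real column and sample from it. It preserves mean and variance but discards correlations, tails, and structure, which is exactly the kind of weakness worth validating for.

```python
import random
import statistics

def synthesize_column(real_values, n, seed=0):
    """Naive synthetic data: sample from a Gaussian fitted to one real column.

    Preserves only mean and variance -- cross-column correlations and
    heavy tails in the real data are lost.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    mu = statistics.fmean(real_values)
    sigma = statistics.stdev(real_values)
    return [rng.gauss(mu, sigma) for _ in range(n)]

real = [52.1, 48.9, 50.4, 51.2, 49.6, 50.0]  # illustrative measurements
fake = synthesize_column(real, n=1000)
print(round(statistics.fmean(fake), 1))  # close to the real column's mean
```

Serious synthetic-data generation models joint distributions, not one marginal at a time; this sketch only shows why marginal fidelity alone is not enough.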
Episode 4. The AI Fundamentalists welcome Christoph Molnar to discuss the characteristics of a modeling mindset in a rapidly innovating world. He is the author of multiple data science books, including Modeling Mindsets, Interpretable Machine Learning, and his latest book, Introduction to Conformal Prediction with Python. We hope you enjoy this enlightening discussion from a model builder's point of view. To keep in touch with Christoph's work, subscribe to his newsletter Mindful Modeler - ...
Episode 3. Get ready because we're bringing stats back! An AI model can only learn from the data it has seen, and business problems can't be solved without the right data. The Fundamentalists break down the basics of data in AI, from collection to regulation to bias to quality. Introduction to this episode: Why data matters. How do big tech's LLM models stack up to the proposed EU AI Act? How major models such as OpenAI's and Bard stack up against current regulations. Stanford HAI - Do Fo...
Truth-based AI: Large language models (LLMs) and knowledge graphs - The AI Fundamentalists, Episode 2. Show notes: What's NOT new and what is new in the world of LLMs (3:10): getting back to the basics of modeling best practices and rigor. What is AI, and subsequently LLM, regulation going to look like for tech organizations? (5:55) Recommendations for reading on the topic. Andrew talks about regulation, monitoring, assurance, and alarms. What does it mean to regulate generative AI models? (7:51) Concerns...
The AI Fundamentalists - Ep1 Summary. Welcome to the first episode of the AI Fundamentalists podcast (0:03). Introducing the hosts, Sid and Andrew (1:23): Andrew Clark, co-founder and CTO of Monitaur. Introduction of the podcast topic: What is the proper rigorous process for using AI in manufacturing? (3:44) Large language models and AI. Rigorous systems for manufacturing and innovation. Predictive maintenance as an example of manufacturing (6:28). Predi...