AI Explained
12 Episodes
In this episode of AI Explained, we are joined by Navrina Singh, Founder and CEO at Credo AI.
We will discuss the broad need for AI governance beyond regulated industries, the core principles of responsible AI, and how AI governance can accelerate business innovation. The conversation also covers the challenges companies face when implementing responsible AI practices and dives into the latest regulations, including the EU AI Act and state-specific laws in the U.S.
In this episode of AI Explained, we are joined by Jonathan Cohen, VP of Applied Research at NVIDIA.
We will explore the intricacies of NVIDIA's NeMo platform and its components, such as NeMo Guardrails and NIM microservices. Jonathan explains how these tools help teams deploy and manage AI models with a focus on observability, security, and efficiency. They also explore topics such as the evolving role of AI agents, the importance of guardrails in maintaining responsible AI, and real-world examples of successful AI deployments at enterprises like Amdocs. Listeners will gain insights into NVIDIA's AI strategy and the practical aspects of deploying large language models across industries.
On this episode, we’re joined by Kevin Schawinski, CEO and Co-Founder at Modulos AG.
The EU AI Act was passed to redefine the landscape for AI development and deployment in Europe. But what does it really mean for enterprises, AI innovators, and industry leaders?
Schawinski will share actionable insights to help organizations stay ahead of the EU AI Act and discuss the risk implications of meeting transparency requirements while advancing responsible AI practices.
In this episode, we’re joined by Robert Nishihara, Co-founder and CEO at Anyscale.
Enterprises are harnessing the full potential of GenAI across their operations to enhance productivity, drive innovation, and gain a competitive edge. However, scaling production GenAI deployments can be challenging because it requires evolving AI infrastructure, approaches, and processes that can support advanced GenAI use cases.
Nishihara will discuss reliability challenges, building the right AI infrastructure, and implementing the latest practices in productionizing GenAI at scale.
In this episode, we’re joined by Pradeep Javangula, Chief AI Officer at RagaAI.
Deploying LLM applications for real-world use cases requires a comprehensive workflow to ensure LLM applications generate high-quality and accurate content. Testing, fixing issues, and measuring impact are critical steps of the workflow to help LLM applications deliver value.
Javangula will discuss strategies and practical approaches organizations can follow to maintain high-performing, correct, and safe LLM applications.
In this episode, we’re joined by Amal Iyer, Sr. Staff AI Scientist at Fiddler AI.
Large-scale AI models trained on internet-scale datasets have ushered in a new era of technological capabilities, some of which now match or even exceed human ability. However, this progress underscores the importance of aligning AI with human values to ensure its safe and beneficial integration into society. In this episode, we will provide an overview of the alignment problem and highlight promising areas of research spanning scalable oversight, robustness, and interpretability.
On this episode, we’re joined by Kathy Baxter, Principal Architect of Responsible AI & Tech at Salesforce.
Generative AI has become widely popular, with organizations finding ways to use it to drive innovation and business growth. Adoption, however, remains low due to ethical implications and unintended consequences that can negatively impact organizations and their consumers.
Baxter will discuss ethical AI practices organizations can follow to minimize potential harms and maximize the social benefits of AI.
On this episode, we’re joined by Patrick Hall, Co-Founder of BNH.AI.
We will delve into critical aspects of AI, such as managing model risk, generating adverse action notices, addressing algorithmic discrimination, ensuring data privacy, fortifying ML security, and implementing advanced model governance and explainability.
On this episode, we’re joined by Chaoyu Yang, Founder and CEO at BentoML.
AI-forward enterprises across industries are building generative AI applications to transform their businesses. While AI teams need to weigh several factors, ranging from ethical and social considerations to overall AI strategy, technical challenges remain in deploying these applications to production.
Yang will explore key aspects of generative AI application development and deployment.
On this episode, we’re joined by Jure Leskovec, Stanford professor and co-founder at Kumo.ai.
Graph neural networks (GNNs) are gaining popularity in the AI community, helping ML teams build advanced AI applications that provide deep insights to tackle real-world problems. Leskovec, whose work sits at the intersection of graph neural networks, knowledge graphs, and generative AI, will explore how organizations can incorporate GNNs into their generative AI initiatives.
On this episode, we’re joined by Parul Pandey, Principal Data Scientist at H2O.ai and co-author of Machine Learning for High-Risk Applications.
Although AI is being widely adopted, it poses several adversarial risks that can be harmful to organizations and users. Listen to this episode to learn how data scientists and ML practitioners can improve AI outcomes with proper model risk management techniques.
On this episode, we’re joined by Peter Norvig, a Distinguished Education Fellow at the Stanford Institute for Human-Centered AI and co-author of popular books on AI, including Artificial Intelligence: A Modern Approach and more recently, Data Science in Context.
AI has the potential to improve humanity’s quality of life and day-to-day decisions. However, these advancements come with their own challenges that can cause harm. Listen to this episode to learn considerations and best practices organizations can take to preserve human control and ensure transparent and equitable AI.