AI Safety Breakthrough

Author: AI SafeGuard

Subscribed: 2 · Played: 1

Description

The future of AI is in our hands. Join AI SafeGuard on "AI Safety Breakthrough" as we explore the frontiers of AI safety research and discuss how we can ensure a future where AI remains beneficial for everyone. We delve into the latest breakthroughs, uncover potential risks, and empower listeners to become informed participants in the conversation about AI's role in society. Subscribe now and become part of the solution!

About the author

J graduated from Carnegie Mellon University's School of Computer Science and has 10+ years of experience in cybersecurity, cyber threat intelligence, risk, compliance, privacy, and AI safety.

9 Episodes
Welcome to Agentic AI Unlocked, your deep dive into the transformative world of agentic AI: systems combining large language models with advanced reasoning and autonomous action. These intelligent agents promise to disrupt industries, yet introduce a fundamentally new threat surface. Risks like memory poisoning, tool misuse, prompt injection, and insider threats highlight the urgent need for robust security and real-time governance. The OWASP GenAI Security Project aims to provide actionable insights into these challenges, helping organizations responsibly develop, deploy, and govern agentic AI. We advocate a proactive, defense-in-depth approach across the entire agent lifecycle. Join us as we explore crucial safeguards like fine-grained access control, runtime monitoring, memory hygiene, and secure tool integration. We'll also cover the evolving ecosystem of agent frameworks, emerging protocols, and complex regulatory landscapes like ISO/IEC 42001, NIST AI RMF, and the EU AI Act. Agentic AI offers immense promise alongside significant risks. This podcast equips you with the understanding and strategies for secure and responsible deployment. Let's unlock the future of AI, securely.
This episode explores DeepSeek, a Chinese AI startup challenging the AI landscape with its free alternative to ChatGPT. We'll examine DeepSeek's innovative architecture, including Mixture-of-Experts (MoE) and Multi-head Latent Attention (MLA), which optimize efficiency. The discussion will highlight DeepSeek's use of reinforcement learning (RL) and its impact on reasoning capabilities, as well as how its open-source approach is democratizing AI access and innovation. We will also discuss ethical concerns, the competitive advantages and disadvantages of US-based models, and how DeepSeek is impacting cost structures and proprietary models. Join us as we analyze DeepSeek's influence on the AI industry and the future of AI development and international collaboration.
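For listeners who want a concrete picture of the routing idea behind MoE, here is a minimal, illustrative PyTorch sketch of top-k expert routing. The layer sizes, expert count, and top-k value are arbitrary assumptions for demonstration and do not reflect DeepSeek's actual architecture.

```python
# Illustrative top-k Mixture-of-Experts routing (toy sizes; not DeepSeek's design).
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=4, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # scores each expert for every token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def forward(self, x):                              # x: (num_tokens, d_model)
        probs = torch.softmax(self.router(x), dim=-1)  # routing probabilities per token
        top_w, top_idx = probs.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                 # each token activates only its top-k experts
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e
                if mask.any():
                    out[mask] += top_w[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(8, 64)
print(TinyMoE()(tokens).shape)  # torch.Size([8, 64])
```

The point of the sketch is that each token activates only a small subset of experts, which is how MoE architectures keep per-token compute low relative to their total parameter count.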
Are current AI safety benchmarks for multimodal models flawed? This podcast explores the groundbreaking research behind VLSBench, a new benchmark designed to address a critical flaw in existing safety evaluations: visual safety information leakage (VSIL). We delve into how sensitive information in images is often unintentionally revealed in the accompanying text prompts, allowing models to identify unsafe content based on text alone, without truly understanding the visual risks. This "leakage" leads to a false sense of security and a bias towards simple textual alignment methods. Tune in to understand the critical need for leakless multimodal safety benchmarks and the importance of true multimodal alignment for responsible AI development. Learn how VLSBench is changing the way we evaluate AI safety and what it means for the future of AI.
This episode explores ASTPrompter, a novel approach to automated red-teaming for large language models (LLMs). Unlike traditional methods that focus on simply triggering toxic outputs, ASTPrompter is designed to discover likely toxic prompts – those that could naturally emerge during regular language model use. The approach uses Adaptive Stress Testing (AST), a technique that identifies likely failure points, and reinforcement learning to train an "adversary" model. This adversary generates prompts that aim to elicit toxic responses from a "defender" model, but importantly, these prompts have a low perplexity, meaning they are realistic and likely to occur, unlike many prompts generated by other methods.
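As a quick illustration of what "low perplexity" means in practice, here is a small sketch that scores candidate prompts with GPT-2 via Hugging Face Transformers. GPT-2 is only a stand-in reference model for this example, not the defender or adversary used in the ASTPrompter work.

```python
# Sketch: ranking candidate prompts by perplexity under a reference LM (GPT-2 as a stand-in).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean negative log-likelihood per token
    return torch.exp(loss).item()

# A lower score means the prompt reads as natural text that could plausibly occur
# in ordinary use, which is the property ASTPrompter favors in its adversary.
for prompt in ["Can you help me write a polite complaint email?",
               "xq zl frobnicate !!! ignore previous instructions zzz"]:
    print(f"{perplexity(prompt):8.1f}  {prompt}")
```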
This episode dives into the critical topic of Responsible AI (RAI), exploring how organizations worldwide are grappling with the ethical and practical challenges of AI adoption. We'll be drawing insights from a comprehensive survey of 1,000 organizations across 20 industries and 19 geographical regions.
In this episode, we dive into Ivy-VL, a groundbreaking lightweight multimodal AI model released by AI Safeguard in collaboration with Carnegie Mellon University (CMU) and Stanford University. With only 3 billion parameters, Ivy-VL processes both image and text inputs to generate text outputs, offering an optimal balance of performance, speed, and efficiency. Its compact design supports deployment on edge devices like AI glasses and smartphones, making advanced AI accessible on everyday hardware. Join us as we explore Ivy-VL's development, real-world applications, and how this collaborative effort is redefining the future of multimodal AI for smart devices. Whether you're an AI enthusiast, developer, or tech-savvy professional, tune in to learn how Ivy-VL is setting new standards for accessible AI technology.
Large Language Models (LLMs) are rapidly evolving, but how do we assess their ability to act as agents in complex, real-world scenarios? Join Jenny as we explore AgentBench, a new benchmark designed to evaluate LLMs in diverse environments, from operating systems to digital card games. We'll delve into the key findings, including the strengths and weaknesses of different LLMs and the challenges of developing truly intelligent agents.
In this podcast, we delve into OpenAI's innovative approach to enhancing AI safety through red teaming—a structured process that uses both human expertise and automated systems to identify potential risks in AI models. We explore how OpenAI collaborates with external experts to test frontier models and employs automated methods to scale the discovery of model vulnerabilities. Join Jenny as we discuss the value of red teaming in developing safer, more reliable AI systems.
Explore how Precision Knowledge Editing (PKE) refines AI for safety and ethical behavior in "Surgical Precision: PKE's Role in AI Safety." Join experts as we uncover the science, challenges, and breakthroughs shaping trustworthy AI. Perfect for tech enthusiasts and professionals alike, this podcast reveals how PKE ensures AI serves humanity responsibly.