
AI Confidential

Author: Opaque Systems


Description

AI Confidential is a podcast about AI and how to benefit from it responsibly. We speak with leaders from Microsoft, Accenture, NVIDIA, Google Cloud — and more — about how confidential AI is reshaping business.
18 Episodes
Are LLMs Dead?

2025-09-23 · 44:51

OpenAI's GPT-4, released in March 2023, was noticeably more intelligent than GPT-3. It reasoned better, answered questions more accurately, and could handle more complex tasks. But LLM advancements have since slowed, and critics have called OpenAI's latest release, GPT-5, "overhyped and underwhelming." In this episode, hosts Aaron Fulkerson and Mark Hinkle discuss:
- Whether LLMs are dead
- Why 95% of AI projects are failing
- How enterprises can harness AI without sacrificing data security

If you want to learn more about AI and how to use it responsibly, visit opaque.co
In the age of agentic AI, Kellie Romack knows that deploying secure agents needs to be a baseline for enterprises, not an aspiration. In this episode, host Aaron Fulkerson unpacks how confidential AI agents work and talks to Kellie Romack, CDIO at ServiceNow, about how her company is implementing confidential AI agents across its platform. Since partnering with OPAQUE, Microsoft Azure, and NVIDIA to deploy confidential AI agents, ServiceNow has seen average response times from its sales commission help desk drop from four days to just eight seconds. Kellie and Aaron also discuss:
- How ServiceNow, a defining enterprise software company, is approaching AI governance
- Advice for enterprise leaders building out agentic workflows
- Predictions on how companies can gain the most value from AI

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co

Special thanks to Now Productions for filming this interview at ServiceNow's headquarters in Santa Clara.
Marco Palladino knows that the internet is changing quickly. As the Co-founder and CTO at Kong, an API management platform, he's on the forefront of integrating APIs with AI — and is wrestling with the questions around security and governance that this shift presents. In this episode, hosts Aaron Fulkerson and Mark Hinkle talk to Palladino about:
- LLM gateways and why we need them
- How AI is impacting API development
- Why AI governance is — and will continue to be — a challenge
- How enterprises can prepare for the agentic web

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co
Last month at Opaque's annual Confidential Computing Summit™, hosts Aaron Fulkerson and Mark Hinkle interviewed several of our incredible speakers and sponsors about the agentic web, data security, and AI. This episode dives into a few of those conversations.

First, Jason Clinton — CISO at Anthropic — talks about the future of agents, MCP, and Anthropic's latest safety upgrade, ASL-3, which is designed to prevent bad actors from misusing its models.

Next, Daniel Rohrer — NVIDIA's VP of Software Product Security, Architecture & Research — shares how NVIDIA is handling AI security and scaling compute power and trust.

Finally, Daniel J. Beutel — Co-Founder and CEO of Flower Labs — explains how his company is using federated learning to keep data secure in AI workloads.

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co
Last month at Opaque's annual Confidential Computing Summit™, hosts Aaron Fulkerson and Mark Hinkle interviewed several of our incredible speakers about the future of confidential AI, genAI, and agents. This episode dives into a few of those conversations.

First, Mark Russinovich — CTO, Deputy CISO, and Technical Fellow at Microsoft Azure — talks to us about recent developments in confidential AI from his research team, including a new framework for thinking about the various security levels available for data using confidential computing.

Next, James Kaplan — CTO at McKinsey Technology and Partner at McKinsey — shares takeaways from his research into how large enterprises are using AI and explains how genAI is allowing enterprises to mine troves of unstructured data.

Finally, Vinay Pillai — Chief Architect and VP of Engineering, Digital Technology, and Technology Platform at ServiceNow — explains how ServiceNow is using confidential agents powered by Opaque, Microsoft Azure, and NVIDIA to cut its sales commission desk response times from four days to eight seconds.

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co
Vijoy Pandey, Head of Outshift by Cisco, understands that an internet populated by agents requires an open, interoperable framework for agent-to-agent communication and a seismic shift in governance. That's why Outshift by Cisco, in partnership with LangChain and Galileo, created AGNTCY — an open source collective for inter-agent collaboration. And we're excited to announce that Opaque is a member! In this episode, Pandey talks to hosts Aaron Fulkerson and Mark Hinkle about:
- His aims for AGNTCY
- How we create an open-source, collaborative, interoperable, and secure internet of agents

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co
What would founder, systems architect, and deeptech strategist John Willis like you to understand? The history of AI and the risks associated with data exhaust. Willis recently published Rebels of Reason, an AI history told through the stories of lesser-known technologists who built the foundation for modern-day AI. And one of the biggest threats he's flagging for enterprise AI users is data exhaust: the byproduct information generated while using digital systems. From manufacturing process details to AI model configurations, this data trail is growing fast and can be collected and exploited. Also in this episode, you'll hear about:
- Why legacy systems and tech debt still haunt AI adoption
- How the NORMAL stack embeds AI governance into every layer of the enterprise
- Why RAG isn't dead

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co
As GenAI tools have become more intelligent and robust, the reliability of their output has decreased. Enter Galileo: an AI reliability platform designed for GenAI applications. Atin Sanyal — the Co-founder and CTO of Galileo — has a background building machine learning tech at Uber and Apple. One innovative technique that sets Galileo apart is ChainPoll — their hallucination detection methodology that uses consensus scoring and prompts the LLM to outline its step-by-step reasoning process. In this episode, hosts Aaron Fulkerson and Mark Hinkle talk to Atin about:
- What evaluation agents are, and why they get smarter over time
- How Galileo helps enterprises evolve their own AI quality metrics
- Why data quality and confidential computing will become increasingly important to enterprises building AI systems

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co
João Moura built the agentic AI open source framework CrewAI with a central vision: to make agent creation feel borderline magical. Inspired by the Ruby on Rails philosophy of "convention over configuration," CrewAI is designed for speed, clarity, and ease of use. What we love about João's approach to agentic AI is that he's not just disrupting the market. He's shifting the way enterprise leaders think about long-term workflows and processes, too. In this episode, hosts Aaron Fulkerson and Mark Hinkle talk to João about:
- Why interoperability across your stack is essential for scaling AI
- The importance of building secure AI systems
- How managers are uniquely equipped to build effective agents
- The dangers of "agent washing" and how to see through the hype

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co
James Kaplan has been consulting enterprise clients on how to adopt and extract value from cutting-edge technologies for over 25 years. His dual role as Partner at McKinsey and CTO at McKinsey Technology gives him a unique vantage point into how CIOs and CTOs can harness AI to drive automation, increase productivity, and enhance data security. In this episode, host Aaron Fulkerson talks to James about:
- How AI is changing the enterprise technology landscape
- What enterprises should be thinking about as they're adopting and building AI systems
- How to avoid the "IT doom loop"

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co
Hosts Aaron Fulkerson and Mark Hinkle sit down with Sriram Raghavan, Vice President of IBM Research AI, at the AllThingsOpen.ai conference. As an experienced industry expert, Sriram leads a global team of over 750 research scientists and engineers focused on advancing AI. Sriram and IBM believe that the future of AI is open source. That's why IBM recently contributed three AI projects — Docling, Data Prep Kit, and BeeAI — to the Linux Foundation. And while there is a lot of hype around LLMs right now, IBM is leaning into smaller "fit-for-purpose" models that give developers choice. In the episode, Sriram also digs into:
- IBM's approach to open-source models
- What innovations he sees coming for enterprise AI
- How his team is approaching responsible, secure AI
From cloud computing to infrastructure-as-a-service, Reuven Cohen (AKA rUv) has been on the cutting edge of every major technology supercycle — and he's now one of the most influential people in the agentic AI space. With the help of agents, rUv produced 10 million lines of usable code last year, which he estimates is the equivalent of 33,000 years of output from one human. rUv's approach proves one of two things: he's either an outlier or a leading indicator of what's to come. In this episode, we talk to rUv about:
- His approach to training autonomous agents
- The rapid advancements in agentic code development
- Why LLMs are vulnerable to data poisoning
- Why Python is bad for agentic development
What does it take to ensure AI is safe, ethical, and resilient? In this season finale, Anthropic's CISO joins Aaron to discuss the critical intersection of innovation, data privacy, and sovereignty, offering an optimistic perspective on what lies ahead for the future of confidential AI.   
AI is full of potential, but what does it really take to make it work in complex environments? Will Grannis, CTO and VP at Google Cloud, shares his perspective on bridging AI innovation with practical application, including insights into the real challenges and rewards of implementing AI in today's top enterprises.
Can AI be both innovative and private? NVIDIA VP of Software Security Daniel Rohrer and Opaque Co-Founder and President Raluca Ada Popa discuss the advancements in privacy-preserving technology that enable organizations to push AI forward without compromising data sovereignty.
Generative AI is transforming industries—but how do we ensure it's safe? Teresa Tung, Senior Managing Director & Global Data Capability Lead at Accenture, dives into what companies need to make generative AI both impactful and secure, exploring the frameworks that bring safety to scale.
How do you build AI applications that are not only powerful but also trustworthy and easy to adopt? Join Mark Papermaster, CTO & EVP at AMD, and Mark Russinovich, CTO & Deputy CISO at Microsoft Azure, as they discuss the essential role of confidential computing in making AI seamless, secure, and accessible to businesses everywhere.
Trailer

2024-11-13 · 01:17

Welcome to AI Confidential, the podcast that explores the future of AI and data sovereignty.

In today's rapidly evolving tech landscape, AI is at the heart of transformation across industries. From finance to high tech, leaders are racing to harness its power while navigating complex challenges like data sovereignty, regulatory compliance, and privacy. But what if there was a way to solve these problems without sacrificing innovation?

On AI Confidential, join host Aaron Fulkerson as he dives deep with visionaries from AMD, Accenture, Google Cloud, Microsoft Azure, NVIDIA, Opaque Systems, and more to uncover how confidential computing is reshaping what's possible for AI. Together, they explore pressing questions, share groundbreaking use cases, and take bold steps in predicting AI's future.