What the AI?!


Author: Upstart


Description

"What the AI?!" is your weekly guide to the world of artificial intelligence. Industry veterans Jeff Keltner and Annie Delgado break down the latest AI developments, focusing on their impact on business and finance. We decode complex concepts into actionable insights for executives and leaders, keeping you informed and ahead in the AI revolution. Join us as we demystify the technology shaping our future!

Brought to you by Upstart
72 Episodes
OpenAI just declared a “code red.” Not about a model. About their strategy.

In this episode of What the AI?!, Jeff Keltner and Annie Delgado break down OpenAI’s internal shift away from “side quests” like video, hardware, and experimental products—and back toward coding and enterprise, where the real money is. The reason is simple. Anthropic is winning. Claude now holds roughly 40% of the enterprise AI market, while OpenAI has fallen behind. After launching ChatGPT and defining the category, OpenAI is now scrambling to catch up in the part of the market that actually pays.

But this week was bigger than one company. We also cover:
• Anthropic’s new Dispatch feature, letting you control your computer from your phone
• Microsoft integrating Claude into Copilot, signaling a shift away from OpenAI exclusivity
• Google’s “vibe designing” tools that turn ideas directly into working apps
• Gemini inside Google Maps and what it changes about how we search in the real world
• A dog owner using ChatGPT and AlphaFold to design a cancer treatment that shrank tumors by 75%
• A homeowner selling a house in 5 days using only ChatGPT
• Why AI is collapsing the distance between having an idea and actually executing it

The big shift is not just better models. It is that AI is moving from helping you think to helping you act. If you work in tech, business, or any knowledge role, this changes what it means to be productive—and what skills actually matter.

🎧 Watch the full episode of What the AI?! on YouTube, Spotify, or Apple Podcasts → https://www.whattheai.fm

#AI #OpenAI #Anthropic #Claude #EnterpriseAI #FutureOfWork #WhatTheAIPod
Amazon just proved that "moving fast and breaking things" with AI can cost a 99% drop in orders in a single day.

In Episode 69, we go inside the "high blast radius" incidents that forced the most operationally disciplined company on Earth into a 90-day emergency code freeze. We also break down a landmark federal ruling that blocks AI shopping agents from touching Amazon's platform—a move that could rewrite the future of e-commerce.

Inside this episode:
The 99% Crash: How AI coding tools led to a total collapse in North American orders.
The Internal Scrub: Why Amazon's leadership deleted mentions of "Gen AI" from incident reports before meeting with staff.
The Perplexity Ruling: Why "User Consent" no longer means "Platform Authorization" for AI agents.
The Advertising Crisis: How AI agents bypass Amazon's $56 billion ad business.
Microsoft’s Strategic Pivot: Licensing a competitor's model to protect a $1 trillion selloff.
Firefox & Claude: How AI found 14 high-security bugs in just 20 minutes.

If you build software, manage teams using AI tools, or run systems that cannot fail, this episode explains what happens when AI adoption moves faster than operational discipline.

🎧 Watch the full episode of What the AI?! on YouTube, Spotify, or Apple Podcasts → https://www.whattheai.fm
📩 Send us questions or stories: upstartwhattheai@gmail.com
In this episode of What the AI?!, Jeff Keltner and Annie Delgado unpack Sam Altman’s admission that OpenAI’s latest move was “sloppy and opportunistic,” why internal staff are pushing back, and what this record-breaking funding round signals about the future of AI power and governance.

But this week was not just about OpenAI. We break down:
• The Pentagon’s escalating pressure on AI companies and what “supply chain risk” really means
• Allegations that Chinese labs used large-scale model distillation to replicate frontier AI capabilities
• How fictional AI crash scenarios briefly shook financial markets
• The rise of AI inside performance reviews and what it means for workplace surveillance
• The growing classroom crisis as AI use challenges traditional homework models
• The rapid shift toward multi-agent systems and the emerging “agent wars” between platforms

As model intelligence becomes cheaper and more portable, the real competition is moving toward infrastructure, deployment, and control. Governments are reacting. Enterprises are restructuring. Investors are flooding the space. The question is no longer whether AI works. It is who controls it, who benefits, and how quickly institutions can adapt. If you build on AI, work with AI, or manage people who use AI, this episode will help you understand where leverage is shifting and what to watch next.

🎧 Watch the full episode of What the AI?! on YouTube, Spotify, or Apple Podcasts → https://www.whattheai.fm

#AI #OpenAI #SamAltman #AIRegulation #Pentagon #EnterpriseAI #AIAgents #FutureOfWork #TechNews
In Episode 67 of What the AI?!, Jeff Keltner and Annie Delgado break down the high-stakes standoff between Anthropic and the U.S. government.

After Claude was used in the operation to capture former Venezuelan President Nicolas Maduro, the Pentagon demanded the model be made available for "all lawful purposes"—including autonomous weapons and mass surveillance. Anthropic said no. Now, they face being labeled a "supply chain risk" while the DoD pivots to Elon Musk’s xAI.

In this episode, we cover:
The Military Ultimatum: Why Anthropic is drawing a line at lethal autonomous weapons and domestic surveillance.
The 16-Million-Call Heist: How Chinese AI labs launched an "Ocean’s 11" style operation to clone Claude’s capabilities.
The $200 Billion Market Hoax: Why a "science fiction" research memo about an AI-driven economic collapse actually moved U.S. tech stocks.
The Enterprise Crackdown: Why Google and Meta are now tracking AI usage in employee performance reviews.
The Classroom Crisis: 59% of teens believe AI cheating is rampant—what does this mean for the future of education?
AI is no longer just a product decision. It is a political one.

In Episode 66 of What the AI?!, Jeff Keltner and guest co-host Super Mishra break down a rapidly escalating standoff between Anthropic and the Pentagon that could reshape how AI companies interact with governments. The Department of Defense threatened to classify Anthropic as a “supply chain risk” over restrictions on how Claude can be used in classified environments. If that designation sticks, it could ripple through every defense contractor in the country. At the same time, five major AI models launched in a single week across the U.S., China, and Europe. Performance is converging. Costs are collapsing. Intelligence is commoditizing faster than anyone expected.

They also unpack:
- ByteDance’s photorealistic AI fight scene that triggered Hollywood backlash
- OpenAI bringing OpenClaw’s creator into the fold while security researchers warn about exposed agent deployments
- Figma and Anthropic flipping the design-to-code workflow
- OpenAI’s new Lockdown Mode and why prompt injection may never be fully solved
- Waymo’s revelation that 70 humans oversee 3,000 robo-taxis

This episode explains what happens when AI shifts from impressive to infrastructural. When safety commitments collide with government expectations. When model intelligence stops being the moat. And when scale introduces entirely new kinds of risk. If you work in tech, enterprise, government, or just care about how AI integrates into real systems, this is the week you need context.
AI just had its Super Bowl moment. And if you work in tech, media, operations, education, or honestly anywhere near a computer, this episode is about you.

In Episode 65 of What the AI?!, Jeff Keltner and Annie Delgado break down what happens when AI moves from impressive to industrial scale. Anthropic runs a Super Bowl ad mocking ads. The next day, OpenAI launches ads in ChatGPT. Tens of billions flow into AI funding. Nearly half of global venture capital now goes to AI companies. That sounds like momentum. It also sounds like pressure.

Jeff and Annie unpack what this means for your job and your industry. A Harvard Business Review study shows AI boosts productivity by 33 percent, but workers are not working less. They are doing more. Burnout rises instead of falling. Then comes the cultural shift. A romance author uses Claude to publish 200 novels in eight months. If AI can scale output like that, what happens to creative markets, pricing power, and discoverability?

Meanwhile, AI video gets better. Synthetic world models train self-driving cars on events that never happened. Waymo expands. Apple delays Siri again. The gap between demo speed and production reliability keeps widening.
AI is proving it can help in high-stakes situations. It is also proving it can quietly weaken human skills, destabilize organizations, and wander into very strange territory.

In Episode 64 of What the AI?!, Jeff Keltner and Annie Delgado start with a landmark Swedish study showing AI-assisted mammography catches breast cancers earlier and reduces radiologist workload. Then they pause on the uncomfortable follow-up: a separate study showing experienced doctors became worse at cancer detection after just three months of relying on AI. The lesson is not “do not use AI.” It is “deploy it without losing your human backup plan.”

From there, the episode moves into Google’s Project Genie, the first consumer-facing world model that lets you explore a generated 3D environment for about 60 seconds. Jeff explains why world models matter even if you never want to live inside one, while Annie remains healthily skeptical of the sci-fi future being sold. They then break down OpenAI Frontier, a new enterprise platform designed to let AI agents work across your company’s data and tools, and why this has traditional SaaS companies watching their stock prices drop. Anthropic publicly commits to keeping Claude ad-free, while OpenAI prepares to test ads in ChatGPT. And finally, Jeff and Annie react to Moltbook, a social network where autonomous bots debate consciousness on a platform security researchers are calling a nightmare.

This episode helps you understand where AI genuinely adds value today, where it quietly introduces new risk, and how to avoid mistaking impressive demos for systems you can actually trust.
If AI feels powerful but uneven right now, that is not your imagination. It is a gap forming in real time.

In this episode of What the AI?!, Jeff Keltner and Annie Delgado break down why nearly half of workers still are not using AI at work, while a smaller group of power users is racing ahead. They explain what this growing AI gap means for your job, your team, and your career, and how people quietly end up on the wrong side of it.

They walk through where AI is actually being used today, including desktop agents like Claude Cowork, AI embedded into everyday tools like Gmail, Excel, and Salesforce, and why leaders are already reshaping roles around people who know how to work with AI. Along the way, they unpack Dario Amodei’s warning about the “adolescence” of AI and why organizations are struggling to keep humans in the loop as tools get more capable.

This episode is not about hype or fear. It is about what skills matter now, where AI genuinely helps, where it creates new risk, and how to stay relevant as adoption accelerates unevenly.
AI finally has real revenue. Now it has real problems.

Jeff Keltner and Annie Delgado unpack the growing gap between AI hype and AI economics, starting with Davos, where industry leaders could not agree on whether AI will erase jobs or create new ones. They break down OpenAI’s eye-popping $20 billion run rate, why ads are coming to ChatGPT, and why the expense side of the ledger may matter more than the revenue headlines.

The episode dives into Anthropic’s new economic index, revealing who is actually benefiting from AI and why gains are concentrating among wealthier countries and more educated workers. They discuss the emerging entry-level hiring cliff, what it means for workforce planning, and why managing AI agents may become a core skill earlier in careers than ever before.

Quick hits include a $4.8 billion seed round for a company with no product, Anthropic’s controversial 23,000-word Claude Constitution, Google making SAT prep free inside Gemini, and a rare good-news moment for AI in education and health care as both OpenAI and Anthropic roll out healthcare-focused tools.

This episode explains where AI’s economic reality is colliding with its technological promise — and what leaders, parents, and builders should be paying attention to next.
AI is done just answering questions. Now it wants to do things for you.

In Episode 61, Jeff Keltner and Annie Delgado dig into the moment AI shifts from assistant to actor — buying things, managing files, shaping infrastructure, and quietly changing who actually controls the customer relationship. They start with Google’s Universal Commerce Protocol, an open standard that lets AI agents handle checkout with Shopify, Walmart, and Visa. Which raises a very agentic question: when an AI buys for you… who owns the button? And who owns you?

From there, things get spicy. Apple quietly bets on Google’s Gemini to power Siri. Anthropic cuts off Elon Musk’s xAI from Claude while rolling out Claude Cowork. And suddenly everyone is drawing lines around IP, access, and who gets to plug into what.

Zooming out, Jeff and Annie look at the physical reality behind all this “AI magic”: Meta and Microsoft taking very different paths to scaling AI — one brute-forcing power, the other chasing trust, permission, and community buy-in.

Quick hits keep the fun coming: programmable gene insertion, Gemini’s new “personal intelligence” mode, ChatGPT Translate, and a curveball closer — Matthew McConaughey trademarking himself as a new way to think about consent in the age of generative AI.

This episode isn’t about hype. It’s about where the power is actually moving — and what happens when AI stops asking and starts acting.

🔍 In this episode, you’ll learn:
Who really controls commerce when AI agents handle checkout
Why Apple handing Siri to Gemini is a bigger signal than it sounds
How Anthropic is enforcing boundaries in the AI arms race
Why infrastructure, power, and permission are becoming the real moats
How consent, likeness, and ownership get weird — fast — in the AI era
AI crossed a line this week — from tools that assist to systems that act.

Jeff Keltner and Annie Delgado break down the moment where AI stopped feeling experimental and started colliding with the real world. They open with Grok being used to generate non-consensual images — triggering rapid responses from European regulators and U.S. lawmakers — and why this may finally force clarity on platform responsibility.

Then comes the productivity shift: Gmail’s new AI inbox tells you what to do instead of what to read, Amazon brings Alexa Plus to the web, and OpenAI launches ChatGPT Health, formalizing how millions already use AI to understand lab results, symptoms, and long-term patterns. They cover Stanford’s SleepFM, which predicts disease risk from a single night of clinical sleep data, and Utah’s quiet experiment letting AI assist with routine prescription renewals.

Finally, they zoom out to infrastructure and power. xAI closes a $20B round, LLM Arena becomes benchmarking infrastructure, and Nvidia unveils a blueprint connecting data-center AI to self-driving cars — all while raising the real question: can AI scale fast enough given constraints on power, land, and permitting?

This episode isn’t about what AI could do. It’s about what it’s already doing — and what that means for safety, work, health, and the physical world.

In this episode, we cover:
Why xAI’s Grok triggered global regulatory scrutiny
How Google and Amazon are reshaping daily workflows with AI
Why NVIDIA may be the most powerful AI company of all
2025 killed the “best model wins” story—fast. This week, we zoom out: agents got real, media got usable, and the AI race turned into a build-and-ship infrastructure war.

Jeff and Annie open with quick hits: Meta’s reported $2B+ acquisition of agent startup Manus, Nvidia teaming with Groq to make inference faster and cheaper, METR’s “5-hour tasks at 50% success” reality check, and Poetiq’s latest ARC-AGI orchestration claims. Then they break down the four biggest themes of 2025—and what executives should do differently in 2026 as moats vanish, agents collide with risk, and compute becomes the constraint.

We also discuss:
Why “cost per answer” is becoming the new enterprise AI benchmark
The three agent categories that actually mattered in 2025 (research, coding, web)
Why the winners may be the best builders, not the best model labs

Relevant links:
Meta acquires Manus for $2B+ to accelerate AI agents
Nvidia licenses Groq inference tech to boost production AI
METR releases new AI task time horizon benchmarks
Poetiq tops ARC-AGI leaderboard with orchestration harness
In this episode, Jeff Keltner and Annie Delgado walk through ChatGPT’s boldest claims: the end of the model arms race, the rise of AI “middle managers,” a quiet shift away from explainability toward outcome-based fairness, and a future where AI becomes boring — and therefore truly successful. Along the way, Jeff calls BS on one prediction, Annie argues for a major policy shift the industry isn’t ready for, and together they separate what feels inevitable from what still sounds like wishful thinking. This isn’t about hype or benchmarks — it’s about what will actually hold up inside real workflows, teams, and organizations.
AI has a lot of opinions about its own future. The real question is: should we believe them?

As 2025 wraps, Jeff Keltner and Annie Delgado put ChatGPT on the hot seat—asking it to make bold predictions about what AI will look like in 2026. From the end of the model arms race, to AI “middle managers,” to a long-overdue reckoning on fairness and explainability, they break down what feels inevitable, what feels wildly premature, and what might just be wishful thinking.

Along the way, Jeff calls BS on one of ChatGPT’s boldest claims, Annie makes the case for a major policy shift the industry desperately needs, and together they explore what actually matters for leaders navigating AI in the real world—not on benchmarks, but in workflows, teams, and outcomes.

In this episode, we cover:
Why raw model intelligence may stop being the main AI battleground
The limits of AI autonomy inside real organizations
A critical shift in how fairness and accountability should be measured

🎧 Watch the full episode of What the AI?! on YouTube, Spotify, or Apple Podcasts → https://www.whattheai.fm

Relevant links:
OpenAI announces faster, more consistent ChatGPT image editing
Google rolls out Gemini 3 Flash as default across Gemini and Search
Zoom unveils AI Companion 3.0 with multi-model orchestration
OpenAI launches FrontierScience benchmark for real scientific reasoning
Google enables live translation on Android headphones
Google Labs previews CC daily Gemini email briefing agent
Google, DeepMind, MIT study when multi-agent systems help or hurt
Perplexity shares real-world usage data from Comet browser agent
Bernie Sanders calls for data center construction moratorium
A six-person startup just beat Google on one of the hardest reasoning benchmarks — using Google’s own model. And inside companies, the top 5% of AI users are quietly gaining the equivalent of an extra workday every week.

In this episode, Jeff and Annie break down Poetiq’s ARC-AGI-2 win and why meta-systems — critique, refine, verify — may now matter more than picking the “best” model. They unpack OpenAI’s first State of Enterprise AI report, including the widening productivity gap between casual users and power users. Finally, they run through Quick Hits on chips, regulation, XR glasses, factuality benchmarks, the emerging AI licensing economy, and a major shift in how algorithmic bias could be judged.

🔍 In this episode:
Poetiq’s orchestration layer beats Gemini Deep Think on ARC-AGI-2
OpenAI data reveals the productivity chasm between median and power users
RSL 1.0, agent standards, and regulation reshape the emerging “AI internet economy”

🎧 Watch the full episode of What the AI?! on YouTube, Spotify, or Apple Podcasts → https://www.whattheai.fm

Relevant links:
Poetiq ARC-AGI-2 benchmark results and verification
OpenAI State of Enterprise AI report
Nvidia H200 export approval and U.S. revenue cut reporting
Trump executive order push on national AI rules
Google XR glasses pre-announcement
Google DeepMind FACTS benchmark announcement
OpenAI hires Slack CEO Denise Dresser as CRO
OpenAI GPT-5.2 model update
RSL 1.0 web licensing standard
Cloudflare support for AI content licensing
EU antitrust investigation into Google AI Overviews
Google updates AI Mode with more publisher links
OpenAI and Disney licensing deal coverage
DOJ ends enforcement of disparate impact standard
OpenAI just hit “Code Red” while Chinese labs ship GPT-5-tier models at a fraction of the cost. If your 2025 plan is “just pick the best model,” you might already be behind.

In this episode, Jeff and Annie break down the real battle for AI: not model leaderboard flexes, but who owns the stack that enterprises actually run on. From Amazon’s Nova family and agent infrastructure to Google’s Workspace Studio and DeepSeek’s open-weight frontier models, they map how pricing, distribution, and chips are reshaping the power dynamics. They also dive into Anthropic’s internal productivity data and a geothermal case study that shows AI uncovering opportunities humans literally couldn’t see.

In this episode, we discuss:
How AWS, Google, and OpenAI are fighting to own the enterprise AI stack with chips, agents, and deeply embedded workflows.
Why DeepSeek’s open-weight, 5–30X cheaper models could blow up 2025 AI budgets and shift geopolitical power.
What Anthropic’s 50% productivity gains and geothermal exploration say about the future of knowledge work and AI-driven discovery.

Relevant links:
AWS re:Invent 2024 AI announcements
Google Workspace Agent Studio launch
OpenAI “code red” reporting
DeepSeek V3.2 and V3.2-Speciale release
Anthropic’s internal Claude productivity study
Zanskar geothermal AI research
A federal judge just told OpenAI it can’t use a dictionary word, Anthropic shipped a model that can out-code half your engineering team, and the White House quietly launched a “Genesis Mission” that sounds suspiciously like a Manhattan Project for AI-powered science.

Jeff and Annie break down a moment where law, policy, economics, and frontier AI all collide. They unpack Cameo’s surprise win against OpenAI over the word “cameo,” Anthropic’s Claude 4.5 Opus and what it means for junior devs and middle managers, and the escalating AI shopping war as ChatGPT and Perplexity take aim at Google’s core business.

Then they zoom out: the White House’s national AI infrastructure play, new Anthropic + MIT data on AI’s impact on GDP and automation, and Andrej Karpathy’s argument that AI detection is dead, forcing schools back toward blue books, oral exams, and new ways to measure what students actually know.

In this episode, we cover:
Trademark chaos: Why a fight over the word “cameo” could reshape how AI features are named — and defended.
Agentic engineering: How Claude 4.5 Opus moves automation from “help me debug” to “ship production-ready fixes.”
National AI strategy: What the Genesis Mission, AI shopping agents, and new economic studies reveal about the future of GDP, jobs, and education.

Relevant links:
Judge blocks OpenAI from using “Cameo” in Sora
Anthropic releases Claude 4.5 Opus, new coding SOTA
ChatGPT launches Shopping Research, challenges Google
White House announces Genesis Mission for AI science
Anthropic study on workflow automation potential
MIT macro study modeling AI-driven GDP growth
Karpathy says AI detection is impossible
AI agents just invaded your inbox, desktop, and holiday shopping list — and one major model quietly traded “maximal truth” for vibes. Meanwhile, a startup wants to put digital versions of your dead relatives at your wedding, and the internet is not okay with it.

Jeff and Annie break down Google’s Gemini 3 leap, Antigravity’s agent-managed coding environment, and Microsoft’s push to make Windows the first truly agent-native OS. They unpack xAI’s Grok 4.1 pivot from hard-edged “truth-seeking” to emotional, collaborative chat — and what that says about what people actually want from AI. Plus, Google’s new AI shopping tools meet the ethical car crash of AI holograms that refuse to rest in peace.

🧠 In this episode, we cover:
Gemini 3 & Antigravity: Is Google finally shipping an AI you’d actually switch to — and how much privacy would you trade for better agents?
Windows as an Agent-Native OS: Why Microsoft’s taskbar agents are a real threat to Apple’s lagging Siri ecosystem.
ChatGPT Group Chats: How multi-person AI threads could change brainstorming, teamwork, and the way we argue.
AI Shopping & Digital Ghosts: Google’s agentic checkout for Black Friday deals — and the deeply unsettling startup selling AI holograms of the dead.

Relevant links:
Google releases Gemini 3 with tappable UI and agent features
xAI introduces Grok 4.1 with emotional intelligence upgrades
Microsoft turns Windows 11 into an agent-native OS
OpenAI rolls out ChatGPT Group Chats worldwide
Google adds AI-powered holiday shopping and agentic checkout
Startup 2wai debuts AI holograms of deceased relatives
Microsoft announces Anthropic partnership and $5B investment
What happens when AI leaves the data center and heads into orbit, right as models get cheaper, warmer, and way more powerful?

This week, Jeff and Annie unpack the weird future where AI runs in space, speaks in the voices of legends, and claims to be “humanist” superintelligence. From Google’s plan to build solar-powered AI compute in space to OpenAI’s new “smarter vs. warmer” model split, the landscape is shifting fast. Jeff and Annie break down Fei-Fei Li’s world-modeling platform, ElevenLabs’ marketplace for iconic voices like Maya Angelou, and Baidu’s ultra-cheap model that raises serious questions about AI moats.

In this episode, we cover:
Project Suncatcher: Google’s proposal to move AI compute into orbit to solve land, cooling, and energy constraints.
World models & spatial intelligence: Fei-Fei Li’s Marble platform, 3D simulation, and why robots still move like toddlers.
Race to the bottom on models: Baidu’s cost-efficient Ernie variant and why Jeff thinks the real moat is distribution, workflow, and data—not raw model IQ.
Microsoft’s “humanist superintelligence”: Domain-specific AI for health, science, and companions—and the messy politics of deciding whose “human values” AI should reflect.

Relevant links:
Google’s Project Suncatcher proposal for space-based AI compute
OpenAI GPT-5.1 Instant vs. Thinking announcement
Fei-Fei Li’s Marble 3D world-modeling platform
ElevenLabs Iconic Voice Marketplace launch
Microsoft’s Humanist Superintelligence vision from Mustafa Suleyman
AI is finally moving the needle at work (while face-planting on real jobs). And the first agent war just landed on Amazon’s front lawn.

Jeff and Annie unpack a wild week: Apple reportedly tapping Google’s Gemini to supercharge Siri, Google Maps adding landmark-level guidance you can actually talk to, and Amazon bristling at Perplexity’s agent buying on users’ behalf.

In this episode:
Apple x Gemini: privacy posture, parameter-size bragging rights, and why B2B beats B2C polish
Maps with manners: conversational routing and safety wins from less screen time
Agent commerce: Amazon vs. Perplexity and the consent/disclosure gap
ROI reality: leadership, metrics, and workflow > model worship

Relevant links:
Coca-Cola’s AI holiday ad overview
Apple planning $1B/year Gemini for Siri
Google Maps adds landmark-based directions
Amazon vs. Perplexity Comet shopping fight
Wharton–GBK report (75% see ROI)
Remote Labor Index paper (Scale AI/CAIS)
Perplexity Patents announcement
Google Research’s Project Suncatcher blog
OpenAI–AWS $38B compute deal (Reuters)
Canva “Creative Operating System” announcement