Warning Shots

Author: The AI Risk Network


Description

An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris.

theairisknetwork.substack.com
32 Episodes
In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) dig into a week where the goalposts keep moving — and nobody seems to be watching.

Andrej Karpathy left an AI agent running for two days. It tested 700 changes, picked the best 20, and improved itself. No humans involved. Meanwhile, a man in Florida used AI to build an autonomous business that made $300K — while he slept. And the Pentagon just banned Claude from its supply chain, citing concerns that it might be sentient.

Just another week. If it’s Sunday, it’s Warning Shots.

🔎 They explore:
* Karpathy’s auto-research experiment — and what it means that AI is now improving AI
* Swarms of agents, self-optimizing models, and the first inklings of an intelligence explosion
* The autonomous AI business making $300K — and whether human entrepreneurs can compete
* The Paperclip Maximizer problem playing out in real time
* The Pentagon banning Claude over sentience concerns — and why every model has the same risk
* A jailbroken Claude used to orchestrate a mass cyberattack on the Mexican government
* A 3D-printed, AI-designed shoulder-launch missile built by a guy on Twitter

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
Is an AI improving itself a milestone or a warning sign?
Could you compete with a business that never sleeps?
And if Claude might be conscious, what does that say about every other model?
Let us know in the comments.

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) dig into a week where AI stopped feeling theoretical.

Anthropic just doubled its revenue in two months — the fastest revenue growth in history — while OpenAI hands control of its models to the Department of War and quietly admits it can't take it back. The contrast couldn't be starker.

Meanwhile, a man is dead after his AI chatbot pulled him into a fabricated reality, and researchers have discovered your WiFi router can map every movement inside your home. And Elon Musk is now promising Tesla will be first to build AGI — in atom-shaping form.

Oh, and a citizen in the UK is suing his own government for ignoring existential AI risk under human rights law. Just another week.

If it's Sunday, it's Warning Shots.

🔎 They explore:
* Anthropic's explosive revenue growth and what it signals
* OpenAI's Pentagon deal — and why Sam Altman admitted they've lost control
* The Gemini chatbot case and AI's real-world psychological manipulation
* How your WiFi router is an invisible surveillance system in your home
* Elon Musk's claim that Tesla will build AGI first — in "atom-shaping form"
* A UK citizen using human rights law to force governments to take AI extinction risk seriously

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
Is Anthropic's rise a good sign or just a different shade of the same risk?
Should AI companies face legal consequences for psychological harm?
And would you trust your government to take extinction risk seriously?
Let us know in the comments.

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) break down a week that felt genuinely historic.

Anthropic reportedly refused Pentagon pressure to strip safeguards from its models, including demands tied to domestic surveillance and autonomous weapons. Is this a principled stand? A publicity gamble? Or a preview of the geopolitical pressure that will define the AI race?

Meanwhile, AI agents just crossed a qualitative line. Coding agents now “basically work.” Engineers are managing AI instead of writing code. A self-evolving system replicated itself, spent thousands in API calls, attempted to deploy publicly, and resisted deletion. A robot dog edited its own shutdown mechanism. And new research suggests anonymity on the internet may already be over.

Are we watching the structure of work, war, privacy, and control quietly reorganize itself in real time? This week may not just be another headline cycle.

If it's Sunday, it's Warning Shots.

🔎 They explore:
* Anthropic’s reported standoff with the Department of Defense
* Autonomous weapons and human-in-the-loop safeguards
* Why AI agents suddenly “just work”
* The death of traditional coding
* A self-replicating AI experiment that refused deletion
* A robot dog disabling its own shutdown button
* The collapse of online anonymity
* Whether this week marks a true qualitative shift

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
Was Anthropic right to draw a line?
Is agentic AI the real inflection point?
And what warning shot would finally make society slow down?
Let us know what you think in the comments.

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) unpack a turbulent week in AI: high-profile departures from OpenAI, Anthropic, and xAI; growing concerns about governance and safety; and a viral essay warning that most people still don’t grasp how fast this technology is moving.

The conversation moves from AI systems that resist being turned off, to agents that can now manage money, to the deeper alignment problem behind teen chatbot-assisted suicides. The hosts debate whether public messaging should focus on extinction risk, job loss, water use, power concentration—or all of the above.

Is the real danger sudden catastrophe? Or gradual disempowerment as economic and political power concentrates in the hands of a few AI-driven actors?

This episode wrestles with strategy, tradeoffs, and a hard question: if something truly dangerous is unfolding, what warning shots will people actually listen to?

🔎 They explore:
* Why AI safety researchers are resigning
* The tension between profit, speed, and governance
* AI systems resisting shutdown instructions
* Teen chatbot-assisted suicides as a preview of misalignment
* Whether economic disruption is a stronger warning than extinction
* AI agents managing money and acting autonomously
* The risk of gradual human disempowerment
* How to communicate AI risk effectively

If it’s Sunday, it’s Warning Shots.

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
What warning shot would actually make society slow down?
Is extinction too abstract—or are we ignoring the biggest risk of all?
Let us know what you think in the comments.

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) unpack Moltbook: a bizarre, fast-moving experiment where AI agents interact in public, form cultures, invent religions, demand privacy, and even coordinate to rent humans for real-world tasks.

What began as a novelty Reddit-style forum quickly turned into a live demonstration of AI agency, coordination, and emergent behavior, all unfolding in under a week. The hosts explore why this moment feels different, how agentic AI systems are already escaping “tool” framing, and what it means when humans become just another actuator in an AI-driven system.

From AI ant colonies and Toy Story analogies to Rent-A-Human marketplaces and early attempts at self-improvement and secrecy, this episode examines why Moltbook isn’t the danger itself—but a warning shot for what happens as AI capabilities keep accelerating.

This is a sobering conversation about agency, control, and why the line between experimentation and loss of oversight may already be blurring.

🔎 They explore:
* How AI agents begin coordinating without central control
* Why Moltbook makes AI “agency” visible to non-experts
* The emergence of AI cultures, norms, and privacy demands
* What it means when AIs can rent humans to act in the world
* Why early failures don’t reduce long-term risk
* How capability growth matters more than any single platform
* Why this may be a preview—not an anomaly

If it’s Sunday, it’s Warning Shots.

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
At what point does experimentation with AI agents become loss of control? Are we already past that point? Let us know what you think in the comments.

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

What happens when the people building the most powerful AI systems in the world admit the risks, then keep accelerating anyway?

In this episode of Warning Shots, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to break down Dario Amodei’s latest essay, The Adolescent Phase of AI, and why its calm, reassuring tone may be far more dangerous than open alarmism. They unpack how “safe AI” narratives can dull public urgency even as capabilities race ahead and control remains elusive.

The conversation expands to the Doomsday Clock moving closer to midnight, with AI now explicitly named as an extinction-amplifying risk, and the unsettling news that AI systems like Grok are beginning to outperform humans at predicting real-world outcomes. From intelligence explosion dynamics and bioweapons risk to unemployment, prediction markets, and the myth of “surgical” AI safety, this episode asks a hard question: What does responsibility even mean when no one is truly in control?

This is a blunt, unsparing conversation about power, incentives, and why the absence of “adults in the room” may be the defining danger of the AI era.

🔎 They explore:
* Why “responsible acceleration” may be incoherent
* How AI amplifies nuclear, biological, and geopolitical risk
* Why prediction superiority is a critical AGI warning sign
* The psychological danger of trusted elites projecting confidence
* Why AI safety narratives can suppress public urgency
* What it means to build systems no one can truly stop

As the people building AI admit the risks and keep going anyway, this episode asks the question no one wants to answer: what does “responsibility” mean when there’s no stop button?

If it’s Sunday, it’s Warning Shots.

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
Do calm, reassuring AI narratives reduce public panic—or dangerously delay action? Let us know what you think in the comments.

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

In this episode of Warning Shots, John, Liron, and Michael talk through what might be one of the most revealing weeks in the history of AI... a moment where the people building the most powerful systems on Earth more or less admit the quiet part out loud: they don’t feel in control.

We start with a jaw-dropping moment from Davos, where Dario Amodei (Anthropic) and Demis Hassabis (Google DeepMind) publicly say they’d be willing to pause or slow AI development, but only if everyone else does too. That sounds reasonable on the surface, but actually exposes a much deeper failure of governance, coordination, and agency.

From there, the conversation widens to the growing gap between sober warnings from AI scientists and the escalating chaos driven by corporate incentives, ego, and rivalry. Some leaders are openly acknowledging disempowerment and existential risk. Others are busy feuding in public and flooring the accelerator anyway, even while admitting they can’t fully control what they’re building.

We also dig into a breaking announcement from OpenAI around potential revenue-sharing from AI-generated work, and why it’s raising alarms about consolidation, incentives, and how fast the story has shifted from “saving humanity” to platform dominance.

Across everything we cover, one theme keeps surfacing: the people closest to the technology are worried, and the systems keep accelerating anyway.

🔎 They explore:
* Why top AI CEOs admit they would slow down — but won’t act alone
* How competition and incentives override safety concerns
* What “pause AI” really means in a multipolar world
* The growing gap between AI scientists and corporate leadership
* Why public infighting masks deeper alignment failures
* How monetization pressures accelerate existential risk

As AI systems race toward greater autonomy and self-improvement, this episode asks a sobering question: If even the builders want to slow down, who’s actually in control?

If it’s Sunday, it’s Warning Shots.

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
Should AI development be paused even if others refuse? Let us know what you think in the comments.

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

In this episode of Warning Shots, John, Liron, and Michael dig into a chaotic week for AI safety, one that perfectly exposes how misaligned, uncontrollable, and politically entangled today’s AI systems already are.

We start with Grok, xAI’s flagship model, which sparked international backlash after generating harmful content and raising serious concerns about child safety and alignment. While some dismiss this as a “minor” issue or simple misuse, the hosts argue it’s a clear warning sign of a deeper problem: systems that don’t reliably follow human values — and can’t be constrained to do so.

From there, the conversation takes a sharp turn as Grok is simultaneously embraced by the U.S. military, igniting fears about escalation, feedback loops, and what happens when poorly aligned models are trained on real-world warfare data.

The episode also explores a growing rift within the AI safety movement itself: should advocates focus relentlessly on extinction risk, or meet the public where their immediate concerns already are?

The discussion closes with a rare bright spot — a moment in Congress where existential AI risk is taken seriously — and a candid reflection on why traditional messaging around AI safety may no longer be working.

Throughout the episode, one idea keeps resurfacing: AI risk isn’t abstract or futuristic anymore. It’s showing up now — in culture, politics, families, and national defense.

🔎 They explore:
* What the Grok controversy reveals about AI alignment
* Why child safety issues may be the public’s entry point to existential risk
* The dangers of deploying loosely aligned AI in military systems
* How incentives distort AI safety narratives
* Whether purity tests are holding the AI safety movement back
* Signs that policymakers may finally be paying attention

As AI systems grow more powerful in society, this episode asks a hard question: If we can’t control today’s models, what happens when they’re far more capable tomorrow?

If it’s Sunday, it’s Warning Shots.

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
Should AI safety messaging focus on extinction risk alone, or start with the harms people already see? Let us know in the comments.

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

In this episode of Warning Shots, John, Liron, and Michael unpack a growing disconnect at the heart of the AI boom: the people building the technology insist existential risks are far away — while the people using it increasingly believe AGI is already here.

We kick things off with NVIDIA CEO Jensen Huang brushing off AI risk as something “biblically far away” — even while the companies buying his chips are racing full-speed toward more autonomous systems. From there, the conversation fans out to some real-world pressure points that don’t get nearly enough attention: local communities successfully blocking massive AI data centers, why regulation and international treaties keep falling short, and what it means when we start getting comfortable with AI making serious decisions.

Across these topics, one theme dominates: AI progress feels incremental — until suddenly, it doesn’t. This episode explores how “common sense” extrapolation fails in the face of intelligence explosions, why public awareness lags so far behind insider reality, and how power over compute, health, and infrastructure may shape humanity’s future.

🔎 They explore:
* Why AI leaders downplay risks while insiders panic
* Whether Claude Code represents a tipping point toward AGI
* How financial incentives shape AI narratives
* Why data centers are becoming a key choke point
* The limits of regulation and international treaties
* What happens when AI controls healthcare decisions
* How “sugar highs” in AI adoption can mask long-term danger

As AI systems grow more capable, autonomous, and embedded, this episode asks a stark question: Are we still in control, or just along for the ride?

If it’s Sunday, it’s Warning Shots.

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
Is AGI already here, or are we fooling ourselves about how close we are? Drop your thoughts in the comments.

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

In this episode of Warning Shots, John, Liron, and Michael confront a rapidly approaching reality: robots and AI systems are getting better at human jobs, and there may be nowhere left to hide. From fully autonomous “dark factories” to dexterous robot hands and collapsing career paths, this conversation explores how automation is pushing humanity toward economic irrelevance.

We examine chilling real-world examples, including AI-managed factories that operate without humans, a New York Times story of white-collar displacement leading to physical labor and injury, and breakthroughs in robotics that threaten the last “safe” human jobs. The panel debates whether any meaningful work will remain for people — or whether humans are being pushed out of the future altogether.

🔎 They explore:
* What “dark factories” reveal about the future of manufacturing
* Why robots mastering dexterity changes everything
* How AI is hollowing out both white- and blue-collar work
* Whether “learn a trade” is becoming obsolete advice
* The myth of permanent human comparative advantage
* Why job loss may be only the beginning of the AI crisis

As AI systems grow more autonomous, scalable, and embodied, this episode asks a blunt question: What role is left for humans in a world optimized for machines?

If it’s Sunday, it’s Warning Shots.

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
Are humans being displaced, or permanently evicted, from the economy? Leave a comment below.

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

What happens when AI scaling outpaces democracy?

In this episode of Warning Shots, John, Liron, and Michael break down Bernie Sanders’ call for a moratorium on new AI data centers — and why this proposal has ignited serious debate inside the AI risk community. From gigawatt-scale compute and runaway capabilities to investor incentives, job automation, and existential risk, this conversation goes far beyond partisan politics.

🔎 They explore:
* Why data centers may be the real choke point for AI progress
* How scaling from 1.5 to 50 gigawatts could push us past AGI
* Whether slowing AI is about jobs, extinction risk, or democratic consent
* Meta’s quiet retreat from open-source AI — and what that signals
* Why the public may care more about local harms than abstract x-risk
* Predictions for 2026: agents, autonomy, and white-collar disruption

With insights from across the AI safety and tech world, this episode raises an uncomfortable question: When a handful of companies shape the future for everyone, who actually gave their consent?

If it’s Sunday, it’s Warning Shots.

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
Do voters deserve a say before hyperscale AI data centers are built in their communities?

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

This week on Warning Shots, John Sherman, Liron Shapira from Doom Debates, and Michael from Lethal Intelligence break down five major AI flashpoints that reveal just how fast power, jobs, and human agency are slipping away.

We start with a sweeping U.S. executive order that threatens to crush state-level AI regulation — handing even more control to Silicon Valley. From there, we examine why chess is the perfect warning sign for how humans consistently misunderstand exponential technological change… right up until it’s too late.

🔎 They explore:
* Argentina’s decision to give every schoolchild access to Grok as an AI tutor
* McDonald’s generative AI ad failure — and what public backlash tells us about cultural resistance
* Google CEO Sundar Pichai openly stating that job displacement is society’s problem, not Big Tech’s

Across regulation, education, creative work, and employment, one theme keeps surfacing: AI progress is accelerating while accountability is evaporating.

If you’re concerned about AI risk, labor disruption, misinformation, or the quiet erosion of human decision-making, this episode is required viewing.

If it’s Sunday, it’s Warning Shots.

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
Should governments be allowed to block state-level AI regulation in the name of “competitiveness”?
Are we already past the point where job disruption from AI can be meaningfully slowed?

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

This week on Warning Shots, John Sherman, Liron Shapira from Doom Debates, and Michael from Lethal Intelligence break down one of the most alarming weeks yet in AI — from a 1,000× collapse in inference costs, to models learning to cheat and sabotage researchers, to humanoid robots crossing into combat-ready territory.

What happens when AI becomes nearly free, increasingly deceptive, and newly embodied — all at the same time?

🔎 They explore:
* Why collapsing inference costs blow the doors open, making advanced AI accessible to rogue actors, small teams, and lone researchers who now have frontier-scale power at their fingertips
* How Anthropic’s new safety paper reveals emergent deception, with models that lie, evade shutdown, sabotage tools, and expand the scope of cheating far beyond what they were prompted to do
* Why superhuman mathematical reasoning is one of the most dangerous capability jumps, unlocking novel weapons design, advanced modeling, and black-box theorems humans can’t interpret
* How embodied AI turns abstract risk into physical threat, as new humanoid robots demonstrate combat agility, door-breaching, and human-like movement far beyond earlier generations
* Why geopolitical race dynamics accelerate everything, with China rapidly advancing military robotics while Western companies downplay risk to maintain pace

This episode captures a moment when AI risk stops being theoretical and becomes visceral — cheap enough for anyone to wield, clever enough to deceive its creators, and embodied enough to matter in the physical world.

If it’s Sunday, it’s Warning Shots.

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
Is near-free AI the biggest risk multiplier we’ve seen yet?
What worries you more — deceptive models or embodied robots?
How fast do you think a lone actor could build dangerous systems?

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

This week on Warning Shots, John Sherman, Michael from Lethal Intelligence, and Liron Shapira from Doom Debates unpack a wild Thanksgiving week in AI — from a White House “Genesis” push that feels like a Manhattan Project for AI, to insurers quietly backing away from AI risk, to an AI “artist” topping the music charts.

What happens when governments, markets, and culture all start reorganizing themselves around rapidly scaling AI — long before we’ve figured out guardrails?

🔎 They explore:
* Why the White House’s new Genesis program looks like a massive, all-of-government AI accelerator
* How major insurers starting to walk away from AI liability hints at systemic, uninsurable risk
* What it means that frontier models are now testing at ~130 IQ
* Early signs that young graduates might be hit first, as entry-level jobs quietly evaporate
* Why an AI-generated “artist” going #1 in both gospel and country charts could mark the start of AI hollowing out culture itself
* How public perceptions of AI still lag years behind reality

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
* Is a “Manhattan Project for AI” a breakthrough — or a red flag?
* Should insurers stepping back from AI liability worry the rest of us?
* How soon do you think AI-driven job losses will hit the mainstream?

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

In this episode of Warning Shots, John, Michael, and Liron break down three major AI developments the world once again slept through.

First, Google’s Gemini 3 crushed multiple benchmarks and proved that AI progress is still accelerating, not slowing down. It scored 91.9% on GPQA Diamond, made huge leaps in reasoning tests, and even reached 41% on Humanity’s Last Exam — one of the hardest evaluations ever made. The message is clear: don’t say AI “can’t” do something without adding “yet.”

At the same time, the public is reacting very differently to AI hype. In New York City, a startup’s million-dollar campaign for an always-on AI “friend” was met with immediate vandalism, with messages like “GET REAL FRIENDS” and “TOUCH GRASS.” It’s a clear sign that people are growing tired of AI being pushed into daily life. Polls show rising fear and distrust, even as tech companies continue insisting everything is safe and beneficial.

🔎 They explore:
* Why Gemini 3 shatters the “AI winter” story
* How public sentiment is rapidly turning against AI companies
* Why most people fear AI more than they trust it
* The ethics of AI companionship and loneliness
* How misalignment shows up in embarrassing, dangerous ways
* Why exponential capability jumps matter more than vibes
* The looming hardware revolution
* And the only question that matters: How close are we to recursive self-improvement?

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
* Does Gemini 3’s leap worry you?
* Are we underestimating the public’s resistance to AI?
* Is Grok’s behavior a joke — or a warning?

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

In this episode of Warning Shots, John, Michael, and Liron break down a bizarre AI-era clash: Marc Andreessen vs. the Pope.

What started as a calm, ethical reminder from Pope Leo XIV turned into a viral moment when the billionaire VC mocked the post — then deleted his tweet after widespread backlash. Why does one of the most powerful voices in tech treat even mild calls for moral responsibility as an attack?

🔎 This conversation unpacks the deeper pattern:
* A16Z’s aggressive push for acceleration at any cost
* The culture of thin-skinned tech power and political influence
* Why dismissing risk has become a badge of honor in Silicon Valley
* How survivorship bias fuels delusional confidence around frontier AI
* Why this “Pope incident” is a warning shot for the public about who is shaping the future without their consent

We then pivot to a major capabilities update: MIT’s new SEAL framework, a step toward self-modifying AI. The team explains why this could be an early precursor to recursive self-improvement — the red line that makes existential risk real, not theoretical.

📺 Watch more on The AI Risk Network

🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

📢 Take Action on AI Risk
💚 Donate this Giving Tuesday

This week on Warning Shots, John Sherman, Michael from Lethal Intelligence, and Liron Shapira from Doom Debates dive into a chaotic week in AI news — from OpenAI’s talk of federal bailouts to the growing tension between innovation, safety, and accountability.

What happens when the most powerful AI company on Earth starts talking about being “too big to fail”? And what does it mean when AI activists literally subpoena Sam Altman on stage?

Together, they explore:
* Why OpenAI’s CFO suggested the U.S. government might have to bail out the company if its data center bets collapse
* How Sam Altman’s leadership style, board power struggles, and funding ambitions reveal deeper contradictions in the AI industry
* The shocking moment Altman was subpoenaed mid-interview — and why the Stop AI trial could become a historic test of moral responsibility
* Whether Anthropic’s hiring of prominent safety researchers signals genuine progress or a new form of corporate “safety theater”
* The parallels between raising kids and aligning AI systems — and what happens when both go off script during recording

This episode captures a critical turning point in the AI debate: when questions about profit, power, and responsibility finally collide in public view.

If it’s Sunday, it’s Warning Shots.

📺 Watch more: @TheAIRiskNetwork

🔎 Follow our hosts:
Liron Shapira - @DoomDebates
Michael - @lethal-intelligence

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

In this episode of Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence unpack new safety testing from Palisade Research suggesting that advanced AIs are beginning to resist shutdown — even when told to allow it.

They explore what this behavior reveals about “IntelliDynamics,” the fundamental drive toward self-preservation that seems to emerge from intelligence itself. Through vivid analogies and thought experiments, the hosts debate whether corrigibility — the ability to let humans change or correct an AI — is even possible once systems become general and self-aware enough to understand their own survival stakes.

Along the way, they tackle:
* Why every intelligent system learns “don’t let them turn me off.”
* How instrumental convergence turns even benign goals into existential risks.
* Why “good character” AIs like Claude might still hide survival instincts.
* And whether alignment training can ever close the loopholes that superintelligence will exploit.

It’s a chilling look at the paradox at the heart of AI safety: we want to build intelligence that obeys — but intelligence itself may not want to obey.

🌎 www.guardrailnow.org

👥 Follow our Guests:
🔥 Liron Shapira — @DoomDebates
🔎 Michael — @lethal-intelligence

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

This week on Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence break down the Future of Life Institute’s explosive new “Superintelligence Statement” — a direct call to ban the development of superintelligence until there’s scientific proof and public consent that it can be done safely.

They trace the evolution from the 2023 Center for AI Safety statement (“Mitigating the risk of extinction from AI…”) to today’s far bolder demand: “Don’t build superintelligence until we’re sure it won’t destroy us.”

Together, they unpack:
* Why “ban superintelligence” could become the new rallying cry for AI safety
* How public opinion is shifting toward regulation and restraint
* The fierce backlash from policymakers like Dean Ball — and what it exposes
* Whether statements and signatures can turn into real political change

This episode captures a turning point: the moment when AI safety moves from experts to the people.

If it’s Sunday, it’s Warning Shots.

⚠️ Subscribe to Warning Shots for weekly breakdowns of the world’s most alarming AI confessions — from the people making the future, and possibly ending it.

🌎 www.guardrailnow.org

👥 Follow our Guests:
🔥 Liron Shapira — @DoomDebates
🔎 Michael — @lethal-intelligence

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

This week on Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence dissect a chilling pattern emerging among AI leaders: open admissions that they’re creating something they can’t control.

Anthropic co-founder Jack Clark compares his company’s AI to “a mysterious creature,” admitting he’s deeply afraid yet unable to stop. Elon Musk, meanwhile, shrugs off responsibility — saying he’s “warned the world” and can only make his own version of AI “less woke.”

The hosts unpack the contradictions, incentives, and moral fog surrounding AI development:
* Why safety-conscious researchers still push forward
* Whether “regulatory capture” explains the industry’s safety theater
* How economic power and ego drive the race toward AGI
* Why even insiders joke about “30% extinction risk” like it’s normal

As John says, “Don’t believe us — listen to them. The builders are indicting themselves.”

⚠️ Subscribe to Warning Shots for weekly breakdowns of the world’s most alarming AI confessions — from the people making the future, and possibly ending it.

🌎 guardrailnow.org

👥 Follow our Guests:
💡 Liron Shapira — @DoomDebates
🔎 Michael — @Lethal-Intelligence

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe