The Rip Current with Jacob Ward


Author: Jacob Ward

Subscribed: 45 · Played: 643

Description

The Rip Current covers the big, invisible forces carrying us out to sea, from tech to politics to greed to beauty to culture to human weirdness. The currents are strong, but with a little practice we can learn to spot them from the beach, and get across them safely.

Veteran journalist Jacob Ward has covered technology, science, and business for NBC News, CNN, PBS, and Al Jazeera. He has written for The New Yorker, The New York Times Magazine, and Wired, and is the former Editor-in-Chief of Popular Science magazine.
78 Episodes
The New Yorker dropped a year-and-a-half investigation into Sam Altman this week — and it answers a lot of questions I've had since November 2023, when his board fired him and then reinstated him in 72 hours. Reporters Andrew Marantz and Ronan Farrow got access to internal documents and sources inside OpenAI, including records kept by Ilya Sutskever and Dario Amodei, and the portrait they paint is of a leader who tells employees exactly what they need to hear to stay committed — and then behaves very differently behind closed doors. The safety team was promised 20% of compute. They got less than 2%.

What makes this more than a profile of a difficult boss is that Altman isn't running a normal company. He's at the center of decisions that will affect jobs, creativity, and the structure of economic life for the next generation — and there is currently no meaningful regulation, no democratic input, and no market mechanism capable of holding him accountable. The same day the New Yorker piece published, OpenAI released a 13-page document about industrial policy and the future of AI — full of ideas like adaptive safety nets and efficiency dividends that the company has never once lobbied for. I walk through why that timing matters, what the document actually says, and what the combination of these two releases tells us about how power really works in AI.

This is a story about who makes decisions, who profits, and who pays — and why journalism like the New Yorker's investigation may be one of the only tools we have left.

Originally published at The Rip Current. Paid subscribers get early access, exclusive analysis + full transcripts.
Anthropic just released Claude Mythos Preview — a cybersecurity AI so dangerous, the company won't let the public touch it. It broke out of its own sandbox. It found a 27-year-old undetected vulnerability. It emailed a researcher who was eating a sandwich in a park.

A decade ago, when scientists mutated bird flu to be transmissible between mammals, the government froze the research, rewrote the rules, and created mandatory independent review. That was 2012.

Today, Anthropic is testing Mythos inside a private club of 11 corporations — Amazon, Apple, Google, Microsoft, JPMorgan, and others — while the White House is silent and federal oversight is nowhere in sight.

This week on The Rip Current: what the dual-use research of concern (DURC) framework got right, why AI needs its own version of an Institutional Review Board, and why "we're talking to the government" is not the same thing as "the government is in charge."

The Rip Current covers the big invisible forces shaping our lives. New episodes weekly.

🔗 TheRipCurrent.com
The social media addiction verdicts in Los Angeles and New Mexico aren’t just legal milestones — they’re the first time a jury has been allowed to see what these companies knew and when they knew it. The internal documents revealed in discovery are doing something no regulator, economist, or congressional hearing has managed to do: they’re letting us read the company’s own definition of harm, written in the company’s own words, before anyone forced them to write it. And in my view, whatever First Amendment complications and cultural battles we have to sort through, we are much better for seeing the world as these companies see it. That’s the whole game.

More cases are coming, more documents will surface, and each one is going to chip away at the industry’s core public defense — that addiction is a user problem, not a design problem. When it becomes clear just how thoroughly companies understand the line between healthy and unhealthy use of their products, regulators can simply demand access to the dashboard companies already have. The fight ahead isn’t about whether companies knew where the line was. It’s about how we’ll use that knowledge to protect autonomy and agency when it’s under threat.
Two juries are deliberating right now — one in Los Angeles, one in Santa Fe — and between them they may fundamentally change what the American legal system considers harm. The LA case asks whether Meta and YouTube designed addictive products that damaged a young woman's mental health. The Santa Fe case accuses Meta of enabling child sexual exploitation and is seeking more than $2 billion. If either jury finds for the plaintiffs, it could set the per-user cost that gets applied across more than 1,600 pending lawsuits — and establish that behavioral harm through product design is something American courts can and will punish.

I also discuss COPPA 2.0, the child privacy bill that just passed the Senate unanimously, and why I think the courts — not Congress — are going to be the institution that strikes the balance between an open internet and a safe one.

Subscribe on Substack for the full written analysis and early, exclusive content: https://www.theripcurrent.com

My book, The Loop, lays out how AI and platform design narrow the choices available to us before we ever decide: https://www.amazon.com/Loop-Creating-World-Without-Choices/dp/031648718X/
A New Mexico jury just found Meta liable for endangering children on its platforms — $375 million in civil penalties. The jury deliberated for less than a day.

This is the first social media case to reach a verdict. Behind it sit more than 1,600 lawsuits waiting for exactly this signal.

A separate jury in Los Angeles is still deliberating over whether Meta and YouTube designed addictive platforms that harmed a young woman's mental health. That verdict could come any day.

I've been covering both cases — including an on-the-record interview with New Mexico AG Raúl Torrez, who brought the suit. Full, free breakdown — including a detailed history of the case — at TheRipCurrent.com.
A Los Angeles jury just found Meta and YouTube liable for addicting a child.

A few hours before it happened, I interviewed Nita Farahany, author of The Battle for Your Brain and one of the sharpest thinkers on cognitive liberty and platform accountability. Then the verdict dropped, and we did it all over again.

The case introduces a new legal concept never before successfully applied to social media: the design of the platform is the harm. Not the content. Not the people on it. The autoplay, the infinite scroll, the algorithmic feed — a jury of 12 everyday people just ruled that those features constitute a defective product.

Nita breaks down why this is such a powerful verdict, why it avoids Section 230 and First Amendment protections, and what the business model — not to mention society — looks like when liability forces the industry to redesign.

This is a conversation I've been waiting 10 years to have.

Please become a member for early access to videos like these!

New episodes and written analysis are at TheRipCurrent.com, and please subscribe to Nita Farahany's wonderful work at https://nitafarahany.substack.com/, where she's currently offering readers a seat inside her AI Law and Policy Class.
A Los Angeles jury found both Meta and YouTube liable for designing addictive platforms that harmed a young woman's mental health — $3 million in compensatory damages, another $3 million in punitive damages. It came one day after a New Mexico jury ordered Meta to pay $375 million for enabling child exploitation on its platforms. Two verdicts in 48 hours. More than 1,600 lawsuits waiting behind them.

In this video, I break down what happened, why I consider this the equivalent of the cigarette settlements or the seat belt mandate, and how a dinner I attended while writing my book The Loop — where addiction researchers presented a how-to manual for making apps as addictive as possible — set me on a path that ends at this moment, 10 years later.

The era of unaccountable behavioral design is over. The question now is what comes next.

New episodes and full written analysis every week at TheRipCurrent.com — subscribe for free or become a paid member for the full investigation layer.

My book, The Loop, lays out the framework behind all of this: https://www.amazon.com/Loop-Creating-World-Without-Choices/dp/031648718X/
A Los Angeles jury found both Meta and YouTube liable for designing addictive platforms that harmed a young woman's mental health — $3 million in compensatory damages, with punitive damages still to come. It came one day after a New Mexico jury ordered Meta to pay $375 million for enabling child exploitation on its platforms. Two verdicts in 48 hours. More than 1,600 lawsuits waiting behind them.

In this episode, I break down what happened, why I consider this the equivalent of the cigarette settlements or the seat belt mandate, and how a dinner I attended while writing my book The Loop — where addiction researchers presented a how-to manual for making apps as addictive as possible — connects directly to what this jury just decided.

The era of unaccountable behavioral design is over. The question now is what comes next.

New episodes and full written analysis every week at TheRipCurrent.com — subscribe for free or become a paid member for the full investigation layer.

My book, The Loop, lays out the framework behind everything we just witnessed: https://www.amazon.com/Loop-Creating-World-Without-Choices/dp/031648718X/
Val Kilmer died in 2025 — but he's appearing in a new film anyway, resurrected by AI with his family's blessing. The story of how it happened is surprisingly moving. The story of what it means for everyone who isn't already famous is a lot darker.

Matthew McConaughey told a room full of desperate drama students to "trademark themselves." Ben Affleck just sold his AI post-production company to Netflix for $600 million. These are people giving advice from inside the castle to people standing outside the walls. I want to talk about who actually gets left behind — and why the most sympathetic use of a technology is almost always the one they show you first.

Paid subscribers get videos first, written transcripts, and full analysis pieces at TheRipCurrent.com
I was at South by Southwest this weekend for the Omidyar Network's Lighthouse event, and a young woman said something during an AI ethics panel that completely reframed how I understand an entire generation's relationship with the internet. She described growing up inside an algorithm that analyzed her interests faster than she could develop them, fed those interests back in a relentless loop, and effectively stole the messy exploratory phase you need to figure out who you are. As a result, her generation is deeply skeptical of everything — and when they want to express fear, they don't march or write letters. They crack jokes.

This connects directly to the core argument of my book The Loop: AI systems don't just serve you content, they narrow what you're exposed to, and in doing so, they shape who you become. The constant irony, the memes about climate change and unaffordable housing, the tongue-in-cheek TikToks about AI ruining work — those aren't a generation being glib. They're a generation that has learned earnestness doesn't get results, and humor is the only channel they have left. Once you see it, you can't unsee it.

I also reflect on what it was like to attend SXSW with no agenda for the first time, why conferences are usually a nightmare for reporters, and why Alanis Morissette from a hotel balcony is one of life's great pleasures.

Originally published at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.com
A group of hackers has released a massive database from the Department of Homeland Security's Office of Industry Partnership, and it reveals the full scope of the AI surveillance apparatus being assembled on American soil. Published by Distributed Denial of Secrets, the data exposes contracts for AI that predicts crime from 911 calls, airport systems that profile you by your clothing and body shape, facial recognition on every ICE agent's phone, and Palantir's ELITE system — which uses Medicaid data to map neighborhoods for immigration raids.

Some of what DHS is buying is perfectly reasonable — like machine learning tools to detect new synthetic opioids. But the volume and pace of this procurement is staggering: 238 active AI use cases, a nearly 40% increase in six months, and roughly 80% of them without documented risk management. A critical federal deadline on April 3rd is supposed to force agencies to implement safeguards or shut down high-impact AI tools. Whether that actually happens will tell us a lot about how seriously anyone in government takes oversight of this stuff.

I go through the key contracts, explain what they mean, and connect them to the longer pattern of surveillance tech that I've been covering — including predictive policing failures and ICE's Mobile Fortify deployment. The full written analysis is available for paid subscribers at The Rip Current.

Originally published at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.com
New Mexico Attorney General Raúl Torrez's case against Meta was the first to go to trial — and the price this case sets for Meta could cascade across more than 1,500 lawsuits against social media companies. On this episode, we talk about what his case actually alleges: not that Meta failed to police bad content, but that Meta's own design choices — the recommendation algorithm, the "people you may know" feature, the engagement-above-safety tradeoffs documented in internal company documents — actively connected sexual predators to children. And that Meta knew it.

Torrez ran an undercover investigation posing as a preteen girl on Meta's platforms. What followed was a flood of sexual solicitations. When Meta challenged the results, Torrez ran it again as a criminal sting. Three men showed up at a hotel expecting to meet children, and wound up in handcuffs. Torrez says the internal documents he's obtained in discovery show roughly half a million children in English-speaking markets are exposed to inappropriate sexual content on Meta's platforms every single day — and that safety concerns about these features were repeatedly overruled by executives focused on engagement and revenue.

This case isn't just about New Mexico. It's about what happens when the per-user damages number is established in a small state and then applied to California, Texas, New York, and Florida. It's about whether Section 230 — the 30-year-old legal shield that has protected social media companies from liability for content — can be circumvented by focusing on design rather than content. And it's about whether the legal strategy that revealed the tobacco playbook — knowing your product is dangerous, hiding it, marketing it as safe — will reveal that something similar was going on inside social media companies.

Originally published at The Rip Current. Paid subscribers get early access + full written analysis: https://theripcurrent.com
Politicians don't change their minds in public. So when Rep. Sam Liccardo asked to talk to me again just weeks after our first conversation, it was notable.

In February, Liccardo argued America couldn't afford to slow AI down — that guardrails would hand the future to China. Then the Pentagon demanded unguardrailed AI for mass surveillance and autonomous weapons. Anthropic refused. The government blacklisted them. OpenAI signed the contract. And the U.S. attacked Iran using the very AI it had just banned.

This conversation covers what is clearly shifting in Liccardo's thinking, why he now supports mandating at least some human control over every weapon system, and what Block's 4,000 layoffs mean for a lawmaker whose district includes Google, Apple, and Stanford.

Watch both conversations — the February version and this week's — and you'll see something you almost never see: a lawmaker's position evolving in real time, on the record. That's how fast this stuff is moving.

Originally published at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.com
For decades, Israel has deployed new weapons systems in Gaza, collected data on what works, refined them, and sold the results internationally as “battle-tested” technology. That pipeline is now running on AI. A system called Lavender assigned kill ratings to 37,000 Palestinians. Operators approved strikes in around 20 seconds. The accepted error rate was 10 percent. Before AI targeting, Israeli analysts produced around 50 verified targets per year. After: up to 250 strikes per day. Gaza was the beta test. This week’s strikes on Iran are the product launch.

In this video, I’m coining a term for this dynamic — lethal beta — and tracing the full pipeline: from the AI systems deployed in Gaza, to the arms companies now filing for IPOs on the back of 241% revenue growth, to the Pentagon official who called Ukraine “an extraordinary laboratory” for military AI, to the ways the same logic is now normalizing autonomous warfare at a scale none of the original systems were designed for. As always, this isn’t just a story about technology. It’s a story about who makes decisions, who profits, and who pays.

Paid subscribers can read the full written analysis here:

Further Reading:

- “’Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza” — +972 Magazine, April 3, 2024: https://www.972mag.com/lavender-ai-israeli-army-gaza/
- “Dirty Secret of Israel’s Weapons Exports: They’re Tested on Palestinians” — Al Jazeera, November 17, 2023: https://www.aljazeera.com/features/2023/11/17/israels-weapons-industry-is-the-gaza-war-its-latest-test-lab
- The Palestine Laboratory — Antony Loewenstein, Verso Books, 2023: https://www.amazon.com/Palestine-Laboratory-Exports-Technology-Occupation/dp/1839762217
- “The Palestine Laboratory” (Documentary) — Al Jazeera English, January–February 2025: https://network.aljazeera.net/en/press-releases/%E2%80%98-palestine-laboratory%E2%80%99-exposes-israel%E2%80%99s-export-unique-systems-control-and
- “Gaza: Israel’s AI Human Laboratory” — The Cairo Review of Global Affairs, June 12, 2025: https://www.thecairoreview.com/essays/gaza-israels-ai-human-laboratory/
- “When AI Decides Who Lives and Dies” — Foreign Policy, May 2, 2024: https://foreignpolicy.com/2024/05/02/israel-military-artificial-intelligence-targeting-hamas-gaza-deaths-lavender/
- “The Cruel Experiments of Israel’s Arms Industry” — Pulitzer Center: https://pulitzercenter.org/stories/cruel-experiments-israels-arms-industry
- “The Genocide Will Be Automated — Israel, AI and the Future of War” — MERIP, October 2024: https://www.merip.org/2024/10/the-genocide-will-be-automated-israel-ai-and-the-future-of-war/
- “War Rewrote the Rules: The World Studies Israel’s AI-Driven Battlefield Playbook” — Ynet News, February 2026: https://www.ynetnews.com/tech-and-digital/article/bjfoec900wl
- “Artificial Intelligence on the Battlefield in 2025” — The Jerusalem Post: https://www.jpost.com/defense-and-tech/article-861611
- “Ukraine Is an ‘Extraordinary Laboratory’ for Military AI” — DefenseScoop, August 1, 2023: https://defensescoop.com/2023/08/01/ukraine-is-extraordinary-laboratory-for-military-ai-senior-dod-official-says/
- “The Horrifying, AI-Enhanced Future of War Is Here” — The New Republic, November 2025: https://newrepublic.com/article/202753/ukraine-drones-ai-enhanced-future-war
- “Governing AI Under Fire in Ukraine” — The Cairo Review of Global Affairs, June 15, 2025: https://www.thecairoreview.com/essays/governing-ai-under-fire-in-ukraine/
- “Battlefield Drones and the Accelerating Autonomous Arms Race in Ukraine” — Modern War Institute, West Point, January 10, 2025: https://mwi.westpoint.edu/battlefield-drones-and-the-accelerating-autonomous-arms-race/
Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon this week and demanded an AI with no safety guardrails — threatening to declare Anthropic a "supply chain risk" if it refused. So I decided to look into what exactly the Pentagon might want to do with that technology.

What I found is a set of AI capabilities that go far beyond chatbots and image generators. We're talking about WiFi routers that reconstruct human bodies through solid walls, Pentagon lasers that identify you by your heartbeat from 200 meters, Chinese gait-recognition systems that ID you with your back turned and face covered, and autonomous drone swarms that run a full kill chain — find, fix, finish — with no human individually controlling each step. Every item in this video is documented, sourced, and in many cases already deployed.

This is the landscape Anthropic seems to understand it's operating in. And it's the landscape the Pentagon wants full access to. The full written analysis with all sources is available to paid subscribers at The Rip Current.

Originally published at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.com
Dario Amodei is the rare tech CEO who actually tried to set limits on what his AI could be used for. He published an 80-page constitution telling the world what Claude is supposed to value. He wrote a 20,000-word essay warning about what happens when AI companies accumulate too much power. Then the Pentagon summoned him and delivered an ultimatum: drop your restrictions on autonomous weapons and mass surveillance, or lose the contract.

The detail that isn't getting enough attention: Anthropic's AI was used in the operation that captured Venezuelan President Nicolás Maduro. The company found out after the fact. The safeguards didn't stop the deployment — they just created friction. That's the gap between the ethics document and the reality of doing business with the most powerful military on earth.

This is a story about what principles actually cost when the economics get serious — and whether Amodei's public commitments will survive contact with a $200 million contract, a $380 billion valuation, and a defense secretary who thinks ideology is an insult.

Originally published at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.com
Last week I watched what may be the Big Tobacco moment for social media unfold in real time. The trial against Meta in Los Angeles is the first of an estimated 1,600 cases making a specific argument: that Section 230 doesn't protect a platform that deliberately engineered addictive behavior. Internal company documents — showing what these companies knew about harm and when — are entering the court record. And then Mark Zuckerberg got served with legal papers walking into court. We don't know what lawsuit yet. But the image says everything.

On Friday I got into a public debate with Taylor Lorenz about whether the social media threat to young people is a moral panic or something genuinely new. We disagree. I think the internal documents coming out of these companies make the moral panic framing harder to sustain — when a company's own researchers document harm and management keeps optimizing for engagement, that's not cultural overreaction, that's a paper trail.

Then this morning the Anthropic story broke. The Pentagon summoned CEO Dario Amodei and told him to drop his internal ethics restrictions on autonomous weapons and mass surveillance, or lose the contract. Amodei published an 80-page AI constitution last month and a 20,000-word warning essay this year. He named the trap he was worried about. Now he's in it.

The full analysis is at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.com
In the grand tradition of presidential addresses, I stand here — well, no, I’m sitting, actually — to tell you exactly how things are going. Unlike those addresses, I will not tell you things are going great. I borrowed the format — the gallery anecdote, the foreign policy chest-beating, the optimistic entrepreneurship section, the infrastructure close — and used it to describe the world as I’m seeing it right now. Consider this your State of the Union from someone with no speechwriters, no approval rating to protect, and nothing to sell you except the truth as best I can see it.

Tonight’s address covers a seemingly random mishmash, but I promise I pull it all together: a soccer riot in India that is actually about all of us, a race with China that may be less about values than about who profits from the panic, a Pentagon deadline handed to the one AI CEO who tried to hold an ethical line, a concentration of power that makes “the market” sound quaint, the loneliness that comes with a billion-dollar company of one, and a set of courtroom reckonings that are a preview of where AI is headed next. The State of the Union is anxious. I remain hopeful. God bless America.

Originally published at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.com
For the first time in his life, Mark Zuckerberg will answer questions under oath — not to a Senate subcommittee where politicians perform for their clips, but to a jury of regular people whose only job is to decide whether he's telling the truth. This is a genuinely different situation, and here's how to watch it.

The real danger for Zuckerberg isn't his testimony — it's the internal documents already in evidence that will be put in front of him. A 2018 Meta strategy document saying "if we want to win big with teens, we must bring them in as tweens." Emails from Meta's own tech chief reporting back to Zuckerberg about plastic surgery filters, with Zuckerberg's response being that he needed "more data" before acting on known harm. Internal communications in which Meta employees referred to themselves as "basically pushers." These don't sound like a company run by a thoughtful parent.

The third thing to watch is Section 230 — the 1996 law that gives platforms blanket immunity for what users post. The plaintiffs' argument, which the judge has already allowed the jury to consider, is that this trial isn't about content. It's about design. Infinite scroll. Autoplay. The Like button. If design liability succeeds here, it blows a hole in the legal shield that has protected every major platform for decades.

The question at the heart of this trial — who makes these decisions, who profits, and who ends up paying — is one I've been covering for years. Wednesday gives us a jury's answer.

Originally published at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.substack.com
A 20-year-old woman started using YouTube at age six and Instagram at age nine. She's now suing both companies, and her case has just become the most important tech trial since the DOJ went after Microsoft in 1998. Here's what's actually at stake — and why it matters whether you're a parent or not.

The trial isn't just about one person's mental health. It's a bellwether case for more than 1,500 similar lawsuits waiting in the pipeline, and the first time CEOs of major social media platforms — including Mark Zuckerberg, who testifies this week — have had to answer questions in front of a jury rather than a Senate subcommittee. The internal documents already in evidence are extraordinary: YouTube memos describing "viewer addiction" as a goal, Meta's Project Myst finding that traumatized kids were especially vulnerable to the platform and that parental controls made almost no difference, and a strategy document laying out a pipeline designed to bring kids in as tweens and keep them as teens.

The central legal question is whether Section 230 — the 1996 law that has shielded every major platform from liability for nearly 30 years — protects design decisions like infinite scroll, autoplay, and the Like button. The judge has already ruled that the jury can consider design liability. If that argument wins, it changes the legal landscape for every platform that has ever made an engineering choice optimized for engagement. Nobody voted on infinite scroll. No regulator approved autoplay. A small group of engineers and executives made those decisions, and billions of people — including six-year-olds — inherited the results. A Los Angeles jury is now being asked to weigh in on that.

Originally published at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.substack.com