The Rip Current with Jacob Ward


Author: Jacob Ward


Description

The Rip Current covers the big, invisible forces carrying us out to sea, from tech to politics to greed to beauty to culture to human weirdness. The currents are strong, but with a little practice we can learn to spot them from the beach, and get across them safely.

Veteran journalist Jacob Ward has covered technology, science, and business for NBC News, CNN, PBS, and Al Jazeera. He has written for The New Yorker, The New York Times Magazine, and Wired, and is the former Editor in Chief of Popular Science magazine.
41 Episodes
According to the Wall Street Journal, Sam Altman sent an internal memo on Monday declaring a company-wide emergency and presumably ruining the holiday wind-down hopes of his faithful employees. OpenAI is hitting pause on advertising plans, delaying AI agents for health and shopping, and shelving a personal assistant called “Pulse.” All hands are being pulled back to one mission: making ChatGPT feel more personal, more intuitive, and more essential to your daily life.

The company says it wants the general quality, intelligence, and flexibility to improve, but I’d argue this is less about making the chatbot smarter and more about making it stickier.

Google’s Gemini has been surging — monthly active users jumped from 450 million in July to 650 million in October. Industry leaders like Salesforce CEO Marc Benioff are calling it the best LLM on the market. OpenAI seems to feel the heat, and also seems to feel it doesn’t have the resources to keep building everything it wants all at once — it has to prioritize. Consider that when Altman was recently asked on a podcast how he plans to get to profitability, he grew exasperated. “Enough,” he said.

But here’s what struck me about the Code Red. While Gemini is supposedly surpassing ChatGPT in industry benchmarks, I don’t think Altman is chasing benchmarks. He’s chasing the “toothbrush rule” — the Google standard for greenlighting new products, which says a product needs to become an essential habit used at least three times a day. The memo specifically emphasizes “personalization features.” They want ChatGPT to feel like it knows you, so that you feel known, and can’t stop coming back to it.

I’ve been talking about AI distortion — the strange way these systems make us feel a genuine connection to what is, ultimately, a statistical pattern generator. That feeling isn’t a bug. It’s becoming the business model.

Facebook did this. Google did this. Now OpenAI is doing it: delaying monetization until the product is so woven into your life that you can’t imagine pulling away. Only then do the ads come.

Meanwhile, we’re living in a world where journalists have to call experts to verify whether a photo of Trump fellating Bill Clinton is real or AI-generated. The image generators keep getting better, the user numbers keep climbing, and the guardrails remain an afterthought.

This is the AI industry in December 2025: a race to become indispensable.
It’s Monday, December 1st. I’m not a turkey guy, and I’m of the opinion that we’ve all made a terrible habit of subjecting ourselves to the one and only time anyone cooks the damn thing each year. So I hope you had an excellent alternative protein in addition to that one. Ours was the Nobu miso-marinated black cod. Unreal.

Okay, after the food comes the A.I. hangover. This week I’m looking at three fronts where the future of technology just lurched in a very particular direction: politics, geopolitics, and the weird church council that is the A.I. conference circuit.

First, the politics. Trump’s leaked executive order to wipe out state A.I. laws seems to have stalled — not because he’s suddenly discovered restraint, but maybe because the polling suggests that killing A.I. regulation is radioactive. Instead, the effort is being shoved into Congress via the National Defense Authorization Act, the “must-pass” budget bill where bad ideas go to hide. Pair that with the Federal Trade Commission getting its teeth kicked in by Meta in court, and you can feel the end of the Biden-era regulatory moment and the start of a very different chapter: a government that treats Big Tech less as something to govern and more as something to protect.

Second, the geopolitics. TSMC’s CEO is now openly talking about expanding chip manufacturing outside Taiwan. That sounds like a business strategy, but it’s really a tectonic shift. For years, America’s commitment to Taiwan has been tied directly to that island’s role as our chip lifeline. If TSMC starts building more of that capacity in Arizona and elsewhere, the risk calculus around a Chinese move on Taiwan changes — and so does the fragility of the supply chain that A.I. sits on top of.

Finally, the quiet councils of the faithful: AWS re:Invent and NeurIPS. Amazon is under pressure to prove that all this spending on compute actually makes money. NeurIPS, meanwhile, is where the people who build the models go to decide what counts as progress: more efficient inference, new architectures, new “alignment” tricks. A single talk or paper at that conference can set the tone for years of insanely expensive work.

So between Trump’s maneuvers, the FTC’s loss, TSMC’s hedging, and the A.I. priesthood gathering in one place, the past week and this one are a pretty good snapshot of who really steers the current we’re all in.
It’s a warning siren: people seeing delusions they never knew they had amplified by AI, a wave of lawsuits alleging emotional manipulation and even suicide coaching, a major company banning minors from talking freely with chatbots for fear of excessive attachment, and a top mental-health safety expert at OpenAI quietly heading for the exit.

For years I’ve argued that AI would distort our thinking the same way GPS distorted our sense of direction. But I didn’t grasp how severe that distortion could get—how quickly it would slide from harmless late-night confiding to full-blown psychosis in some users.

OpenAI’s own data suggests millions of people each week show signs of suicidal ideation, emotional dependence, mania, or delusion inside their chats. Independent investigations and a growing legal record back that up. And all of this is happening while companies roll out “AI therapists” and push the fantasy that synthetic friends might be good for us.

As with most of what I’ve covered over the years, this isn’t a tech story. It’s a psychological one. A biological one. And a story about mixed incentives. A story about ancient circuitry overwhelmed by software, and by the companies who can’t help but market it as sentient. I’m calling it AI Distortion—a spectrum running from mild misunderstanding all the way to dependency, delusion, isolation, and crisis.

It’s becoming clear that we’re not just dealing with a tool that organizes our thoughts. We’re dealing with a system that can warp them, in all of us, every time.
Today I dug into the one corner of the economy that’s supposed to keep its head when everyone else is drunk on hype: the insurance industry. Three of the biggest carriers in the country—AIG, Great American, and W.R. Berkley—are now begging regulators not to force them to cover A.I.-related losses, according to the Financial Times. These are the people who price hurricanes, wildfires, and war zones… and they look at A.I. and say, “No thanks.” That tells you something about where we really are in the cycle.

I also walked through the Trump administration’s latest maneuver, which looks a lot like carrying water for Big Tech in Brussels: trading lower steel tariffs for weaker European tech rules. (The Europeans said “no thank you.”) Meanwhile, we’re still waiting on the rumored executive order that would bulldoze state A.I. laws—the only guardrails we have in this country.

On the infrastructure front, reporting out of Mumbai shows how A.I. demand is forcing cities back toward coal just to keep data centers running. And if that wasn’t dystopian enough, I close with a bleak little nugget from Business Insider advising Gen Z to “focus on tasks, not job titles” in the A.I. economy. Translation: don’t expect a career—expect a series of gigs glued together by hope.

It’s a full Monday’s worth of contradictions: the fragile hype economy, the political favoritism behind it, and the physical reality—pollution, burnout, precarity—that always shows up eventually.
The only laws protecting you from the worst excesses of A.I. might be wiped out — and fast. A leaked Trump executive order would ban states from regulating A.I. at all, rolling over the only meaningful protections any of us currently have. There is no federal A.I. law, no federal data-privacy law, nothing. States like California, Illinois, and Colorado are the only line of defense against discriminatory algorithms, unsafe model deployment, and the use of A.I. as a quasi-therapist for millions of vulnerable people.

This isn’t just bad policy — it’s wildly unpopular. The last time Republicans tried this maneuver, the Senate killed it 99–1. And Americans across the political spectrum overwhelmingly want A.I. regulated, even if it slows the industry down. But the tech sector wants a frictionless, regulation-free environment, and the Trump administration seems eager to give it to them — from crypto dinners and gilded ballrooms to billion-dollar Saudi co-investment plans.

There’s another layer here: state laws also slow down the federal government’s attempt to build a massive surveillance apparatus using private data brokers and companies like Palantir. State privacy protections cut off that flow of data. Removing those laws clears the pipe.

The White House argues this is about national security, China, and “woke A.I.” But legal experts say the order is a misreading of commerce authority and won’t survive in court. And state leaders like California’s Scott Wiener are already preparing to sue. For now, the takeaway is simple: states are the only governments in America protecting you from A.I. — and the administration is trying to take that away.
In today’s episode, I’m following the money, the infrastructure, and the politics. Nvidia just posted another monster quarter and showed that it’s still the caffeine in the US economy. Investors briefly relaxed, even as they warned that an AI bubble is still the top fear in markets. Google jammed Gemini 3 deeper into Search in a bid to regain narrative control. Cloudflare broke down and reminded us that the “smart” future still runs on pretty fragile plumbing. The EU blinked on AI regulation. And here in the U.S., the White House rolled out the red carpet for Saudi Arabia as part of a multibillion-dollar AI infrastructure deal that seems to be shiny enough to have President Trump openly chastising a journalist for asking the Crown Prince about his personal responsibility for the murder of an American journalist.

But the deeper story I’m looking at today is social, not financial. Politicians like Bernie Sanders are beginning to voice the fear that AI won’t just destroy jobs — it might quietly corrode our ability to relate to one another. If you’ve been following me, you know this is more or less all I’m thinking about at the moment. So I looked at the history of this kind of concern, and while we’re generally only concerned with death and financial loss in this country, we do snap awake from time to time when a new technology threatens our social fabric. Roll your eyes if you want to, but we’ve seen this moment before with telegraphs, movies, radio demagogues, television, video games, and social media, and there’s a lot to learn from that history. This episode explores that lineage, what it means for AI, and why regulation might arrive faster than companies expect.
Today’s Deep Cut asks a simple question: Is the AI industry building way more capacity than the world actually needs?

To answer it, I look at three historical warnings:
• Tulsa, Oklahoma, a city built for millions who never came after early oil wealth exploded and then evaporated.
• Britain’s “Railway Mania” of the 1840s, when investors poured money into duplicate train lines that bankrupted entire companies.
• And today’s AI giants, spending trillions on data centers, energy infrastructure, and even floating ideas about putting compute facilities in space.

We’ll talk about why companies like OpenAI, Amazon, Meta, and others believe this infrastructure binge is justified, and where the logic breaks down. I also dig into the Kardashev Scale, the ecological cost of rocket launches, and the mismatch between AI’s lofty energy dreams and the reality of using all that power to generate wedding vows and knock-knock jokes.

History is full of moments when industries overbuilt themselves into crisis. Are we repeating the pattern with AI?

If you enjoy the show, you can subscribe to the newsletter at TheRipCurrent.com.
Today’s “Map” tracks the forces shaping tech, money, and global power on Monday, November 17th.

We start with a rare move: Warren Buffett’s Berkshire Hathaway quietly taking a $4.9B stake in Alphabet — one of the most surprising bets of his career, and a clear signal about where long-term AI value is concentrating.

Meanwhile, Peter Thiel just sold his entire stake in Nvidia (~$100M). For a man who’s made a career out of contrarian timing, this exit raises the question: what does he see (or not see) in AI’s hardware boom?

I also recap a discussion I moderated with consular officials and regulators from across Asia, where the loudest concern wasn’t about safety or innovation — it was about AI’s failure to work in languages other than English. Meta is now pushing its new Omnilingual ASR model, supporting 1,600+ languages, to become a global “voice layer.” Whether it actually works is an open question.

And then there’s Moscow’s big humanoid robot debut — where the machine walked onstage looking drunk, staggered around, and face-planted so hard its panels came off. It’s funny, but it’s also a reality check: the dream of a general-purpose home robot is still nowhere near ready.

Finally, we look ahead: Saudi Crown Prince Mohammed bin Salman is visiting the White House with a massive investment and technology package — including AI access and a civilian nuclear deal — at the exact moment AI energy demand is exploding past U.S. grid capacity.

The throughline: AI money — not AI models — is steering the world right now. A third of U.S. GDP growth last year came from AI infrastructure spending, and this week’s Nvidia earnings call will reveal where the next wave is headed.

If you want more breakdowns like this every weekday, you can subscribe at TheRipCurrent.com.
Are we ready to take on the tech titans? Sacha Haworth thinks maybe—just maybe—we finally are. The head of the Tech Oversight Project joins me this week to talk about the pervasive influence of Big Tech on our lives, and why recognizing a growing allergy to that influence is becoming a centerpiece of political strategy. We discuss the public’s growing concerns over privacy, children’s addiction to technology, and the economic and environmental effects of tech companies’ big AI plans on local communities. Sacha shares insights on political will and the bipartisan potential to regulate and hold Big Tech accountable, and the court cases and regulatory moves she’ll be watching most closely in 2026 and beyond.

00:00 Introduction: The Growing Influence of Tech
00:22 The Rip Current: Exploring Big Tech’s Impact
01:05 Guest Introduction: Sacha Haworth
01:38 Election Insights: Tech’s Role in Political Wins
02:43 Tech and Economic Issues in Elections
03:35 The Rise of Data Centers and Their Impact
06:29 Personal Journey: From Policy School to Tech Oversight
10:41 The Tech Oversight Project: Mission and Goals
11:46 Shaping the Narrative: Tech in Politics
17:22 The Politics of Tech: Power and Influence
22:03 Economic Speculation and the Tech Bubble
28:36 Future Vision: The Impact of AI and Tech
31:22 The Impact of Job Loss and Tax Incentives
32:39 AI’s Influence on Young Minds
34:49 Parental Concerns and Legislative Efforts
40:28 The Dark Side of Chatbots
49:03 Section 230 and Legal Protections
01:00:56 Political Will and Bipartisan Efforts
01:03:43 Conclusion and Call to Action
We can all agree that a free press is a cornerstone of American democracy, and that we want journalism in our lives. But that's different from making it possible to make a living as a journalist, and it's also not enough to protect the power of journalism against the libertarian worldview and AI slop being pushed on us all by the world's biggest companies. How will journalism survive? Jake talks with Michael Bolden, the new Dean of the Berkeley Journalism School, about his personal journey from Mobile, Alabama, to leading one of the country's top journalism schools. They dive deep into the philosophical importance of journalism, the complications brought by AI and media technology, and the crucial role of local news. Bolden emphasizes the necessity of adapting journalism education to future demands, including the incorporation of AI and influencer collaborations, and together they try to sort out how to bring together the best of this new, open world of information and the old world of true expertise and editorial rigor.

00:00 Introduction: The Impact of Personal Background on Journalism
00:29 The State of Journalism Today
01:07 Challenges Facing Modern Journalism
02:27 Introducing Michael Bolden: A Career in Journalism
03:56 Michael Bolden's Early Life and Influences
07:17 The Importance of Representation in Journalism
14:04 Navigating Professional Challenges
19:53 The Future of Journalism Education
27:31 The Evolving Role of Journalists
28:53 The Decline of Traditional Media
33:38 The Rise of Influencers and Independent Journalists
38:32 Political Influence and Media Ownership
47:25 AI and the Future of Journalism
57:12 Innovative Journalism Models
59:20 Conclusion and Final Thoughts
Jesse Damiani, whose newsletter Reality Studies unpacks emerging philosophical questions around technology, had me on his Urgent Futures podcast for an hour-plus conversation about the state of A.I., and where my 2022 book The Loop got it right and got it wrong.
AI is about to create an epidemic of addiction in this country and around the world, according to Zachary Gidwitz, founder of OpenRecovery. Could it also be our best shot at fighting back? In this episode of The Rip Current, I discuss the growing issue of addiction in America and the potential for AI tools to combat it with Gidwitz. Together we get into the rise of various forms of addiction, from fentanyl and gambling to social media and pornography. Gidwitz shares his vision of using AI not to replace human therapists but to guide individuals towards real human connection and effective recovery programs. He stresses the importance of tailoring interventions to individual needs and avoiding one-size-fits-all approaches. The conversation also explores the ethical considerations and challenges in using AI for such sensitive applications, emphasizing the need for transparency, collaboration, and continuous improvement. Addiction is coming, team. Here’s hoping conversations like this can help get us out in front of it.
My recent trip to Brazil happened to coincide with the trial of former president Jair Bolsonaro, and ever since I’ve been looking for the right person to explain how it is that a former military dictatorship is now the kind of democracy that actually brings a former leader to account. In this episode, Cristina Tardaguila, founder of the fact-checking organization Lupa, describes the rise and conviction of former President Jair Bolsonaro, the impact of misinformation, and the growing (and now perhaps unstoppable) influence of China and Russia in Brazil. Cristina shares insights into the creation and evolution of Lupa, the complexities of Brazilian democracy, and the economic and political dynamics shaping the nation’s complicated future.

00:00 US Diplomacy and Brazil’s Geopolitical Landscape
02:17 Introduction to Lupa and Cristina Tardaguila
02:48 The Rise of Fact-Checking in Brazil
05:09 Global Populism and Bolsonaro’s Influence
07:02 The Hate Cabinet and Techno-Populism
10:56 Lupa’s Evolution and Business Model
12:58 COVID-19 and the Fight Against Misinformation
16:46 The Why of Disinformation
27:12 Bolsonaro’s Political Journey and Impact
33:52 The Aftermath of January 8th, 2023
34:40 Reflecting on the Insurrection
41:25 The Trial and Conviction of Bolsonaro
47:56 Brazil’s Political Future
51:49 China’s Influence in Brazil
59:48 Conclusion and Final Thoughts
Dr. Rumman Chowdhury, an AI ethicist and the head of Humane Intelligence, is sick of all this complaining. Not because there isn’t plenty to complain about — in this episode we unpack a host of horrors that AI and the companies who make it are foisting on all of us — but because she believes that the fatalism of AI criticism inadvertently empowers powerful corporations. Dr. Chowdhury, who has worked at Accenture, Twitter, and served as a science envoy for the Biden administration, has an unusual background for an AI builder — political science and quantitative social sciences — and her work on the inherent biases within algorithms has led her to believe that the solutions are far more complicated than just switching the whole thing off. Enjoy!
When I first came on his show, Andrew Keen took a dim view of my ideas about how we might fight back against the psychological effects of AI in my 2022 book The Loop, and to be honest: he was kinda right. The “how to fight back” section of the book was thin, largely because I was hanging (and still hang) so much of my hopes on the idea that the courts will save us. So I asked him if he’d like to revisit our conversation on his show, and he graciously agreed. Here’s how it went!
AI is having a profound effect on love, work, and democracy...but do the people making it understand that? AI ethics consultant Olivia Gambelin has been fighting to make her clients in Silicon Valley understand that good ethics are good business, and to make regulators in Europe see that good business can also have good ethics. It's a tough gig. Gambelin, who advises both AI companies in Silicon Valley and regulators in Brussels, talks techno-solutionism, the challenges of implementing ethical AI practices when there's always that one evangelist in the room trying to go too far too fast, and her worries about what she considers the top truly unacceptable category of AI product out there today.

00:00 Introduction to Techno Solutionism
00:44 Meet Olivia Gambelin: AI Ethics Consultant
01:56 Olivia's Journey: From Bay Area to Brussels
08:19 The Role of Ethical Intelligence
14:38 Challenges in AI Implementation
21:50 The Future of AI in the Workplace
32:51 Introduction to AI Regulation in Brussels
34:05 Cultural Differences in AI Regulation
36:29 Challenges in European AI Literacy
38:32 Behavioral Science and AI Manipulation
44:22 Ethics in AI: Business vs. Regulation
51:17 The Need for AI Regulation in the US
56:00 Ethical Boundaries in AI Applications
01:00:53 Positive Applications of AI
01:05:18 Conclusion and Final Thoughts
A new California law, SB 53, was just signed by Governor Gavin Newsom today, Monday September 29th. It has the AI makers undoubtedly throwing their mushroom coffee across the room this afternoon, because now they’re hemmed in by the EU’s AI Act on one side, and the Golden State’s new law on the other. Here’s why it matters!
TikTok is no different from any other social media company (it wants to serve you irresistible content by predicting your tastes algorithmically), and its processes aren't either (it threw spaghetti and money against the wall until something stuck). But its status as a Chinese company, the first globally successful Chinese media export, and a deeply powerful geopolitical tool means it's the center of a battle over the future of the Internet. In her new book Every Screen on the Planet: The War Over TikTok, Emily Baker-White describes her years covering the company. (She did it so well that executives there even attempted to spy on her phone to find out who she was speaking to inside TikTok.) She tells the story of the most effective attention-grabbing algorithm ever devised — its strange beginnings, and the “messy” people who operate the levers behind its curtain — and explains the legal limbo in which it and its billions of users are now caught.
AI is making people crazy. And I don’t mean in the sense that it’s driving tech observers like me crazy, with its reckless adoption path and dishonest marketing and screwy incentives. I mean it’s literally making otherwise reasonable people believe that their AI chatbots are lovers, or prisoners, or prophets of hidden wisdom. In this hourlong conversation with tech educator Morten Rand-Hendricksen (his TikTok and his YouTube are worth a follow), we try to surround the topic from two sides. Mine is the psychological and ideological side, which I used to frame the thesis of my book The Loop: How A.I. is Creating a World without Choices and How to Fight Back. Morten’s is the bits and bytes that have been assembled to give the impression that this language-mirroring system has somehow developed reasoning, and a personality, and special insight into who you are. He does a tremendous job here explaining why this simulation of companionship is so thin, and so powerful, and so dangerous. And he has some real tactics to share for all of us in fighting back.
Are you ready to get in a tube built by billionaires and stay there for nine months? Ready to live in caves on the other side? And who is in charge of this dreary outpost anyway? In this episode of The Rip Current, David Ariosto, author of Open Space: From Earth to Eternity, the Global Race to Explore and Conquer the Cosmos and host of the Space Minds podcast, joins Jake to explore the motivations behind private and state-funded space travel, the potential for settlements on the Moon and Mars, and the ethical implications of billionaire-led space enterprises. We talk about the fragility of the human body in space, the viability of actually governing space settlements, and the technological advancements driving the space industry. Who really holds the power in the final frontier?

00:00 Introduction to Space Exploration
00:30 The Rip Current: Big Invisible Forces
00:32 The Leap of Faith in Space
01:20 Guest Introduction: David Ariosto
03:22 The Commercialization of Space
05:57 Technological Advances and Market Potential
13:41 Geopolitical Implications of Space Exploration
21:50 Human vs. Robotic Space Missions
34:24 Facing Death: The Mindset of Test Pilots
35:52 Astronaut's Dream: Franklin Chang Diaz's Story
37:56 Psychological Impact of Space Travel
39:55 Billionaires in Space: Elon Musk and Beyond
41:48 NASA's Changing Role in Space Exploration
51:17 The Future of Space Colonization
59:44 The Vastness of Space and Human Survival
01:05:40 Conclusion: The Drive to Explore the Cosmos