The Rip Current with Jacob Ward

Author: Jacob Ward


Description

The Rip Current covers the big, invisible forces carrying us out to sea, from tech to politics to greed to beauty to culture to human weirdness. The currents are strong, but with a little practice we can learn to spot them from the beach, and get across them safely.

Veteran journalist Jacob Ward has covered technology, science, and business for NBC News, CNN, PBS, and Al Jazeera. He has written for The New Yorker, The New York Times Magazine, and Wired, and is the former Editor in Chief of Popular Science magazine.
63 Episodes
Dario Amodei is the rare tech CEO who actually tried to set limits on what his AI could be used for. He published an 80-page constitution telling the world what Claude is supposed to value. He wrote a 20,000-word essay warning about what happens when AI companies accumulate too much power. Then the Pentagon summoned him and delivered an ultimatum: drop your restrictions on autonomous weapons and mass surveillance, or lose the contract.

The detail that isn't getting enough attention: Anthropic's AI was used in the operation that captured Venezuelan President Nicolás Maduro. The company found out after the fact. The safeguards didn't stop the deployment — they just created friction. That's the gap between the ethics document and the reality of doing business with the most powerful military on earth.

This is a story about what principles actually cost when the economics get serious — and whether Amodei's public commitments will survive contact with a $200 million contract, a $380 billion valuation, and a defense secretary who thinks ideology is an insult.

Originally published at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.com
Last week I watched what may be the Big Tobacco moment for social media unfold in real time. The trial against Meta in Los Angeles is the first of an estimated 1,600 cases making a specific argument: that Section 230 doesn't protect a platform that deliberately engineered addictive behavior. Internal company documents — showing what these companies knew about harm and when — are entering the court record. And then Mark Zuckerberg got served with legal papers walking into court. We don't know what lawsuit yet. But the image says everything.

On Friday I got into a public debate with Taylor Lorenz about whether the social media threat to young people is a moral panic or something genuinely new. We disagree. I think the internal documents coming out of these companies make the moral panic framing harder to sustain — when a company's own researchers document harm and management keeps optimizing for engagement, that's not cultural overreaction, that's a paper trail.

Then this morning the Anthropic story broke. The Pentagon summoned CEO Dario Amodei and told him to drop his internal ethics restrictions on autonomous weapons and mass surveillance, or lose the contract. Amodei published an 80-page AI constitution last month and a 20,000-word warning essay this year. He named the trap he was worried about. Now he's in it.

The full analysis is at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.com
In the grand tradition of presidential addresses, I stand here — well, no, I’m sitting, actually — to tell you exactly how things are going. Unlike those addresses, I do not tell you things are going great. I borrowed the format — the gallery anecdote, the foreign policy chest-beating, the optimistic entrepreneurship section, the infrastructure close — and used it to describe the world as I’m seeing it right now. Consider this your State of the Union from someone with no speechwriters, no approval rating to protect, and nothing to sell you except the truth as best I can see it.

Tonight’s address covers a seemingly random mishmash, but I promise I pull it all together: a soccer riot in India that is actually about all of us, a race with China that may be less about values than about who profits from the panic, a Pentagon deadline handed to the one AI CEO who tried to hold an ethical line, a concentration of power that makes “the market” sound quaint, the loneliness that comes with a billion-dollar company of one, and a set of courtroom reckonings that are a preview of where AI is headed next. The State of the Union is anxious. I remain hopeful. God bless America.

Originally published at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.com
For the first time in his life, Mark Zuckerberg will answer questions under oath — not to a Senate subcommittee where politicians perform for their clips, but to a jury of regular people whose only job is to decide whether he's telling the truth. This is a genuinely different situation, and here's how to watch it.

The real danger for Zuckerberg isn't his testimony — it's the internal documents already in evidence that will be put in front of him. A 2018 Meta strategy document saying "if we want to win big with teens, we must bring them in as tweens." Emails from Meta's own tech chief reporting back to Zuckerberg about plastic surgery filters, with Zuckerberg's response being that he needed "more data" before acting on known harm. Internal communications in which Meta employees referred to themselves as "basically pushers." These don't sound like a company run by a thoughtful parent.

The third thing to watch is Section 230 — the 1996 law that gives platforms blanket immunity for what users post. The plaintiffs' argument, which the judge has already allowed the jury to consider, is that this trial isn't about content. It's about design. Infinite scroll. Autoplay. The Like button. If design liability succeeds here, it blows a hole in the legal shield that has protected every major platform for decades.

The question at the heart of this trial — who makes these decisions, who profits, and who ends up paying — is one I've been covering for years. Wednesday gives us a jury's answer.

Originally published at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.substack.com
A 20-year-old woman started using YouTube at age six and Instagram at age nine. She's now suing both companies, and her case has just become the most important tech trial since the DOJ went after Microsoft in 1998. Here's what's actually at stake — and why it matters whether you're a parent or not.

The trial isn't just about one person's mental health. It's a bellwether case for more than 1,500 similar lawsuits waiting in the pipeline, and the first time CEOs of major social media platforms — including Mark Zuckerberg, who testifies this week — have had to answer questions in front of a jury rather than a Senate subcommittee. The internal documents already in evidence are extraordinary: YouTube memos describing "viewer addiction" as a goal, Meta's Project Myst finding that traumatized kids were especially vulnerable to the platform and that parental controls made almost no difference, and a strategy document laying out a pipeline designed to bring kids in as tweens and keep them as teens.

The central legal question is whether Section 230 — the 1996 law that has shielded every major platform from liability for nearly 30 years — protects design decisions like infinite scroll, autoplay, and the Like button. The judge has already ruled that the jury can consider design liability. If that argument wins, it changes the legal landscape for every platform that has ever made an engineering choice optimized for engagement. Nobody voted on infinite scroll. No regulator approved autoplay. A small group of engineers and executives made those decisions, and billions of people — including six-year-olds — inherited the results. A Los Angeles jury is now being asked to weigh in on that.

Originally published at The Rip Current. Paid subscribers get early access + full transcripts: https://theripcurrent.substack.com
Opening arguments began this morning in two trials that could change social media forever—and they're not about what kids see online. They're about who decided to make these platforms addictive in the first place.

Today in Los Angeles Superior Court, Meta (Instagram/Facebook) and YouTube are defending against claims that their platforms' design features—infinite scroll, auto-play, algorithmic recommendations—deliberately addict children and cause mental health harm. A 19-year-old plaintiff says these features caused her anxiety, body dysmorphia, and suicidal thoughts starting at age 10.

Simultaneously, New Mexico's attorney general is suing Meta for failing to protect children from sexual exploitation.

THE STAKES:
Bellwether trials affecting 1,500+ similar lawsuits
Hundreds of school district claims
Cases from 40+ state attorneys general
Meta warned damages could reach "high tens of billions of dollars"
Mark Zuckerberg, Adam Mosseri (Instagram), and Neal Mohan (YouTube) expected to testify

THE KEY DIFFERENCE:
These lawsuits sidestep Section 230 (which protects platforms from liability for user content) by attacking the design of the platforms themselves—not the content posted on them. They argue companies used "behavioral and neurobiological techniques borrowed from slot machines and exploited by the cigarette industry."

WHY THIS MATTERS:
This isn't about censoring social media or protecting kids from "bad content." It's about whether companies can knowingly build products designed to addict children, even when internal research shows the harm being caused.

FROM MY ANALYSIS:
How adolescent brain development makes kids vulnerable to designed addiction (10-12 years old: all gas pedal, no brakes)
What internal documents already show companies knew
Why Section 230 won't protect them this time
The shift from "physical harm" to "behavioral harm" in American regulation
What happens if plaintiffs win

This could be social media's Big Tobacco moment. 
Watch to understand what's really happening in that courtroom today.

SOURCES:
NPR: https://www.npr.org/2026/01/27/nx-s1-5684196/social-media-kids-addiction-mental-health-trial
CNBC: https://www.cnbc.com/2026/02/09/meta-big-week-in-court-opening-arguments-in-new-mexico-la-trials.html
ABC News: https://abcnews.go.com/Technology/wireStory/arguments-begin-landmark-social-media-addiction-trial-set-129983976

Read the full analysis at TheRipCurrent.com

#SocialMedia #Meta #YouTube #BigTech #Regulation
Ever wonder why ICE agents cover their faces during raids? They know exactly what surveillance technology can do when your face is captured in public. And they should know—they’re operating the most sophisticated surveillance apparatus ever deployed on American soil.

With a $75 billion budget from last year’s reconciliation process, ICE has gone on a shopping spree that would make China’s “Safe Cities” program jealous. Iris scanners from BI2 Technologies. Facial recognition from Clearview AI. License-plate tracking systems from Thomson Reuters that can establish your daily travel patterns. Cell phone location tracking purchased from commercial data brokers. A $30 million enforcement platform from Palantir that draws on everything from Medicaid records to IRS data.

The technology doesn’t stop at identifying immigrants. Body cam footage shows agents using ChatGPT to write reports. “Stingray” devices impersonate cell towers to grab a protester’s unique phone identifier—often without warrants. And ICE, like other agencies, sidesteps the Supreme Court’s Carpenter decision by simply buying from commercial data brokers what they can’t legally obtain with a warrant.

And here’s the kicker: DHS is now using at least 200 AI systems—a 37% increase since July 2025—with virtually no oversight, because agencies self-report whether AI is their “primary” decision-making tool.

Watch the full breakdown to understand what this means for everyone’s civil liberties, both Americans and those hoping to be.
It’s a dark time.

We have an unaccountable federal police force killing Americans in the street. Heather Cox Richardson, the foremost historian of the American political moment, ended her show in tears. The American experiment feels more experimental than ever.

So here I want to step back and think about something much, much larger than us. Not to minimize our problems, but because understanding how impossibly small we are might help us stop fucking around and take care of one another.

In 1964, a Soviet astronomer named Nikolai Kardashev detected a regular signal from deep space. To his ears, it had to be aliens — some mechanical device creating this extremely repetitive, measurably consistent pulse. It turned out to be a pulsar, a naturally occurring phenomenon. He was disappointed. But the experience obsessed him, and he created what’s now called the Kardashev Scale, a way of measuring the sophistication of civilizations.

Level one: a civilization that has harnessed the available power of its own planet. Level two: harnessed the power of its nearest star. Level three: harnessed the power of its galaxy. We’re not even a one. We’re maybe a 0.4. We’re primitive.

There’s a comedian on TikTok named Vinny Thomas who does this great bit about humanity being interviewed by some intergalactic HR person for admission into the larger club of civilizations. We’re bombing the interview. “Have you colonized any other worlds?” No. “What about Mars? It’s right down the street.”

This gets at something Enrico Fermi famously asked while building nuclear weapons during the Manhattan Project. At lunch with colleagues, he’d talk about the math: so much space, so many stars. Where is everybody? The Fermi Paradox has been kicked around for decades, but the solution I find most compelling came from European researchers: It’s not that we’re alone. It’s that even if other civilizations exist across the vastness of the universe, they don’t exist at the same time as us.

The universe isn’t just unimaginably large. It’s also unimaginably old. We’re a fraction of an instant in its history — a match flare struck in the darkness. The idea that two matches would happen to be lit at the same moment, such that they’d see each other’s light in all that vastness? Ludicrous.

Here’s how alone we are. The Kepler telescope searched for exoplanets — planets with the right ratio of size and distance from their star to potentially support life. The closest one to us is Proxima Centauri b, 4.2 light-years away. That’s right down the block in universal terms. The news coverage at the time was breathless: we might go there someday!

I was one of those breathless reporters. It felt like a civilizational shift! But then I began asking about the distances involved, and that’s where the story fell off the front page. At the fastest speed we can get a rocket to travel, it turns out it would take 2,000 human generations to reach Proxima Centauri b. That’s 200,000 years of travel. Modern humans have only been around for 200,000 years. Getting to that planet would mean bottling up the entirety of human history, jamming it into a tube, and sending it off into the unknown.

We’re not doing that, whatever Elon Musk tells you. We are on the generation ship right now. This is it. Planet Earth.

Astronauts talk about the overview effect — this euphoric epiphany that grips them when they see Earth from space. They come back describing the specialness of life here, how incredibly fragile and precious this delicate little vessel is.

And so I think about how much we’re lying to each other and being angry at one another at the behest of companies that profit from it, killing people for objecting to political decisions, taking people from safety to harm to remain in power — all these sins we’re committing in the face of the vastness of the universe, on this fragile little speck.

We’re a match flare. We get this brief moment. Let’s make it count.
This is a massive week for anyone who’s been watching Big Tech’s impact on kids. Internal documents from Meta, Google, YouTube, Snapchat, and TikTok are being made public as part of major lawsuits, and what they reveal is damning.

Two themes emerge. First: the business value of kids. A 2020 Google presentation literally says “solving kids is a massive opportunity.” An internal Facebook email from 2016 identifies the company’s top priority as “total teen time spent.” These companies clearly saw children as a pipeline of new users to be captured.

Second: they knew about the harm. An Instagram internal study from 2018 documented that “teens weaponize Instagram features to torment each other” and that “most participants regret engaging in conflicts.” TikTok’s own strategy documents admit the platform “is particularly popular with younger users who are particularly sensitive to reinforcement and have minimal ability to self-regulate.” YouTube identified “late night use, heavy habitual use, and problematic content” as root causes of harm.

They knew.

As I discuss here, I want this moment to establish a new legal framework in America — one that recognizes behavioral harm the same way we recognize physical and financial harm. We’ve done it before with tobacco. We can do it again with social media. And this might be the beginning.
I spent this week talking to bank tellers at Wells Fargo for a special report over at Hard Reset, and what they told me should alarm anyone who thinks their job is safe from AI.

The bank has cut 65,000 jobs since 2019. CEO Charles Scharf just told investors more cuts are coming — permanently. Meanwhile, profits are soaring. Credit card accounts up 20 percent, auto lending up 19 percent, investment banking fees up 14 percent. Fewer people, more money.

Branch employees describe what’s happening on the ground: three tellers became two, then one. Lines getting longer. Pressure mounting. Work that used to require human judgment now gets handled by an app, with AI guiding and surveilling every interaction. The jobs aren’t disappearing — they’re just getting piled onto fewer bodies, all of them working “at-will” with zero protection.

And that’s why, for the first time in American history, bank tellers at a national institution are unionizing. Twenty-eight Wells Fargo branches across 14 states have voted to join the Communications Workers of America. Banking was always the stable, boring job that didn’t need a union. That deal is broken.

Here’s what gutted me: the workers getting hit hardest are women without college degrees, especially women from Black and brown communities. Retail banking was a reliable path to middle-class stability for those folks. Now those jobs are being automated away, and as one banker pointed out, the money saved flows straight up to an overwhelmingly white, male executive class.

This is the forecast. AI isn’t coming for jobs in some distant future. It’s here. It’s seeping into white-collar work that we assumed to be safe. And the only people who know it are the ones already being forced out the door.

Read the full investigation at Hard Reset.
Every January, the world’s wealthiest decision-makers descend on the World Economic Forum in Davos, and we’re told this is where the future gets negotiated. This year, while the world talks of Greenland, geopolitics, tariffs, and other surreal headlines coming out of the Alps — I’m thinking about the social dynamics on the literal streets of that mountain town.

Like any professional convening that draws the powerful, Davos functions less like a sober policy conference and more like a global high-school reunion, complete with insecurity, status anxiety, and a desperate fear of missing out. And nowhere is that clearer than in the party scene. Big tech and AI companies are throwing the most lavish, impossible-to-get-into events, and global leaders and their staffers are lining up — literally — to get inside.

A sharp New York Times piece captures this perfectly: initiatives focused on gender equity and public good sit empty, while neon-lit crypto lounges and AI cocktail hours pulse with attention. That imbalance matters. Parties shape conversations. Conversations shape priorities. Priorities shape policy.

I’ve seen this dynamic before at places like Aspen, SXSW, CES — and it always works the same way. The room that feels important becomes the room that is important, regardless of what’s actually being said inside it.

The unsettling part is this: these companies now wield the resources and influence of nation-states. When they dominate Davos socially, they dominate it politically. And that should worry anyone who still believes that regulation, caution, or democratic deliberation might matter in the age of AI.
I’ve been watching robots fall over for a long time.

About a decade ago, I stood on a Florida speedway covering a DARPA robotics competition where machines failed spectacularly at things like opening doors and climbing stairs. It was funny, a little sad, and a reminder of just how hard it is to automate human behavior.

Fast-forward to CES this week, and the joke’s over.

Humanoid robots are no longer pitching sideways into the dirt. They’re lifting, carrying, improvising, and — according to companies like Hyundai — heading onto American factory floors by 2028. These machines aren’t just pre-programmed arms anymore. Thanks to AI, they can understand general instructions, adapt on the fly, and perform tasks that once required human judgment.

The pitch from executives like Hyundai’s CEO is reassuring: robots won’t replace humans, they’ll “work for humans.” They’ll handle the dangerous, repetitive jobs so people can move into higher-skilled roles.

Labor unions hear something else entirely.

For many workers, especially in manufacturing, these are some of the last stable, well-paying jobs that don’t require a college degree. And no one is voting on whether those jobs disappear. There’s no democratic process weighing the tradeoffs. We’re just sliding, quietly, toward a future where efficiency outruns consent.

What troubles me most isn’t the technology itself. It’s the assumption baked into it — that if people are being worked like robots, the solution isn’t to make work more humane, but to replace the people.

That’s not inevitability. That’s a choice. And right now, it’s being made without us.
It’s a very weird Monday back from the holidays. While most of us were shaking off jet lag and reminding ourselves who we are when we’re not sleeping late and hanging with family, the world woke up to a piece of news this weekend that showed no one in power learned a goddamn thing in history class: the United States has rendered Venezuela’s president to New York, and powerful people are openly fantasizing about “fixing” a broken country by taking control of its oil.

This isn’t a defense of Nicolás Maduro. He presided over the destruction of a nation sitting on the world’s largest proven oil reserves. Venezuela’s state now barely functions beyond preserving its own power. The Venezuelans I’ve spoken with have a wide variety of feelings about an incompetent dictator being arrested by the United States.

But anyone who has read any history knows that oil grabs end in financial disaster. So when I hear confident talk about oil revenues flowing back to the U.S., I don’t hear a plan. I hear the opening chapter of a time-honored financial tragedy that’s been repeated again and again, even in our lifetimes.

Let’s put aside the moral horror of military invasion and colonial brutality, and just focus on whether the money ever actually flows back to the invader. Example after example shows it doesn’t: Iraq was supposed to stabilize energy markets. Instead, it delivered trillions in war costs, higher deficits, and zero leverage over oil prices. Britain’s attempt to hang onto the Suez Canal ended with a humiliating retreat, an IMF bailout, and the end of its time as a superpower. France’s war in Algeria collapsed its government. Dutch oil extraction in Nigeria boomeranged back home as lawsuits, environmental liability, and reputational ruin.

Oil empires all make the same mistake: they think they can nationalize the upside while outsourcing the risk. In reality, profits stay local or corporate. Costs always come home. And we’re about to learn it all over again.

Read more at TheRipCurrent.com.
Happy New Year! I’ve been off for the holiday — we cranked through a bake-off, a dance party, a family hot tub visit, and a makeshift ball drop in the living room of a snowy cabin — and I’m feeling recharged for (at least some portion of) 2026. So let’s get to it.

I woke to reports that “safeguard failures” in Elon Musk’s Grok led to the generation of child sexual exploitative material (Reuters) — a euphemism that barely disguises how awful this is. I was on CBS News to talk about it this morning, but I made the point that the real question isn’t how did this happen? It’s how could it not?

AI systems are built by vacuuming up the worst and best of human behavior and recombining it into something that feels intelligent, emotional, and intimate. I explored that dynamic in The Loop — and we’re now seeing it play out in public, at scale.

The New York Times threw a question at all of us this morning: Why Do Americans Hate AI? (NYT). One data point surprised me: as recently as 2022, people in many other countries were more optimistic than Americans when it came to the technology. Huh! But the answer to the overall question seems to signal that we’ve all learned something from the social media era and from the recent turn toward a much more realistic assessment of technology companies’ roles in our lives: For most people, the benefits are fuzzy, while the threats — to jobs, dignity, and social stability — are crystal clear.

Layer onto that a dated PR playbook (“we’re working on it”), a federal government openly hostile to regulation, and headlines promising mass job displacement, and the distrust makes a lot of sense.

Of course, this is why states are stepping in. The rise of social media and the simultaneous, correlated crises in political discord, health misinformation, and depression rates left states holding the bag, and they’re clearly not going to let that happen again. 
California’s new AI laws — addressing deepfake pornography, AI impersonation of licensed professionals, chatbot safeguards for minors, and transparency in AI-written police reports — are a direct response to the past and the future.

But if you think the distaste for AI’s influence is powerful here, I think we haven’t even gotten started in the rest of the world. Here’s a recent episode that has me more convinced of it than ever: a stadium in India became the scene of a violent protest when Indian football fans who’d paid good money for time with Lionel Messi were kept from seeing the soccer star by a crowd of VIPs clustered around him for selfies. The resulting (and utterly understandable) outpouring of anger made me think hard about what happens when millions of outsourced jobs disappear overnight. I think those fans’ rage at being excluded from a promised reward, bought with the money they work so hard for, is a preview.

So yes — Americans distrust AI. But the real question is how deep those feelings go, and how much unrest this technology is quietly banking up, worldwide. That’s the problem we’ll be reckoning with all year long.
Okay, honest admission here: I don’t fully know what I think about this topic yet. A podcast producer (thanks Nancy!) once told me “let them watch you think out loud,” and I’m taking her advice to heart — because the thing I’m worried about is already happening to me.

Lately, I’ve been leaning hard on AI tools, God help me. Not to write for me — a little, sure, but for the most part I still do that myself — but to help me quickly get acclimated to unfamiliar worlds. The latest unfamiliar world is online marketing, which I do not understand AT ALL but now need to master to survive as an independent journalist. And here’s the problem: the advice these systems give isn’t neutral. First, it’s not really “advice” — it’s just statistically relevant language regurgitated as advice. Second, because it vacuums up language wherever it can find it, its suggestions come with online values baked in. I know this — I wrote a whole fucking book about it — but I lose track of it in my desperation to learn quickly.

I’m currently trying to analyze who it is that follows me on TikTok, and why, so I can try to port some of those people (or at least those types of people) over to Substack and YouTube, where one can actually make a living filing analysis like this. One of the metrics I was told to prioritize? Disagreement in the comments. Not understanding, learning, clarity, the stuff I’m after in my everyday work. Fighting. Comments in which people want to argue with me are “good,” according to ChatGPT. Thoughtful consensus? Statistically irrelevant.

Here’s the added trouble. It’s one thing to read that and filter out what’s unhelpful. It’s another thing to do so in a world where all of us are supposed to pretend we had this thought ourselves. AI isn’t just helping us work faster. It’s quietly training us to behave differently — and to hide how that training happens. 
We’re all pretending this output is “ours,” because the unspoken promise of AI right now is that you can get help and still take the credit. (I believe this is a fundamental piece of the marketing that no one’s saying out loud, but everyone is implying.) And the danger isn’t just dishonesty toward others. It’s that we start believing our own act.

There’s a huge canon of scientific literature showing that lying about a thing causes us to internalize the lie over time. The Harvard psychologist Daniel Schacter wrote a sweeping review of the science in 1999 titled “The Seven Sins of Memory,” in which he synthesized a range of studies showing that memory is a belief built on prior beliefs, not a perfect replay of reality, and that repetition and suggestion can implant or strengthen false beliefs that feel subjectively true. Throw us enough ideas and culturally condition us to hide where we got them, and eventually we’ll come to believe they were our own. (And to be clear, I knew a little about the reconstructive nature of memory, but ChatGPT brought me Schacter’s paper. So there you go.)

What am I suggesting here? I know we’re creating a culture where machine advice is passed off as human judgment. I don’t know whether the answer is transparency, labeling, norms, regulation, or something else entirely. So I guess I’m starting with transparency.

In any event, I do know this: lying about how we did or learned something makes us less discerning thinkers. And AI’s current role in our lives is built on that lie.

Thinking out loud. Feedback welcome. Thanks!
Here’s one I truly didn’t see coming: the Trump administration just made the most scientifically meaningful shift in U.S. marijuana policy in years.

No, weed isn’t suddenly legal everywhere. But moving marijuana from Schedule I — alongside heroin — to Schedule III is a very big deal. That single bureaucratic change cracks open something that’s been locked shut for half a century: real research.

For years, I’ve covered the strange absurdities of marijuana science in America. If you were a federally funded researcher — which almost every serious scientist is — you weren’t allowed to study the weed people actually use. Instead, you had to rely on a single government-approved grow operation producing products that didn’t resemble what’s sold in dispensaries. As a result, commercialization raced ahead while our understanding lagged far behind.

That’s how we ended up with confident opinions, big business, and weak data. We know marijuana can trigger severe psychological effects in a meaningful number of people. We know it can cause real physical distress for others. What we don’t know — because we’ve blocked ourselves from knowing — is who’s at risk, why, and how to use it safely at scale.

Meanwhile, the argument that weed belongs in the same category as drugs linked to violence and mass death has always collapsed under scrutiny. Alcohol, linked to more than 178,000 deaths per year in the United States alone, does far more damage, both socially and physically, yet sits comfortably in legal daylight.

If this reclassification sticks, the excuse phase is over. States making billions from legal cannabis now need to fund serious, independent research. I didn’t expect this administration to make a science-forward move like this — but here we are. Here’s hoping we can finish the job and finally understand what we’ve been pretending to regulate for decades.

Covering earlier regulatory changes for Al Jazeera in 2016...
The United States has a split personality when it comes to AI data centers. On one side, tech leaders (and the White House) celebrate artificial intelligence as a symbol of national power and economic growth. On the other, politicians from Bernie Sanders to Ron DeSantis point out that when the technology shows up in our towns, it drains water, drives up electricity prices, and demands round-the-clock power like an always-awake city.

Every AI prompt—whether it’s wedding vows or a goofy image—fires up racks of servers that require enormous amounts of electricity and water to stay cool. The result is rising pressure on local water supplies and power grids, and a wave of protests and political resistance across the country. I’m covering that in today’s episode, and you can read the whole report over at Hard Reset.
For most of modern history, regulation in Western democracies has focused on two kinds of harm: people dying and people losing money. But with AI, that’s beginning to change.

This week, the headlines point toward a new understanding that more is at stake than our physical health and our wallets: governments are starting to treat our psychological relationship with technology as a real risk. Not a side effect, not a moral panic, not a punchline to jokes about frivolous lawsuits. Increasingly, I’m seeing lawmakers recognize it as a core threat.

There is, for instance, the extraordinary speech from the new head of MI6, Britain’s foreign intelligence agency. Instead of focusing only on missiles, spies, or nation-state enemies, she warned that AI and hyper-personalized technologies are rewriting the nature of conflict itself — blurring peace and war, state action and private influence, reality and manipulation. When the person responsible for assessing existential threats starts talking about perception and persuasion, the danger has moved from academic hand-wringing to something real.

Then there’s the growing evidence that militant groups are using AI to recruit, radicalize, and persuade — often more effectively than humans can. Researchers have now shown that AI-generated political messaging can outperform human persuasion. That matters, because most of us still believe we’re immune to manipulation. We’re not. Our brains are programmable, and AI is getting very good at learning our instruction set.

That same playbook is showing up in the behavior of our own government. Federal agencies are now mimicking the president’s incendiary online style, deploying AI-generated images and rage-bait tactics that look disturbingly similar to extremist propaganda. It’s no coincidence that Oxford University Press crowned “rage bait” its word of the year. Outrage is no longer a side effect of the internet — it’s a design strategy.

What’s different now is the regulatory response.
A coalition of 42 U.S. attorneys general has formally warned AI companies about psychologically harmful interactions, including emotional dependency and delusional attachment to chatbots and “companions.” This isn’t about fraud or physical injury. It’s about damage to people’s inner lives — something American law has traditionally been reluctant to touch.

At the same time, the Trump administration is trying to strip states of their power to regulate AI at all, even as states are the only ones meaningfully responding to these risks. That tension — between lived harm and promised utopia — is going to define the next few years.

We can all feel that something is wrong. Not just economically, but cognitively. Trust, truth, childhood development, shared reality — all of it feels under pressure. The question now is whether regulation catches up before those harms harden into the new normal.

Mentioned in This Article:

Britain caught in ‘space between peace and war’, says new head of MI6 | The Guardian
https://www.theguardian.com/uk-news/2025/dec/15/britain-caught-in-space-between-peace-and-war-new-head-of-mi6-warns

Islamic State group and other extremists are turning to AI | AP News
https://apnews.com/article/islamic-state-group-artificial-intelligence-deepfakes-ba201d23b91dbab95f6a8e7ad8b778d5

‘Virality, rumors and lies’: US federal agencies mimic Trump on social media | The Guardian
https://www.theguardian.com/us-news/2025/dec/15/trump-agencies-style-social-media

US state attorneys-general demand better AI safeguards | Financial Times
https://www.ft.com/content/4f3161cc-b97a-496e-b74e-4d6d2467d59c
President Trump has signed a sweeping executive order aimed at blocking U.S. states from regulating artificial intelligence — arguing that a “patchwork” of laws threatens innovation and America’s global competitiveness. But there’s a catch: there is no federal AI law to replace what states have been doing.

In this episode, I break down what the executive order actually does, why states stepped in to regulate AI in the first place, how this move conflicts with public opinion, and why legal experts believe the fight is headed straight to the courts.

This isn’t just a tech story. It’s a constitutional one.

Read the full analysis in my weekly column at HardResetMedia.com.
This week I sat down with the woman who permanently rewired my understanding of human nature — and now she’s turning her attention to the nature of the machines we’ve gone crazy for.

Harvard psychologist Mahzarin Banaji coined the term “implicit bias” and has spent decades researching the blind spots we don’t admit even to ourselves. The work that blew my hair back shows how prejudice has and hasn’t changed since 2007. Take one of the tests here — I was deeply disappointed by my results. More recently, she’s been running new experiments on today’s large language models.

What has she learned? They’re far more biased than humans — sometimes two or three times as biased. They show shocking behavior, like a model declaring “I am a white male” or demonstrating literal self-love toward its own company. And as their most raw and objectionable responses are papered over, our ability to understand just how prejudiced they really are is being whitewashed, she says.

In this conversation, Banaji explains:

Why LLMs amplify bias instead of neutralizing it
How guardrails and “alignment” may hide what the model really thinks
Why kids, judges, doctors, and lonely users are uniquely exposed
How these systems form a narrowing “artificial hive mind”
And why we may not be mature enough to automate judgment at all

Banaji is working at the very cutting edge of the science, and delivers a clear and unsettling picture of what AI is amplifying in our minds.

00:00 — AI Will Warp Our Decisions
Banaji on why future decision-making may “suck” if we trust biased systems.

01:20 — The Woman Who Changed How We Think About Bias
Jake introduces Banaji’s life’s work charting the hidden prejudices wired into all of us.

03:00 — When Internet Language Revealed Human Bias
How early word-embedding research mirrored decades of psychological findings.

05:30 — AI Learns the One-Drop Rule
CLIP models absorb racial logic humans barely admit.
07:00 — The Moment GPT Said “I Am a White Male”
Banaji recounts the shocking early answer that launched her LLM research.

10:00 — The Rise of Guardrails… and the Disappearance of Honesty
Why the cleaned-up versions of models may tell us less about their true thinking.

12:00 — What “Alignment” Gets Fatally Wrong
The Silicon Valley fantasy of “universal human values” collides with actual psychology.

15:00 — When AI Corrects Itself in Stupid Ways
The Gemini fiasco, and why “fixing” bias often produces fresh distortions.

17:00 — Should We Even Build AGI?
Banaji on why specialized models may be safer than one general mind.

19:00 — Can We Automate Judgment When We Don’t Know Ourselves?
The paradox at the heart of AI development.

21:00 — Machines Can Be Manipulated Just Like Humans
Cialdini’s persuasion principles work frighteningly well on LLMs.

23:00 — Why AI Seems So Trustworthy (and Why That’s Dangerous)
The credibility illusion baked into every polished chatbot.

25:00 — The Discovery of Machine “Self-Love”
How models prefer themselves, their creators, and their own CEOs.

28:00 — The Hidden Line of Code That Made It All Make Sense
What changes when a model is told its own name.

31:00 — Artificial Hive Mind: What 70 LLMs Have in Common
The narrowing of creativity across models and why it matters.

34:00 — Why LLM Bias Is More Extreme Than Human Bias
Banaji explains effect sizes that blow past anything seen in psychology.

37:00 — A Global Problem: From U.S. Race Bias to India’s Caste Bias
How Western-built models export prejudice worldwide.

40:00 — The Loan Officer Problem: When “Truth to the Data” Is Immoral
A real-world example of why bias-blind AI is dangerous.

43:00 — Bayesian Hypocrisy: Humans Do It… and AI Does It More
Models replicate our irrational judgments — just with sharper edges.

48:00 — Are We Mature Enough to Hand Off Our Thinking?
Banaji on the risks of relying on a mind we didn’t design and barely understand.

50:00 — The Big Question: Can AI Ever Make Us More Rational?