Blood in the Machine: The Podcast

A podcast about Silicon Valley, AI, and the future of work. <br/><br/><a href="https://www.bloodinthemachine.com?utm_medium=podcast">www.bloodinthemachine.com</a>

How to dis-enshittify the world, with Cory Doctorow

Does Cory Doctorow even need an introduction at this point? If you spend any amount of time at all online, then you’ve encountered his work, his ideas, his words. But the ultra-prolific science fiction writer, digital rights activist, and coiner of “enshittification,” the term that’s become universal shorthand for the degradation of the internet (and, to an extent, everything else), has been especially ubiquitous lately. His book-length treatment of the enshittification thesis was just published by FSG last month, and he’s been on the press-tour warpath. And good! “Enshittification” is vintage Doctorow—it’s sharp, frisky, freewheeling, and erudite; call it elevated, book-length blogging, perhaps. It’s also going to be the book I recommend to folks interested in getting into Cory’s nonfiction work; it so ably ties together the many strands of his thought and his various crusades into a manifesto of sorts. And, naturally, it persuasively makes the case for how big tech and its monopolistic platforms have conquered the internet, systematically siphoned it of vitality and utility to placate shareholders, and left in their wake an enshittified husk of what the web, and the world, ought to be.

So naturally, I wanted to have Cory on Blood in the Machine: The Podcast to discuss it all. Cory has given approximately 4,000 interviews on this book and topic at this point (I recommend the New York Times Magazine profile for a look at his life and career, as well as his backyard, where he throws some great parties). So I thought I’d pick his brain specifically on how he thinks workers—tech workers and otherwise—can help turn the tide of enshittification. My instinct was to call this “deshittifying” the web, but Cory prefers “dis-enshittification,” which, fair enough. Regardless, we cover a lot of fertile ground, and field some good questions and comments from the chat (thanks to everyone who popped in while we were live).
Cory and I don’t agree on everything—copyright law in particular is a point of contention—but there is a lot of food for thought here, if I do say so myself. We cover why tech workers are both woefully under-organized and potentially powerful vectors for change, AI, sectoral bargaining, and more. I always have a good time chatting with Cory; I hope you enjoy our conversation, too.

Which reminds me: If you do enjoy chats and recordings like this, consider chipping in a few bucks a month so I can continue doing them. It takes time and energy to research and write these posts, to schedule interviews, and to find the adequate angle at which to prop up my phone on a stack of books on my desk so I can record the thing. Thanks to all those who already pitch in; you make the whole BLOOD project possible.

Man, this was a busy week. On top of my chat with Cory, I also joined host Alexis Madrigal and fellow tech writer Charlie Warzel on KQED’s Forum; you can listen to that here. I also joined Lauren Goode and Michael Calore on WIRED’s Uncanny Valley podcast to discuss the AI bubble piece I wrote for the magazine last month. And I spoke with CBC’s Nora Young about the invisible labor that makes AI systems possible; that article is out here.

Okay! That should bring us about up to speed. Hope everyone had a solid weekend and maybe even found a little respite. I, for one, I might add, had a fantastic weekend. The conference on New Luddism at Columbia was a smashing success. I’ll have more to share soon, but the event brought together academics, journalists, labor organizers, policy heads, and student activists. I met so many folks doing great work in the space, the conversations were stimulating, and it was quite possible, for a few hours, to glimpse a future where our systems are no longer fueled by relentless worker exploitation and surveillance, or beholden to big tech and the oligarchs who operate it.
I’ll write a longer debrief on all of the above soon, but for now: great stuff. A shout-out to the organizers for knocking it out of the park.

On Saturday, Paris Marx, Edward Ongweso Jr., Jathan Sadowski, and I held a Luddite Tribunal before an absolutely packed house at the Starr Bar in Brooklyn. More on that before long, too, but it was just such good fun. It was kind of like the conference, in fact, just with more hammers. Thanks so much to everyone who came out, joined the chaos, and brought tech to submit for judgment.

I’ve got a lot of things cooking for this week, so I’ll end here for now. More soon, thanks for reading—and hammers up.

Get full access to Blood in the Machine at www.bloodinthemachine.com/subscribe

11-10
01:07:43

Understanding the tech oligarchy and its gilded rage with Jacob Silverman

Happy Halloween, all. I got into the spirit by catching up with the horror flicks I’d been sleeping on (28 Years Later was surprisingly good) and, more importantly, by chatting with the great tech journalist Jacob Silverman, author of GILDED RAGE: ELON MUSK AND THE RADICALIZATION OF SILICON VALLEY, a book about some very scary people who hold immense sway over our politics and our lives. Before we get into that, for anyone interested, I wanted to bump my list of best-ever ‘luddite horror’ films I wrote up last year. It was a fun one, I think.

OK! Onwards. 2025 has in many ways been the year of the tech oligarch. It marks the moment that many of Silicon Valley’s wealthiest and most powerful figures openly embraced an antidemocratic regime and publicly pledged support to authoritarianism. It feels like a lifetime ago, but I for one will not forget the image of tech CEOs like Mark Zuckerberg and Jeff Bezos standing in the front row and applauding at Donald Trump’s inauguration, or their multiple trips and photo ops at Mar-a-Lago, or Elon Musk’s central role in the early months of the administration. They’re still there, too, even if they aren’t making as many headlines. David Sacks, Elon Musk’s compatriot, is the White House’s AI and crypto czar. Executives from the VC firm Andreessen Horowitz, or a16z, are in advisory roles. JD Vance, whose mentor was Peter Thiel, owes his career to Valley operators.

And not only are they in power, but they’re angry. They practice a politics of pitched persecution and extreme resentment that can be baffling to those of us who’d kill to simply not have to worry about paying the rent for a year. As Silverman puts it in his book, these tech titans “had the world at their fingertips and they couldn’t stand the touch.”

So *why* are they so mad? What are these world-beating centibillionaires so furious about? Why is it that they “swung right” so hard in the 2020s, if that’s in fact what happened? How did we get here, in other words?
And what can we do about it? Silverman’s book tackles all of the above and more. It argues that we shouldn’t view the tech billionaires as a collection of eccentric elites, but as a class; a group that, whether they publicly present as liberals or conservatives, shares a distinct set of ambitions and goals: slashing regulations and oversight, lowering taxes, extracting value from the state, and concentrating power. We get into all of the above in our conversation, which you can check out here or wherever you get your podcasts.

This is officially the second installment of BLOOD IN THE MACHINE: The Podcast, after the inaugural episode with Karen Hao a while back. There are still warts aplenty; for instance, I thought I would be able to edit the audio and video after I uploaded it, but no! Not allowed in Substack’s player. There’s only a weird “AI enhancement” option that is supposed to trim the dead air but instead cuts right to me talking about some technical difficulties. Alas! Future episodes will be seamlessly intro’d, I’m sure—like the one next week with Cory Doctorow, which will be recorded on Tuesday, November 4th at 7 PM EST / 4 PM PST for those who’d like to join the live chat.

Speaking of live chatting, thanks to everyone who joined the fray with Silverman, and to all those who left questions and thoughts in the comments; we had time to answer some at the end. Let me know in the comments what you think of this newish Blood in the Machine audio enterprise, and whether you’d like me to keep doing these. I’ll add the obligatory note here that it takes time and resources to plan and prep for such things, and I’d appreciate your support so I can continue. Thanks as always to everyone who already chips in.
Finally, I’ve been doing a bunch of Spanish- and Italian-language interviews since BLOOD: The Book just published in those languages, and I got one particularly fun request about Frankenstein—which figures into the book, as BITM readers who’ve stuck with it till the end will know—in light of the release of the new Guillermo del Toro film. (Which I cannot wait to see, by the by.) A reporter from El País asked me the following, and I thought I’d share my answer here for anyone interested.

EL PAÍS: What is your interpretation of the novel Frankenstein in these times of AI, of the quest to prolong life, and in which powerful technologists such as Thiel, Musk, Sam Altman, Zuckerberg, and Bezos permeate every aspect of our working, social, and personal lives, as well as the future to come?

ME: I could write a whole essay on this question! Frankenstein is just as relevant today as it ever was. A story about a careless founder recklessly unleashing a dangerous new technology without considering who might suffer as a result? I could just as easily be describing Sam Altman or Mark Zuckerberg as Victor Frankenstein. It’s a testament to Mary Shelley’s powers of observation about how men attempt to harness technology for their own power and profit that her work is just as cutting in the age of AI as it was in the age of the automated loom.

Scholars have read Shelley’s treatment of the monster—who is neglected, misunderstood, and intelligent—as a comment on the way that industrialists like Richard Arkwright treated the working people of the day. That is, they were willing to make people suffer in their experiments with using technology, and vulnerable human workers, to maximize production and profit. Yet again, it’s scarcely different today: With AI, Altman, Zuckerberg, and Elon Musk are running large-scale experiments on society while aiming to deskill and immiserate workers, with regard for little but how it all grows their power and lines their pockets.
Some things never change. Pretty relevant to my chat with Mr. Silverman, if I do say so myself.

OK! That’s it for today, thanks everyone. Once again: Happy Halloween, and hammers up.

Get full access to Blood in the Machine at www.bloodinthemachine.com/subscribe

10-31
01:06:01

Dismantling the Empire of AI with Karen Hao

Years before OpenAI became a household name, Karen Hao was one of the very first journalists to gain access to the company. What she saw when she did unsettled her. Despite a name that signaled transparency, executives were elusive, the culture secretive. Despite publicly heralding a mission to build AGI, or artificial general intelligence, company leadership couldn’t really say what that meant, or how they defined AGI at all. The seeds of a startup soon to be riven by infighting, rushing to be first to market with a commercial technology and a powerful narrative, and led by an apparently unscrupulous leader, were all right there. OpenAI deemed Hao’s resulting story so negative it refused to speak with her for three years. More people should have read it, probably.

Since then, OpenAI has launched DALL-E and ChatGPT, amassed a world-historic war chest of venture capital, and set the standard for generative AI, the definitive technological product of the 2020s. And Hao has been in the trenches, following along, investigating the company every step of the way. The product of all that reportage, her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is now officially out. It’s excellent. In fact, I would go so far as to say that if you were looking to understand modern Silicon Valley, the AI boom, and the impact of both on the wider world by reading just one book, that book should be Empire of AI.

So, given that it could not be more in the Blood in the Machine wheelhouse, I invited Hao to join me for the first-ever BITM livestream, which we held yesterday afternoon, to discuss its themes and revelations. It went great, imo.
I wasn’t sure how many folks would even drop by, as I’d never tried a livestream here before, but by the end there were hundreds of you in the room, leaving thoughtful comments and questions, stoking a great discussion.

This is also the first BITM podcast—though perhaps I got a little overzealous; the audio quality isn’t always great, and if I want to get official about it, I think I’d have to edit in a new intro, as I was kind of all over the place. We’ll figure this stuff out eventually, so thanks for bearing with me. But the conversation wound up being so good that in addition to reposting the video above, I transcribed it below, too, so you can read it as a Q+A. Forgive any typos. (I didn’t transcribe the audience Q+A portion.)

As always, all of this work is made possible *100%* by paid supporters of this newsletter. If you value work like this—in-depth interviews with folks like Hao, who are fighting the good fight—please consider becoming a paid subscriber. Many human-generated thanks to all those who already do.

BLOOD IN THE MACHINE: Okay, greetings and welcome to the very first Blood in the Machine Multimedia Spectacular, with Karen Hao. Karen is a tech journalist extraordinaire. She’s been a reporter for the MIT Technology Review and the Wall Street Journal, and you currently write for the Atlantic, as well as other places. You lead the Pulitzer Center’s AI Spotlight Series, where you train journalists around the world to cover AI. And after reading this book, I’m so glad it is you doing the training and not certain other journalists in this ecosystem—we won’t name names. So congratulations on all of that. Did I get it all? Did I get all the accolades?

Karen Hao: Yes [laughs].

BLOOD IN THE MACHINE: Okay, perfect. But most importantly, for our purposes today, Karen has written this book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. And it’s out this week. And let me just say this bluntly.
This is not the interview you want to come to for hardballs and gotchas on Karen, because I just absolutely love this book. I personally hoped somebody would write this book. I think it’s just a real feat of reportage and cultural analysis and economic and social critique. It’s a panoramic look, not just at the leading generative AI company, but at the last five to ten years of some of the most important technological, economic, and cultural headwinds in all of tech, as well as how they’re impacting, reshaping, and injuring communities around the world. If you have time to read one book about AI and its global and political implications, then this is it. Honestly, this is it. And we’ll dig into why in just a second. I can’t recommend it enough. Okay, end of effusive praise. Karen, thank you so much for joining.

Karen Hao: Thank you so much for having me, Brian. And it’s an honor to be part of this first livestream. I religiously read all of your issues, and it is also so inspirational for me to see the work that you’re doing. So thank you.

BLOOD IN THE MACHINE: Well, thank you. And I look forward to diving on in. So let us do so right now. Let’s just start with the title. This book is called Empire of AI, not, say, “OpenAI: The Company That Changed Everything.” It very explicitly uses this empire formation, which I think really does put the entire story to come in quite a useful lens. So why is that? Why is it called Empire of AI? Why does this book about OpenAI begin with this empire framing?

Karen Hao: Yeah, so the thing that I have come to realize over reporting on OpenAI and AI for the last seven years is that we need to start using new language to really capture the full scope and magnitude of the economic and political power that these companies like OpenAI now have. And what I eventually concluded was that the only real word that captures all of that is empire. These are new forms of empire, these AI companies.
And the reason is, in the long history of European colonialism, empires of old had several features. First, they laid claim to resources that were not their own, and they would create rules that suggested those resources were in fact their own. They exploited a lot of labor, as in they didn’t pay many workers, or paid them very, very little, for the labor that would fortify the empire. They competed with one another in this kind of moralistic way, where the British Empire would say they were better than the French Empire, or the Dutch Empire would say they were better than the British Empire, and all of this competition ultimately accelerated the extraction and the exploitation, because their empire alone had to be the one at the head of the race, leading the world towards modernity and progress. The last feature of empire is that they all had civilizing missions, and, whether it was rhetoric or whether they truly believed it, they would fly this banner of: we are plundering the world because this is the price of bringing everyone to the future.

And empires of AI have all of these features. They are also laying claim to resources that are not their own, like the data and the work of artists, writers, creators. And they also design rules to suggest that it actually is their own. Oh, it’s all just on the internet. And under copyright law it’s fair use. They also exploit a lot of labor around the world, in that they do not pay very well the contractors who are literally working for these companies to clean up their models and to do all the labeling and preparation of the data that goes into their models. And they are ultimately creating labor-automating technology. So they’re exploiting labor on the other end of the AI development process as well, in the deployment of these models, where OpenAI literally defines AGI as highly autonomous systems that outperform humans at most economically valuable work.
So their technologies are suppressing the ability of workers to mobilize and demand more rights. And they do it in this aggressive race where they’re saying, there’s a bad guy, we’re the good guy, so let us continue to race ahead and be number one.

And one of the things that I mention in the book is that empires of old were extremely violent, and we do not have that kind of overt violence with empires of AI today. But we need to understand that modern-day empires will look different than empires of old, because there have been 150 years of human rights progress, and so modern-day empires will take that playbook and move it into what would be acceptable today. One of the things that I don’t put in the book itself, but that I started using as an analogy, is this: if you think about the British East India Company, they were originally a company engaging in mutually beneficial economic activity in India. And at some point there was a switch that flipped, where they gained enough economic and political leverage that they were able to start acting in their self-interest with absolutely no consequence. And that’s when they dramatically evolved into an imperial power. And they did this with the backing of the British crown. They did it with the resources of the British crown, with the license of the British crown.

And we are now at a moment (I froze this manuscript in early January, and then the Trump administration came into power) where we are literally seeing the same inflection point happening, where these companies already have such profound power and are more powerful than most governments around the world. And previously, really the only government they didn’t necessarily have complete power over was the U.S. government. And now we’ve reached that point where the Trump administration is fully backing these companies, allowing them to do whatever they want, completely frictionless.
And so we have also gotten to the point now where these companies have turned into imperial powers that can do anything in their self-interest with no material consequence.

BLOOD IN THE MACHINE: Yeah. Well said. And what more profound an example of the imperial tendencies you’re talking about than for your book to drop the same week that there’s literally a bill getting pushed through reconciliation that says ‘states can’t write any more laws about AI. We’re going to ban lawmaking around AI. This is too important.’ It really fits into that definition that you were talking about quite well, where they'l

05-20
57:05
