The Reboot Podcast

Author: Reboot


Description

Candid conversations with mission-driven technologists about how they approach their craft and careers. Find our essays and updates at joinreboot.org.

joinreboot.org
12 Episodes
It’s been approximately three years since the launch of ChatGPT vaulted “A(G)I” into public consciousness. No coincidence that, around the 2.5–3 year mark, a bunch of AI books have now hit the market… Jasmine, Jacob, Shira, and I talk through as many as we can get to in this long (sorry!) podcast. In reverse chronological order:

* The Scaling Era: An Oral History of AI, 2019–2025 by Dwarkesh Patel and Gavin Leech (October 2025)
* If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares (September 2025)
* What Is Intelligence?: Lessons from AI About Evolution, Computing, and Minds by Blaise Aguera y Arcas (September 2025)
* The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want by Emily Bender and Alex Hanna (May 2025)
* Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI by Karen Hao (May 2025)

An abridged transcript is below, or jump to the bottom of this email to get our “buy/borrow/skip” verdicts (spoiler: unfortunately, most people will probably only find around 1.5 books worth reading). As always, the audio version is more than a little spicier than the transcript. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit joinreboot.org
TL;DR: Reboot has a new mission: we are a publication by and for technologists. We are also open for pitches (and now pay $750 for newsletter essays!). Keep reading and listening for more context on how the editorial board came to this decision—this talk was first given at the Kernel 5 magazine launch in San Francisco—and for examples of the kinds of pieces we’d love to have.

Talk: By and For Technologists

Hi! My name is Jasmine, and I’m the director and cofounder of Reboot. Thanks so much for being here. Kernel launch parties are always one of my favorite parts of what we do. Online writing often doesn’t feel real until everyone shows up in physical space together.

Reboot turned five years old earlier this year, which is pretty crazy. Lots of things have changed since jessica dai and I started it as undergrads in 2020—in our lives, in the tech discourse, and in the industry writ large. Back then we were all talking about Facebook and the end of democracy, or freaking out that OpenAI wasn’t really open because they didn’t release GPT-2.

At the time, Jessica and I started Reboot because it felt urgent to articulate a vision of technology that wasn’t about total refusal or hype. We wanted pragmatic, clear-eyed optimism, and we wanted a community of fellow early-career technologists to think through hard questions with. A recognition that tech is part of our strategy for achieving the goals we want, whether reproductive rights or more fun telephone poles in our communities. In 2021, when we were putting together Kernel’s first issue, I holed up in a lodge in Asheville, North Carolina and wrote a manifesto—“Take Back the Future!”—about what a “progressive techno-optimism” could look like.

Well, a lot more people are talking about “techno-optimism” these days, and tragically not in the way that we meant. We waged a noble battle to reimagine the term, but unfortunately, Andreessen Horowitz has far more money and more Twitter followers than we do.
Now, the tech industry has followed Marc’s lead and taken a turn to the right. Log onto x.com, and you’ll find infinite e/acc memes about how everyone who mentions ethics or safety or sustainability is automatically a doomer decel. According to Marc Andreessen’s techno-optimist manifesto, if you’re getting in the way of pure acceleration and profit—no matter the reason why—you are the enemy.

And as I’ve spent more time reporting on Silicon Valley culture this year, one of the trends I’ve been most surprised and disturbed to observe is not merely a shift to the right, but the emergence of a nihilism about whether tech should serve humans at all. Here’s something I hear reasonably often: AGI is going to be so much smarter than us, so we should just hand over the reins and make them our worthy successors. If LLMs can now ace the IMO, why not make them president and CEO too? They should run the institutions, not us. Relatedly: the idea that Mars colonization or Cluely or whatever is some kind of natural, inevitable endpoint to humanity; that regardless of whether a product is something we want, there is a moral duty to bring it into existence—to enact the market’s and technological history’s will. This style of thinking is quite common among high-up people in Silicon Valley. But I think it’s low-agency and anti-human, to say the least.

Reboot’s editorial board has been talking about how our publication should position itself in this strange moment. And the forcing function came to this: How many more times do we want to repeat, “Not that kind of techno-optimism”?

I have always defined “techno-optimism” not as an uncritical belief that more technology equals more good—but rather optimism as agency, a faith that humans, as the builders of tools, can shape these incredible forces to achieve the values and goals that we define. Sand does not think until we make it.
Modern civilization has always been about finding social and technological solutions to bring out the better angels of our nature—to transcend our monkey-brains and pursue our higher values and aspirations. For Reboot’s next era, we want to re-center the human and the intentional act of creating. Technology is something we do to the world, it is something we choose, and we humans are responsible for those choices.

That leads us to a new mission: Reboot is a publication by and for technologists.

I view this as less a shift than a clarification.

In short: a technologist is anyone who exercises agency to shape technologies toward their goals. It’s a mindset, not a job title; an orientation, if you will. It includes many software engineers and founders, of course, but also makers of home-cooked apps and clever Zapier workflows, hacktivists and traffic cam artists. It’s an orientation of active play, not passive consumption. It combines the critic’s eye for spotting the flaws in a system with the artist’s or entrepreneur’s creative solutions. It’s not just posting about the problem but doing something about it. The technologist says: These systems were made once and they can be remade again. The world is a museum of passion projects. I will not accept things as they come out of the box. As Kevin wrote in his Kernel 5 editor’s note, technologists are players of infinite games.

Reboot will continue publishing essays, interviews, and other creative works by technologists. We believe in lived experience and tacit knowledge; the deep understanding that comes from the personal experience of being “close to the machine,” as Ellen Ullman described in her memoir of the same name. As editors, we’ve noticed that both hype and doom deal in vague, sweeping proclamations. Most people who believe AGI will cure cancer or start WW3 tomorrow have worked neither in medicine nor in military strategy.
Thus, we view the specificity of technologists’ experience—the fact that they know intimately where tools work versus where they don’t, and how to tweak them to work a bit better—as a potent vaccine against bad ideas.

As always, we are especially excited to work with people who are not professional writers. We want to develop ideas from practitioners: people doing stuff on the ground. Field-building manifestos, essays about projects you’ve built, and interviews (anonymous or otherwise) with the people doing the most interesting, challenging work in the space.

We are also more than doubling our newsletter pay rates, so do pitch us! Writing is not quite as lucrative as a $100 million comp package from Meta, but we hope it will be at least somewhat more fulfilling.

And again, thanks for being with Reboot, whether you’re an OG who subscribed in 2020 or a new reader who stumbled through the door today. I’m keenly aware that the market does not reward reflection on why we build what we build, which makes it all the more meaningful that you have decided to do it anyway.

Thank you to Gray Area for hosting us here—they’re an incredible art and tech venue in SF, and do lots of other great events—and to all the incredible writers and contributors to Kernel Issue 5. Have some drinks! Buy some magazines! And thank you all for coming.

We’re all super excited about this new direction—which emerged from lots of rich discussion and debate—and hope that you are too. You can pitch us here:

Thanks for being here in year five!

— Jasmine & Reboot team
Shortly after the AI 2027 report was released, my friend Saffron posted a tweet/mini-blog in response:

Looking to “accurately” predict AI outcomes is… highly counterproductive for good outcomes.

The researchers aim for predictive accuracy and make a big deal of their credentials in forecasting and research. (Although they obscure the actual research, wrapping this up with lots of very specific narrative.) This creates an intended illusion, especially for the majority of people who haven’t thought much about AI, that the near-term scenarios are basically inevitable—they claim they are so objective, and good at forecasting!

Why implicitly frame it as inevitable if they explicitly say (buried in a footnote in the “What is this?” info box) that they hope that this scenario does not come to pass? Why not draw attention to points of leverage for human agency in this future, if they *actually* want this scenario to not come to pass?

I, too, was somewhat confused about the report, to put it lightly, and wanted to talk through it together. We try to understand the report and forecasting in general, and our conversation turns out to be less of an AI 2027 hate train than we initially thought! By the end, we coalesce around three ideas: (1) the very act of prediction has an impact on the future; (2) ideally, forecasts should be empowering rather than disempowering; (3) evaluating forecasts is messy business, so the intentions of the forecasters matter.

— Jessica

Listen here on Substack (web or app), or subscribe on Apple Podcasts or Spotify. A transcript and takeaways will be published with each episode.
Mr. Beast Saying Increasingly Large Amounts of Money

By Morry Kolman

Mr. Beast Saying Increasingly Large Amounts of Money is a compilation of over 2,800 clips from 206 Mr. Beast videos. This is the abridged 12-minute version—both this and the full-length hour-long cut are also available on my YouTube and my website. The work is intended to distill the content of the most popular YouTuber in the world down to one of its core motifs: the promise of the next number being even bigger.

Mr. Beast, born Jimmy Donaldson, tries to present himself as apolitical. But when you convince 4% of the world to hit the red Subscribe button, you get the politics for free. These clips are those politics. Donaldson has often explained that the over-the-top excesses of his content—both in conspicuous consumption and even more conspicuous philanthropy—enable him to snowball money and influence that he can then leverage to make even more positive change in the world.

This framing is overwhelming. Donaldson has been in the hot seat for allegations of worker safety violations, hawking moldy knock-off Lunchables, and exploiting the poor, disabled, and destitute for views. Nothing, however, can avoid getting subsumed by the bottom line. “Is Mr. Beast good?” quickly becomes a proxy for “Should Mr. Beast's videos exist?” and if there is anything Mr. Beast does well, it is documenting the size, appreciation, and impact of his content at an incomparable rate. To argue against Donaldson is to rebuke the planting of 20 million trees, wish 100 people remained blind, and contend that those homeless people should not, actually, have been given $10,000.

Mr. Beast Saying Increasingly Large Amounts of Money is an attempt to critique him on his own terms. In their proper context, Donaldson's absurd monetary figures not only make some level of sense, but engage the viewer on an emotional and entertaining level.
Boiled down to concentrate and injected intravenously, though, they are a hypnotic experience of whiplash. It’s so easy to watch the numbers go up, increasing in opulence. At the same time, what’s actually happening—the deployment of real wealth and capital in service of making the world’s already-largest creator get even more views—is uncanny. In incessant and mesmerizing form, this video portrays a 2020s version of the American Dream. Whether through extreme challenges, complete luck, or simply being a good supporting character, the beneficiaries of these videos receive houses, cars, and shiny briefcases of cash. In the video, though, this “philanthropy” is contextualized by the spectacle of consumption around it. Donaldson gives $50,000 to teachers, then dishes out $70,000 on a golden pizza. He spends tens of thousands to blow up fireworks in the sky, and drops hundreds of thousands back down from planes. He shells out $1 million on groceries for the hungry—and wastes the same amount on lottery tickets, all for the love of content.

With his (often literal) piles of money, Mr. Beast wields the ability to change lives at will. Unfortunately, it is a power he uses indiscriminately, self-servingly, and ostentatiously. Jimmy Donaldson does not perform acts of kindness; he purchases views, and in this video we watch that transaction happen several thousand times. Mothers cry, children scream, and a guy named Mac gets buried alive. In return, he has not dipped below 100 million views in over four years.

Methodology

First, for the main source of data, I chose all Mr. Beast videos with uploaded (i.e., non-auto-generated) transcripts—a total of 229 out of 837 published videos on his flagship channel. This gave me a source of processable ground truth about where money was mentioned, and also limited the videos to those published in the last six years, which make up the majority of his meteoric rise.
Then, I downloaded the videos in 360p and scraped their transcripts for every occurrence of a dollar amount, logging each mention with its sum, video, and context in a database that I would build on top of as I nailed down the exact timing. I used those contextual timestamps to make rough clips that I fed into the open-source AI tool Whisper to (a) get a more precise measurement of where “X dollars” was actually said and (b) standardize and double-check that my first scrape had gotten the amount correct. Finally, as many of the clips were still off by a few annoying and noticeable fractions of a second in one direction or the other, I made a script that allowed me to go through each entry individually, trim or extend the clip on either end, and modify the amount one last time if my first two methods had failed. After all 2,800+ were processed—a task that took weeks—I made a final set of clips out of higher-quality versions of the videos and used Premiere to make the film’s final dizzying supercut you see before you.

90% of data science is data cleaning, and I have kept this overview pretty high-level in the interest of making it accessible to a wide audience. A much longer and more technical dive into the steps needed to go from a raw YouTube archive to this video—covering everything from token suppression, the comparative benefits of transcription libraries, counterintuitive ways to standardize and parse numbers in natural language, and debugging audio desyncs in clip concatenations—may appear in the future on my website.

Reboot publishes essays on tech, humanity, and power every week. If you want to keep up with the community, subscribe below ⚡️

💝 closing note

This project took literally hundreds of hours to complete, so thank you for watching! It could not have happened without Sam Lavigne’s Infinite Video class at the School for Poetic Computation, which gave me the opportunity to develop a proof of concept last year.
There’s a lot of content out there, and critique-through-clip-compilation is a fun medium. I encourage you to give it a try :)

Your fellow brains in rot,

— Morry & Reboot Team
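The first-pass scrape Morry describes, pulling dollar figures out of transcript text, could be sketched in a few lines of Python. This is a hypothetical illustration, not the project's actual code; it only catches numeric "$" figures, not spelled-out amounts like "ten thousand dollars."

```python
import re

# Matches figures like "$10,000", "$1 million", "$200k".
# A rough sketch -- real transcripts need far messier handling.
DOLLAR_RE = re.compile(r"\$([\d,]+(?:\.\d+)?)\s*(million|thousand|k)?", re.IGNORECASE)

def extract_amounts(text):
    """Return every dollar mention in `text`, normalized to a number of dollars."""
    multipliers = {"million": 1_000_000, "thousand": 1_000, "k": 1_000}
    amounts = []
    for match in DOLLAR_RE.finditer(text):
        value = float(match.group(1).replace(",", ""))
        unit = (match.group(2) or "").lower()
        amounts.append(value * multipliers.get(unit, 1))
    return amounts

print(extract_amounts("I gave teachers $50,000 and spent $1 million on groceries"))
# -> [50000.0, 1000000.0]
```

From there, each match's position in the transcript gives the rough timestamp that a tool like Whisper could then refine.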
Surprise, another podcast! (Apparently Kelin’s was the “first,” so this is the “second.” But, from 2022-era Reboot, we have at least three older audio posts — I still think they’re excellent, so do check them out if you enjoy audio!)

On certain slices of the internet, Ben Recht might be known as a Substack power-user (“that one prof who blogs about math”), professional hot-take-haver, or “recurring podcast guest.” He’s also a computer science professor and one of my PhD advisors at Berkeley.

I recently sat down with him to talk about… Nate Silver’s new book, understanding the world through statistics, the academic rat race, and the psychology of it all.

Listen here on Substack (web or app), or subscribe on Apple Podcasts or Spotify. A transcript and takeaways will be published with each episode.
Today’s episode features independent designer, artist, and programmer Kelin Carolyn Zhang. We cover:

- What she learned designing for a studio, Twitter, and political campaigns
- Why and how she made the jump to independent work
- Why she's excited about AI as a new design material
- How she and her collaborator Ryan created Poetry Camera
- How to find great creative collaborators
- Using social media intentionally

Find Kelin on Twitter (@kelin_online), her website (kelin.online), and on TikTok (@kelin.online).
How to Beat the Odds

2024-04-08 40:16

Reboot cofounders Jasmine and Jessica reflect on four years of existence, the roles of luck & effort, what writing we're most excited to publish, and the eroticism of Google Doc comments. Subscribe at joinreboot.org!
When I heard my friend Luke Igel was making a documentary about MIT, I was equal parts impressed and skeptical — impressed, because that sounds so incredibly difficult to do, and skeptical, because I wasn’t sure how much the history of MIT would matter to anyone who wasn’t already affiliated with the university. My skepticism was unwarranted. MIT: REGRESSIONS, which covers the period between World War II and the start of COVID-19, is fascinating; over these decades, the same questions come up over and over again: military involvement, student activism, (military and private) funding, career decisions, student and campus life. I got to watch an early rough cut of the film, and today, I’m super excited to share a Q&A with the co-director Luke. (Watch the trailer!!)

I had a TON of fun having this conversation. I’ve excerpted just a bit below, but in the full audio we talk about so much more: to what extent is MIT [history] unique, and to what extent can we extrapolate to universities in general? What is the purpose of an academic institution? What’s capital-P Progress, and how do we get there? Who should pay for all of this? Why does student life matter in the context of more macro-level politics? And we talk about the making of the film as well — how do you even make a documentary?! How do you balance storytelling and argument and ~vibes~?

Luke Igel is an undergraduate at MIT and co-director, with Wesley Block, of the feature-length documentary MIT: REGRESSIONS. He’s previously worked on the Mars Perseverance rover’s self-driving system at NASA JPL and the Starlink satellite constellation at SpaceX. Find him on Twitter @lukeigel.
⚡ Some More Labor History

2022-04-22 01:32:03

In our last audio post, organizer Emma Kinema references some labor history, as well as the film industry, when discussing unions for tech workers. I’m super excited to share this interview with labor historian Joshua Freeman, who has a wealth of knowledge about the American labor movement over the last century or so. It’s a long, fun conversation, and I recommend listening to the full audio to hear everything. One of the chunks that was personally most mindblowing to me: that Reagan’s political career (and everything that followed) started because he was literally a “paid actor,” hired to make free-market anti-union speeches.

Joshua Freeman is a labor historian and formerly a professor of history at Queens College, CUNY, and the CUNY Graduate Center. He is the author of several books, most recently Behemoth: A History of the Factory and the Making of the Modern World.
Organizing isn’t reading Das Kapital, it’s making spreadsheets and talking to your coworkers. That’s just one of Emma Kinema’s many zingers in this conversation with Reboot community member Chris Painter. I’ve transcribed some of my favorite sections below, but the full audio is worth listening to if you want the full scoop—on craft-based vs. industrial unions, what it’s like to be a QA tester at a game studio, on software for organizing, on what Emma thinks of DAOs and similar approaches, and on when unions might be used for anti-progressive causes.

Emma Kinema is an American labor organizer and a lead organizer of CODE-CWA, the Communication Workers of America's Campaign to Organize Digital Employees. She is a cofounder of Game Workers Unite. Find her on Twitter.

Chris Painter is a Technology and National Security Fellow at the National Security Innovation Network. He formerly worked as a machine learning engineer in the Bay Area at 4Catalyzer, as well as on AI alignment projects at OpenAI and the Centre for Effective Altruism. You can find him on Twitter or his Substack.

🎮 organizing is a struggle for power

Emma Kinema, interviewed by Chris Painter. These are only excerpts! Listen to the full audio for the rest :)

On games, film, tech, and steel

The games industry, I think, mirrors in some ways the film industry. In the 1920s and early 30s, it was a dream job; it was a relatively new thing, and people would leave their communities behind to go to Hollywood and work in the industry. They were just grateful to have a job, because it felt like they were doing something so exciting. But then, over time, there were a lot of bad conditions. There was no standardization of practices. Crediting wasn't appropriate, there were no safety regulations for actors and workers on set, and all kinds of different things, right? And so a couple of decades in—in the later 30s and 40s, and certainly into the 50s and 60s—organization came into the industry in major ways.
You see the vast bulk of the industry organized—whether it's artists, editors, set dressers, painters, etc.—all becoming organized in fighting for a better, more professional workplace where they can practice their craft in a more effective way. So there’s this pattern in a lot of new industries where in the first couple decades it's a bit Wild West; it's a bit chaotic sometimes. But as people go through enough cycles of seeing people leave and burn out or be harmed, we learn that, turns out, the industry we're in is not an exception. Before that, there's been kind of a golden period, and then people get used to it and they want a more regular professional environment. And I think that's where we're at now. In the late 80s, 90s, and 00s, the [film] industry got out of its baby phase; in the 2010s and certainly now the 2020s, [film has] really become a more professional, stabilized industry in some ways. [In games] people have seen enough cycles of people burning out and leaving, and of amazing studios crashing because of bad conditions and bad corporate practices, and that's why you're seeing an emergence of interest in organizing—to make it a more, frankly, adult, more humane, and more professional environment for everybody. And I think it maps a similar experience to what the film industry had.

Tech, I would say, mirrors the steel industry, which is maybe a weird comparison, but I often reference the steel industry when I'm talking to people about tech. Back in the 1910s and 1920s, during the early days of the steel industry, it was completely unorganized. If you asked any labor leader, any worker, or any businessman in the industry—will unions, you know, finally appear and be organized in the steel industry?—they would say no. The vast majority of people thought it was never going to happen, including organizers and labor leaders. There were individual strikes and actions that would pop up, but there was no cohesive movement.
But in the 30s and the 40s, you start to see mass industrial unionism—massive companies falling to organized labor, and workers being empowered on the job in a different way.

What’s funny is a lot of people said steel could never be organized. They would say the workers are too well paid; it's too new of an industry, and people really like working in the industry; steel has new management techniques that will make it impossible for workers to organize. Where have I heard those same things in technology? The workers are too well paid. People wish they worked in tech. Management has these unique, newfangled ways of structuring the workplace such that the workers will never organize. I hear that constantly. But I think we're entering that phase where you start to see the scales tipping towards workers.

On how to get started organizing

Get in touch with me and my fellow organizers at CWA. We work with many, many game developers across the US and Canada, and we have relationships with folks in many different studios. Go to our website; there’s a contact form. Even if people aren't sure whether they want to organize to have a certified union and win collective bargaining rights—which I think you should; I think it's the strongest, most effective, most powerful way to address these issues we're talking about—even if you're not sure about that, you should reach out.

You should get organizer training from us, because there are a lot of common red flags and pitfalls that people trip on when they don't have good organizing experience prior.

I think a lot of people who don't know what to do and are feeling very frustrated will sometimes lash out. They have really pent-up energy, and they'll want to do some kind of big petition or a letter to the boss.
Or they'll try to run a walkout or something when they actually don't have the relationships and organizing foundation to pull it off. People only hear about strikes and walkouts and petitions, but they don't understand that 99% of organizing is actually just talking to your coworkers. People make pretty serious mistakes by trying to get to the big things too fast, and they kind of ignore the more difficult, day-to-day, small-scale work that makes up good, powerful organizing.

On rookie mistakes

Trying to have these conversations [about starting to organize] in group settings—always a bad call. Don't have your organizing conversations over text or Signal or company Slack; do it in person, on a video call, or by phone. You have to have that personal, real-time conversation, where you can actually feel where the other person is emotionally, and connect with each other, and let your social barriers down, because good union organizing is all about—again—really connecting personally on the issues. You need trust and care amongst your coworkers, and you just can't do that unless you're really talking to them as people.

I’ve also seen this at many studios: sometimes “progressive,” usually senior white male devs will refuse to do the meaningful work of organizing in terms of having conversations and meetings. They won't help their coworkers with that; instead, they'll fire off some big message in an all-hands meeting or on company Slack about why everyone should have a union. They think they're helping, but they're not. They're actually often causing major problems for other people, because when the company sees that, they're not going to target the senior white male dev. They're going to start looking for all the QA testers and marginalized people where they assume the rabble-rousing is coming from. Oftentimes people will take this more individualist approach to trying to move the issue, and it's really counterproductive.
It's not just neutral; it's counterproductive to meaningful organizing.

On cultural vs. material change

To someone who believes that union organizing is good but can't really address cultural or social issues, I would just say that they're profoundly wrong. I would argue that union organization—and the density of union organization in any industry or region—is the number one factor behind how progressive and accessible that culture is. I would argue our material conditions shape what we think and what we do—they shape our culture and not the other way around—and so I think if you can affect the material conditions, you can affect the culture, right: the superstructure of culture is built on the primary structure of the economics and the concrete relationships. [...]

By its very nature, you cannot organize a union in your workplace unless you have a majority or supermajority of support. And unlike all other forms of organizing, it's not opt-in. I don't pick my coworkers. It's not a social club. It’s not a group around a certain identity or experience. Those spaces are valuable too, of course, but union organizing requires us to go out and organize our coworkers no matter who they are.

I'm a queer trans woman from a working-class background. I have had to organize far-right people who hold very transphobic, very homophobic views, right? People who've been hostile in the workplace. I've had to go and organize them, because I didn't get to pick my coworkers, and I needed them to get to that majority where we can make change—not just for me and for the people I care about, but for them too.

There are things they can gain from the process of organizing, too, whether you're a white guy senior developer who wants to improve things around career progression, or the quality of the game, or having more say in the product, or better crediting practices.
I'm a queer woman who wants to organize around diversity, equity, inclusion, and pay equity. Both of our issues can benefit from working together in organizing a union, because those issues aren't in competition. They can be empowered by linking them up.

That’s the real weight and power behind union organizing. Union organizing, by its very essence, requires bridging these gaps amongst us.
Today, we’re sharing both the full text and the audio version of a Kernel piece by Archana Ahlawat with all subscribers (free and paid). Most other audio versions are paywalled — if you’d like access to audio versions of other essays but aren’t able to pay, email reboothq@substack.com and we can set you up with a complimentary subscription, no questions asked. In this essay, Archana brings her extensive organizing and nonprofit experience into conversation with the tech industry’s vision of progress. It’s worth a read (or listen) in full.

From the Non-Ideal to the Ideal: what technologists can learn from activists

By Archana Ahlawat (she/her)

“When organizers imagine a world without poverty, without war, without borders or prisons—that's science fiction. They're moving beyond the boundaries of what is possible or realistic, into the realm of what we are told is impossible. Being able to collectively dream those new worlds means that we can begin to create those new worlds here.”
— Walidah Imarisha (Williams, 2015)

The tech industry, with all its science fiction roots and hopes, is supposed to embody this imaginative worldview. At its boldest, the culture of techno-optimism certainly appears to call us to a similar practice of unconstrained visioning and optimism for the future. Yet the bombastic revolutionary rhetoric, from well-meaning young technologists to venture capital moguls, falls short.

It is true that technology companies continue to disrupt and reform major aspects of society. And yes, many in the tech industry at least endeavor to “change the world,” as the old tech mantra asserts. But even those who agree we need to build for material change, to build for good, often lack a robust theory of change and a strategy informed by a grounded, pluralistic approach.

When technologists think about changing the world, we should adopt the more militantly optimistic, visionary, and grounded practices of grassroots organizing.
This is not as radical a shift as one might think. The relentless drive to make the imagined real, the desire to question assumptions to arrive at better solutions, and a sense of high agency are foundational aspects of both organizing and the tech industry’s dominant modes of thinking.

But how do we want to change the world? This is a question of our individual and collective imaginations. Activist visioning and building processes, which are fundamentally rooted in communities and context, are instructive for anyone seeking to make change. They also provide a useful contrast to the tech industry’s often individualistic and universalizing methods.

Even when armed with an imaginative vision for the future and democratic methods, it can still be challenging to identify how to navigate towards this envisioned bold ideal when we live in a nonideal world. The way that activists balance long-term, ideal visions with immediate gains is instructive here.

By connecting and contrasting the tech industry’s rhetoric and implementation of change with strategies employed by activists, specifically grassroots organizers, we can further expand our visions for what is possible, ensure we brainstorm and build in participatory ways, and strive towards ideals more carefully.

Both technologists and organizers change the world. Small startup teams can disrupt establishment companies and practices and build technologies that quickly become part of the fabric of everyday life. Grassroots organizers in every era catalyze fundamental changes to the institutions and norms that govern society, politics, and markets.

Their lofty goals often emerge from taking the speculative seriously. What do we want to see in the world that does not currently exist? For technologists, the answers might evoke the inventions of golden age science fiction, with its flying cars and space cities.
Those oriented towards social change might focus on the invention of new institutions and norms, like a noncoercive economic system that does not require everyone to labor for survival.

What worlds do we want to live in? Many futures are possible, some more liberatory, fulfilling, caring, and just than others. The question of possibility prefigures our desires for the world: the extent of our social and political imagination constrains or expands our visions for the future. What is possible may just be what we allow ourselves to think is possible.

The classic philosophical distinction between ideal and nonideal theory is useful for illustrating why this question matters. Ideal theory is concerned with what ought to be, our most ideal vision for the world, while nonideal theory takes our current world and theorizes immediate steps we can take to improve it. Proponents of ideal theory show how it can be used to formulate an overall long-term strategy that informs the short-term tactics we can take given our nonideal world. Critics counter that the ideal may be so radically different from our current society that reaching it is impossible, or at best highly improbable. In that case, they argue, it is better to advocate for incrementalist reform that takes the world as it is and seeks to marginally improve it.

Though tempting in the face of uncertainty and inevitably incomplete information, incrementalism is akin to a greedy hill climbing algorithm. Without better alternative solutions, we are bound to local-maxima worlds (Green, 2019). In this analogy, a mountain range represents the full range of worlds that could exist, with our current world being one point. We might have some idea of what the highest peak looks like, but we don’t know exactly where it is or how to get there from our coordinates.
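The greedy hill-climbing analogy can be made concrete with a toy sketch. Everything below — the landscape function, the peak locations, the starting points, and the step size — is invented purely for illustration:

```python
# Toy illustration of the greedy hill-climbing analogy: a one-dimensional
# "landscape of possible worlds" with a small peak near x=1 and a taller
# peak near x=4. A climber who only ever takes the step of highest marginal
# improvement gets stranded on whichever peak is nearest.

def landscape(x: float) -> float:
    # Hypothetical terrain: local maximum of height 1 at x=1,
    # global maximum of height 3 at x=4, flat valley in between.
    return max(0.0, 1 - (x - 1) ** 2) + max(0.0, 3 * (1 - (x - 4) ** 2))

def greedy_climb(x: float, step: float = 0.1, iters: int = 100) -> float:
    # Incrementalism: at each step, move only if a neighbor is strictly higher.
    for _ in range(iters):
        left, right = landscape(x - step), landscape(x + step)
        if max(left, right) <= landscape(x):
            break  # no marginal improvement available: a local maximum
        x = x - step if left > right else x + step
    return x

# Starting near the smaller peak, greedy climbing settles at the local
# maximum around x=1, never crossing the valley to the taller peak at x=4.
stuck_at = greedy_climb(0.5)
```

The point of the sketch is the break condition: the algorithm halts the moment no neighboring step improves on the current position, regardless of how much higher the landscape rises elsewhere.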
This challenge is no excuse to mindlessly follow the path of highest marginal improvement without considering whether that path is the best one to be on.

For the most part, thought and action directed at changing the world are dominated by the nonideal stance. When guided by short-term thinking combined with a drive for profit maximization, it is easy to take current trends and double down on them with little consideration for what more is possible. The view of technology as a universal tool to solve any problem lends itself to this, as it can lead to a narrow optimization that loses sight of externalities and second-order effects. The rapid proliferation of surveillance technology, especially for crime and policing, is a prime example.

Whether embedded in smart cities or accessed through apps, surveillance technology claims to make people’s lives more seamless and safe. Taken at face value, Citizen, for instance, an app that provides real-time incident reporting via video and text as well as paid protection services, wants to empower individuals to monitor and protect themselves and their communities from crime, disaster, or perceived risky situations. In this framing, the core problem is individuals’ quick access to information and safe environments, and protection is achieved through hiring private security agents (Robertson, 2021). This narrow framing suggests that the most effective and profitable way to create a safer world is through privatized, crowdsourced mass surveillance.

In a vacuum and at a micro level, Citizen might allow an individual to feel marginally safer. In fact, at surface level this solution looks like a natural follow-up to the proliferation of crowdsourced videos of natural disasters, potentially dangerous encounters, and police brutality. It is one step up a particular peak in our problem landscape.
However, choosing this path mistakes tactics for strategy and fails to consider what else is possible.

Disseminating information through social media can be a useful method for calling attention to an issue and receiving help, especially when public resources are overwhelmed or absent, as in the case of natural disasters. In the specific case of police brutality and misconduct, sousveillance, which refers to citizens monitoring authority figures to hold them accountable, can be an effective tactic to raise awareness and alarm. It can help connect community members to one another. But digital information sharing is not an ideal end in itself, especially given how social media platforms ultimately thrive on extreme content and extreme emotions.

As a for-profit social network company, Citizen benefits from fear-driven usage, stokes user anxieties, and encourages vigilantism (Ashworth, 2021). The ACLU’s comparable sousveillance app, Mobile Justice, seeks to empower individuals by allowing them to record from the app, streaming to selected close contacts and the local ACLU (American Civil Liberties Union). While the two apps are similar, Mobile Justice is directed at unveiling abuses of power and is framed as a tactic for harm reduction, not as the ultimate solution to danger. It also does not provide options to privately summon protection agents. Mobile Justice’s overall purpose is connected to an overarching strategy of holding public institutions accountable and pushing to fundamentally restructure them.

Mobile Justice was developed from an understanding of the broader history and context around crime, policing, and neighborhood safety. Surveillance apps and tools like Citizen, Nextdoor, and Amazon Ring assume that the way our society has defined crime and safety is correct and static.
But even a rudimentary understanding of history would reveal how notions of crime and public safety have been produced and deputized for the maintenance of racial and economic hierarchies for centuries. Reporting on these apps has repeatedly shown that they amplify racism and lead people to become more irrationally fearful. Taking the history of policing seriously would mean anticipating some of these impacts.

By taking the nonideal route, these surveillance technology solutions lead to worse worlds, with less democratic accountability, less trust amongst community members, and an even greater culture of fear and suspicion. In
Fast Food Education

2022-01-02 · 19:30

Today’s Kernel piece is a deep dive by Bianca Aguilar into edtech programs like MOOCs and bootcamps, and how what they market as empowerment of students is often exploitation in masquerade.

Fast Food Education
By Bianca Aguilar (she/her)

Speed and scale are redefining industries: fast food changes what we eat, fast fashion changes what we wear, and now, fast food education is changing how and what we learn. Powered by technology, fast food education is capable of disrupting the ancient higher education sector, allowing more people than ever to pursue learning — a worthy goal. But what is being marketed as empowerment is often exploitation in masquerade.

What is fast food education, and why is it attractive?

“The true value proposition of education is employment.”
— Udacity CEO Sebastian Thrun

Fast food education is education influenced by what sociologist George Ritzer calls “McDonaldization”: a process in which the principles of the fast food industry, driven by “rationality,” come to dominate other sectors of society (Willis, n.d.). In the education sector, this is expressed through vocational education, which is known for its short timeframe, accessible costs, and practical curriculum. This essay focuses on programs that are prominent in the edtech industry: compressed courses like massive open online courses (MOOCs), and skills-based intensives such as bootcamps.

Four characteristics define McDonaldization: efficiency, calculability, predictability, and control.

Efficiency is about choosing the fastest and cheapest (in expenditure and effort) way to achieve a goal. This is usually advertised as a benefit to the consumer — but, for instance, even if MOOCs are cheap, students are “paying” for the privilege of handling pacing and grading by themselves, tasks that are usually done by the teacher. Calculability is about setting objectives based on what can be calculated, counted, and quantified.
Under McDonaldization, quantity equals quality: the high number of people enrolled in MOOCs seems to make up for a completion rate as low as three to six percent (Reich & Ruipérez-Valiente, 2019). Predictability is about minimizing the possibility of surprise. Consumers expect to receive the same product and service no matter where they go: all coding bootcamps offer nearly identical curriculums. Finally, control is about replacing people — the biggest source of uncertainty and unpredictability in a “rational” system — with nonhuman technology. Everything is pre-packaged, pre-measured, and automatically controlled. MOOCs themselves are pre-packaged systems (Ritzer, 2013); they’re often designed around short pre-recorded lectures and embedded questions that give automatic feedback.

The McDonaldization of fast food education makes sense when one considers that it’s designed to benefit the system, not the student. Writer and sociologist Tressie McMillan Cottom (2018) calls this phenomenon Lower Ed, a term she coined to describe the increasing emphasis on credentialism, especially among marginalized communities, as the path to financial stability. She argues that Lower Ed was created by changes in the way we work, unequal access to liberal education programs, and the for-profit shifting of job-training risk from institutions to individuals.

These broad structural patterns tie into fast food education’s rise in popularity; it’s part of the third education revolution, which is about continual training throughout a person’s lifetime (Selingo, 2018). People are compelled to enroll in such programs because of very real fears: being left behind by digitalization, losing jobs to automation, and becoming irrelevant in a fluctuating talent economy.
They have also lost faith in traditional institutions; because of high costs, long timeframes, and slow adaptation, they believe these institutions aren’t suited to prepare them for their careers (International Consultants for Education and Fairs, 2019).

These fears are even more pronounced in the Global South, especially Southeast Asia. Demographic trends, cultural shifts, and economic growth have prompted regional demand for higher education (Sharma, Pelley, & Vazifdar, 2016), but socio-economic barriers such as a global recession, outdated infrastructure, and political instability have made higher education an inaccessible or undesirable option (Lau, 2021).

In short, people are pressured to upskill in order to keep up with a rapidly developing world. This makes MOOCs and bootcamps, which market themselves as better investments by “guaranteeing careers” (compared to higher education), especially attractive.

Empowerment or exploitation?

“Rational systems are unreasonable systems...they deny the basic humanity, the human reason, of the people who work within or are served by them.”
— George Ritzer

Compared to a liberal arts education, fast food education is more affordable in terms of both time and money. Additionally, since it’s mostly remote, programs are open to people all over the world. This increased accessibility helps educate, train, and empower populations previously denied access to similar opportunities, in regions like the Global South. It also reduces “brain drain” by allowing students to gain expertise locally, instead of having to migrate. But no matter how much education is rationalized, there are bound to be irrational outcomes.

Despite its promises of disruption, fast food education upholds the status quo.

Questionable quality

Accessibility comes at a cost. Fast food education is known for being lower quality, from what is taught to how it is taught. Because of its short timeframe, it can only focus on teaching practical skills and methods.
For example, programs that teach code focus on frameworks over fundamentals (Pronschinske, 2021), while those that teach user experience (UX) design skip foundational disciplines like anthropology and art history (Teixeira & Braga, n.d.). Content aside, these programs’ curriculums are notoriously unreliable. See Lambda School, which was criticized by its students for constantly changing lesson plans and relying on free training materials (Schiffer & Farokhmanesh, 2020).

Delivery of content, such as through technical production, also matters. Third-party providers have put their reputations at risk through insufficient screening (Parr, 2014), releasing courses with shaky filming and conventional slides. Pacing is also compromised in fast food education, where MOOCs and bootcamps sit on opposite ends of the spectrum. The former’s pace is completely dependent on the students, and most end up dropping out, leading to completion rates lower than 10% (Chafkin, 2016). Meanwhile, the latter is known for being so fast-paced that students struggle to keep up.

Given all of this, the lack of official accreditation makes it especially difficult for graduates to prove what they have learned. While this lack of oversight has enabled programs to constantly update their curriculums, it has also lessened the credibility that job-seeking students would benefit from. If graduates are already struggling with interviews (McBride, 2016), how else can they ensure that they will be hired? Students of fast food education will be at a disadvantage in the current market, which is more competitive than ever due to globalization and the internet.

Homogenization

The market supposedly calls for universal knowledge, skills, and values. As a result, students aren’t taught how to stand out. Through this standardization, fast food education brings about cultural hybridization, the process of blending two or more cultures to fit cultural norms (Bell (Ed.), n.d.).
Allowing for variety speeds up historical, economic, and cultural development. However, there’s a fine line between cultural hybridization and cultural hegemony, in which dominance is maintained through ideological and cultural means (Cole, 2020). In the case of fast food education, the Global North, and especially countries in the West, is set as the universal standard for others to follow. This is compounded by the acceptance of English as the “universal” language. Minorities who aren’t able to communicate well in English are seen as “lesser quality,” even if they are proficient in the craft itself.

The field of design is a clear example of cultural hegemonization. In a talk at a prominent design conference, a senior designer described how he associated “good design” with his Euro-centric aesthetic of minimalism, and was only able to question it because of a junior designer of color (Figma, 2021). Prevalent design theories often come from white men born decades ago, so what’s considered “quality design” is limited to their perspective. For instance, low-end magazines and East Asian media cannot afford the standard abundance of white space.

Standardization also means that students learn to produce homogenized work. For instance, student projects are often made with structured templates. “The case study factory” (Teixeira & Braga, n.d.) discusses how formulaic case studies make it difficult to differentiate UX or UI designers from one another. These studies may demonstrate the students’ ability to follow the design process, but not their unique thinking, skills, and point of view.

Imposing the logic of factory production on education kills students’ creativity, turning learning into repetitive drudgery (Lossin & Battle, 2020).
But that’s the goal of fast food education — the myopic focus on hard skills and technical competencies is effective for producing workers, not well-rounded people.

Inequity

Finally, the rhetoric of accessibility in fast food education obscures the reality of its inequity. According to education researchers Mizuko Ito and Justin Reich (2017), digital learning technologies actually exacerbate disparities in learning outcomes in terms of class, race, and gender. The distance from users causes the creators of these edtech tools to bias their resources towards the highly privileged. According to sociologis