Activists Of Tech — The responsible tech podcast


Author: Activists of Tech

Subscribed: 5 · Played: 75

Description

Shifting the narrative from Big Tech to Responsible Tech by sharing & archiving the work of change makers.

At the intersection of technology and social justice, Activists Of Tech is a seasonal weekly podcast dedicated to amplifying and archiving minority voices among activists, thought leaders, academics, and practitioners of responsible tech. Shifting the narrative from Big Tech to responsible tech takes honesty: this is a "say it as it is" type of podcast, and no topic is too taboo to be named and addressed.

The topics covered encompass a variety of responsible tech areas and focus on social justice, AI harm, AI bias, AI regulation and advocacy, minorities in tech, gender equality, tech and democracy, social media, and algorithmic recommendations, to name a few. We also talk about solutions and how to make tech inclusive and beneficial for all.

35 Episodes
Reading about TESCREAL feels like reading a bad sci-fi storyline written by a man with a god complex. Unfortunately, it’s real: a movement that allows its proponents to use the threat of human extinction to justify expensive or harmful projects and demand billions of dollars to save us from these "existential threats". Sounds familiar, doesn’t it? These are very real aspirations, and some of the tech billionaires setting the rules of the tech industry even call themselves “creators”. That’s worse than “godfather”, if you ask me, though it’s fairly close. We’ve seen it with OpenAI asking for billions to “save us” from the very AI systems it is still, somehow, trying to build, and more generally with AI gurus working on how to save us from killer robots or conscious AI systems instead of addressing hunger, homelessness, inequality, or environmental issues in their own country (even just in their own city, which would have more impact than taking money under the excuse of saving people who don’t exist yet from evil AI systems that also don’t exist yet). But it’s not about helping, or “AI for Humanity” as they like to call it: it’s about power, influence, and money. And the pipeline from these delusions to far-right ideology and technofascism is pretty straightforward. Adrienne Williams, researcher at the Distributed AI Research Institute, joined me to talk about TESCREAL, neocolonialism, policymaking, and everything in between.
After Trump was elected and tech oligarchs like Elon Musk, an unelected person, started to gain decision-making power at the White House, the topic of technofascism became very popular. But this political trend of leveraging technology to empower fascist ideologies is nothing new. A lot has been written about technofascism in the 21st century, but I wanted to go back to the basics: what is fascism before technology? How do fascist movements use technology to take power, and what kind of power dynamic is created? How can we resist the rise of technofascism? To answer these questions, I welcomed Dr Emile Dirks. Emile is a Senior Research Associate at the Citizen Lab at the University of Toronto, where he explores Chinese politics and digital authoritarianism.
Created, hosted and produced by Mélissa M'Raidi-Kechichian.
Biometrics – our fingerprints, faces, and irises, for instance – are increasingly used to verify identity. But what happens when this data collection is applied to vulnerable populations, like refugees and asylum seekers, in ways that can remove their agency rather than offer them protection? In the humanitarian space, organizations justify biometric data collection as a way to increase efficiency, yet stories have shown that such mechanisms can be weaponized: data handed over to oppressive governments, misidentifications leading to life-altering mistakes, and accountability often falling on the very people humanitarian programs claim to help. Beyond survival depending on data-driven systems, racial capitalism also plays a critical role by reinforcing the same global inequalities that force people to migrate in the first place. Who benefits from implementing biometric data collection in a humanitarian context, and who bears the consequences when it fails? To answer these questions and more, I had the pleasure of talking with Zara Rahman, author of “Machine Readable Me: The Hidden Ways That Technology Shapes Our Identities”, Strategic Advisor at the SUPERRR Lab and Visiting Research Collaborator at the Citizens and Technology Lab at Cornell University. Zara is a researcher, writer, public speaker, and non-profit executive whose interests lie at the intersection of technology, justice, and community. For over a decade, her work has focused on supporting the responsible use of data and technology in advocacy and social justice, working with activists from around the world to support context-driven and thoughtful uses of tech and data.
If cartography, the ancestor of GIS, already displayed colonial patterns and racist stereotypes back in the day, why would the digital legacy of maps be any different? Maps have authoritative value and hold power by representing the world from the perspective of whoever creates them. However, communities are often excluded from their design, leading to the misrepresentation or omission of important landmarks and third places. In this episode, Cathy Richards explains why it is critical for communities to have the tools to paint their own stories through mapping, what role communities play in the development of tech-powered solutions that include GIS, and what risks come with excluding those communities.
Cathy is the Civic Science Fellow and Data Inclusion Specialist at the Open Environmental Data Project. Previously, she was the Associate for Digital Resilience and Emerging Technology at The Engine Room, where she advised civil society organizations on their use of technology and data. As a Green Web Fellow, she investigated the benefits, ethical questions, and security risks associated with using GIS for environmental justice. Cathy holds a Bachelor's degree in International Relations from Boston University and an MPA from the Monterey Institute of International Studies, and she comes from beautiful Costa Rica.
A few months ago, in August 2024, the Federal Trade Commission (FTC) announced a final rule banning fake reviews and testimonials, a rule that will help deter AI-generated fake reviews by prohibiting, for instance, fake or false consumer reviews, consumer testimonials, and celebrity testimonials. Now, if you have ever shopped online, you know that this decision is pretty groundbreaking. How many of us have been deceived by fake reviews before buying a product? Not only is it a waste of money, but in the midst of a climate crisis it is also additional waste, very detrimental to our environment. Joining us today is Janani Kumar, the founder of MyBranz. Janani knew something had to be done well before the FTC even made a move. MyBranz is an online platform that promotes transparency at every step of the consumer journey by leveraging AI to surface verified reviews from across the web, helping users find the best brands and products based on lawful and real feedback, saving money and helping the environment at the same time. In this episode, we explored the sources and impacts of online fake reviews, consumer trust, and what the FTC ruling means for users.
https://www.mybranz.com/
Since #KOSA, protecting kids online has continued to be a very hot topic. However, we often overlook the influence industry that also impacts kids online, for instance through the emergence of thousands of YouTube family channels. Horror stories of behind-the-scenes abuse have surfaced in recent news, in addition to the serious lack of protection when it comes to these kids’ privacy and their financial exploitation. Kids cannot give informed consent to becoming part of the family influence industry, which, unlike child acting, is barely regulated, if at all: to date, only three US states have signed this type of protective legislation into law, and many more have bills in the works. To talk about this topic, I welcomed the amazing Chris McCarty.
At 17, Chris founded Quit Clicking Kids, an advocacy organization, to safeguard the rights of children who grow up on monetized family social media accounts, after discovering that child social media stars lacked the same rights and protections as child actors. Since then, they have worked with legislators across the United States to introduce protective legislation. In addition to leading advocacy efforts at Quit Clicking Kids, Chris is a junior at the University of Washington majoring in Political Science. Their work has been featured by The New York Times, CNN, NBC News, and Teen Vogue, and they recently made the Forbes 30 Under 30 list in the social media category.
For more information on Quit Clicking Kids:
https://quitclickingkids.com/
https://www.instagram.com/quit_clicking_kids/
Large Language Models, or LLMs, may be the most popular type of AI system, often seen as an alternative to search engines even though they should not be: the information they throw at users only resembles and mimics human speech and is not always factual, among many other issues discussed in this episode. Our guest today is Khaoula Chehbouni, a PhD student in Computer Science at McGill University and Mila (Quebec AI Institute). Khaoula was awarded the prestigious FRQNT Doctoral Training Scholarship to research fairness and safety in large language models. She previously worked as a Senior Data Scientist at Statistics Canada and completed her Master's in Business Intelligence at HEC Montreal, where she received the Best Master's Thesis award. In this episode, we talked about the impact of the Western narratives on which LLMs are trained, the limits of trust and safety, how racism and stereotypes are mirrored and amplified by LLMs, and what it is like to be a minority in a STEM academic environment. I hope you’ll enjoy this episode.
We’re all very used to being surveilled by now, especially through surveillance capitalism, or the commodification of our personal data: our age, location, mental state, shopping habits, and tax bracket are collected through various apps and websites and sold to thousands of third parties. On top of that, governments surveil their citizens, and not only in authoritarian states such as Russia; it also happens in the United States, where activists are watched by authorities during and after lawful protests. Looking at how pervasive tech-enabled surveillance is, once you’re aware, it feels like living in a dystopia. Who needs to read Orwell’s 1984 when you can just look into civil society’s reports on mass surveillance or read the news? What we need are anti-surveillance alternatives: tools that, unlike Google, do not track you or any of your personal data, let alone sell them to whoever is willing to pay, and that address censorship and government firewalls and empower users to access the open web. The good news is that such a tool exists, and it’s called Tor: a web browser that protects users' privacy and anonymity by hiding their IP addresses and browsing activity, sending web traffic through a series of routers, called nodes, to anonymize it. The traffic is encrypted three times as it passes through the Tor network, a process built on the idea of "onion routing" that began in the 90s. The goal was to use the internet with as much privacy as possible, relying on a decentralized network. Today, the Tor Browser has become the world's strongest tool for privacy and freedom online. I had the pleasure to welcome not one but two guests from the Tor Project. Raya Sharbain is an Education Coordinator with the Tor Project, where she facilitates training for journalists and human rights defenders on Tor and Tails, the anonymous operating system, and also develops and updates educational curricula on the Tor ecosystem, focusing on its use in circumventing network censorship and surveillance. Raya is a part-time Research Fellow with the Citizen Lab, focusing on targeted surveillance. Pavel Zoneff, with over a decade of experience working for some of the world’s leading tech brands, joined the Tor Project in 2023. As Director of Strategic Communications, he supports the organization’s global outreach and advocacy efforts to champion unrestricted access to the open web and encrypted technologies.
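For the curious, here is a minimal sketch of what “encrypted three times” means in the onion-routing idea, not the Tor Project’s actual implementation: the three relay names and the use of symmetric Fernet keys from Python’s cryptography library are illustrative assumptions, whereas real Tor negotiates per-circuit keys with public-key cryptography.

from cryptography.fernet import Fernet

# Each relay on the circuit (guard, middle, exit) holds its own symmetric key.
# (Illustrative only; real Tor derives per-circuit keys via key exchange.)
relay_keys = {name: Fernet(Fernet.generate_key())
              for name in ("guard", "middle", "exit")}

def wrap(message: bytes) -> bytes:
    # Encrypt once per relay, innermost layer first, so the guard's layer
    # is outermost and the exit's layer is innermost.
    for name in ("exit", "middle", "guard"):
        message = relay_keys[name].encrypt(message)
    return message

def route(onion: bytes) -> bytes:
    # Each relay peels exactly one layer; only the exit recovers the
    # plaintext, and no single relay sees both sender and destination.
    for name in ("guard", "middle", "exit"):
        onion = relay_keys[name].decrypt(onion)
    return onion

packet = wrap(b"GET https://example.org")
assert route(packet) == b"GET https://example.org"  # three layers, three peels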
Queer Arabs are often portrayed either as hated by their own Arab community or as Western imports, as if Queerness came along with colonialism. And online, queer voices are mostly from Western countries and don’t represent Queer Arabs. However, Marwan Kaabour is challenging these narratives by researching, digitally archiving, and celebrating Queer History in the Middle East, from centuries ago to the present day. Takweer, the Instagram page that Marwan started 5 years ago to take ownership of his own story as a Queer Arab, quickly turned into a space of inclusion and discussion and went viral.
Marwan is a Lebanese artist, designer, and the founder of Takweer. He was born and raised in Beirut before moving to London in 2011. From the Takweer project was born a book, The Queer Arab Glossary, the first published collection of queer Arabic slang, released this year, in 2024. This episode is at the intersection of social media, digital archives, queer History, Arab culture, and languages, and I hope you’ll enjoy it.
Takweer page: https://www.instagram.com/takweer_/
Book: https://saqibooks.com/books/saqi/the-queer-arab-glossary/
Technology narratives are set in the present, and all of their promises in the near future. We’ve heard about flying cars, automated jobs, robots able to annihilate Humanity, robots able to save Humanity, and we have been through many hype cycles, like crypto, a time I personally tend to block from my memory. But looking at the past, at the evolution of technology, is actually critical if its impacts are to be relevant and beneficial for everyone.
We often say that History keeps repeating itself, so if we want to predict the future of technology, why not look at its past? Beyond that, I wondered how the History of Technology relates to social justice and how interdisciplinary studies could advance social justice, as well as how to choose who and what to archive when it comes to Tech History, and how much AI could be useful or harmful in this endeavour. To answer these questions and more, I had the pleasure to welcome Dr Jeffrey Yost, who studies power imbalances and societal inequality in our digital world.
Dr Yost is a historian of science, technology, and medicine focused on the social, political, cultural, and intellectual history of the digital world. He is the Director of the Charles Babbage Institute (CBI) for Computing, Information & Culture, a computing and software studies research institute and the leading and most diverse historical archives center for students and scholars to study digital tech and its contexts.
Created, hosted and produced by Mélissa M'Raidi-Kechichian.
An interesting thing about AI systems is that, without a deliberately designed structure to promote sharing, agents will focus solely on maximizing their individual rewards, even if this prevents them from successfully completing a task. This is fascinating, and Dane Malenfant is working on the long-term credit assignment problem at Mila and McGill University, researching solutions through an Indigenous lens. Dane is a citizen of Métis Nation-Saskatchewan, originally from the traditional Métis Homeland of North Battleford (Treaty 6) and of Regina (Treaty 4). He advocates for Indigenous representation in STEM and offers a unique perspective in doing so.
In this episode, we explored the role of Indigenous cultural values, like reciprocity, in reshaping AI development, and how diverse perspectives can address both technical and societal challenges. Dane also talked about the importance of having diverse voices in AI, the systemic barriers that prevent Indigenous people from being part of the field of AI, and how we can change that.
Created, hosted and produced by Mélissa M'Raidi-Kechichian.
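To make that opening claim concrete, here is a toy sketch, entirely made up for this page and not Dane’s research code: with purely individual rewards, contributing resources to a joint task is never optimal for an agent, while adding a shared-success term to the reward changes the optimum. All names and numbers are illustrative assumptions.

# Two agents each hold ENDOWMENT resources; a joint task needs TASK_COST pooled.
TASK_COST = 8
ENDOWMENT = 5
TASK_BONUS = 10  # shared reward each agent receives if the task succeeds

def reward(kept: int, contributed_total: int, sharing_bonus: bool) -> int:
    """Reward = resources the agent keeps, plus an optional shared bonus."""
    r = kept
    if sharing_bonus and contributed_total >= TASK_COST:
        r += TASK_BONUS
    return r

def best_contribution(other: int, sharing_bonus: bool) -> int:
    """Best response: the contribution (0..ENDOWMENT) maximizing own reward."""
    return max(range(ENDOWMENT + 1),
               key=lambda c: reward(ENDOWMENT - c, c + other, sharing_bonus))

# Individual rewards only: keeping everything wins, so the joint task fails.
print(best_contribution(other=4, sharing_bonus=False))  # -> 0
# With a shared-success term, contributing enough is individually optimal.
print(best_contribution(other=4, sharing_bonus=True))   # -> 4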
More often than not, on Activists of Tech, we talk about the problems caused by technology and the solutions or alternatives to these issues. And I kind of feel guilty, because I don’t want to convey the idea that AI is inherently bad. Though the mainstream culture surrounding AI certainly is, we have agency over what we create, over the AI systems we design, and over how they impact others. Today’s guest is a great example, and she is nothing short of inspiring: Dr Kadian Davis-Owusu has turned her passion for technology and learning into purpose, advocating for the power of online education and the ethical use of AI in learning. Kadian is the co-founder of TeachSomebody and a university lecturer at the Fontys University of Applied Sciences in the Netherlands.
This episode is about tackling educational inequalities with technology, and we dived into Kadian’s work as an educator and technologist at TeachSomebody, a platform on a mission to bridge educational divides by making education accessible and providing equal opportunities to underserved communities. We talked about how online courses can address global inequalities, the potential of AI to revolutionize access to learning, what it takes to tackle concerns about privacy and AI ethics, and much more.
Created, hosted and produced by Mélissa M'Raidi-Kechichian.
This episode was initially scheduled to be released in mid-December, but it was recorded last week, one day after the American elections in which Trump was re-elected. B and I briefly talked about the elections in this episode, but I decided to edit and release it earlier, because B radically tries to move us forward, and their dedication and energy made me feel so much better; hopefully this episode will make you feel better as well.
In this episode, B Cavello, Director of Emerging Technologies at the Aspen Institute, and I talked about responsible tech: what it looks like, what limits the design and development of ethical artificial intelligence, but also the importance of media and tech literacy and bringing diverse voices to the tech conversation.
Created, hosted and produced by Mélissa M'Raidi-Kechichian.
Culture and the arts are what make humans... human. Yet they are not accessible to just everyone. When you think about it, how do you make a play accessible to visually or hearing impaired people? How can we use technology to translate art into senses it was not designed for in the first place? How do tech and art relate to disability justice?
More often than not, people with disabilities are not included in the design process of artistic spaces, which end up inaccessible and unaccommodating to bodies differing from the norm.
In this episode, Colin Clark, inclusive designer, creative technologist, and co-founder of the Lichen Community Systems Worker Cooperative Canada and of the Data Communities for Inclusion Solution Network at CIFAR, joined me to talk about making the art world accessible with technology and the power of community-led design, before moving toward another, related topic: the use of ChatGPT by artists and the labor exploitation of the creative sector, and more!
Created, hosted and produced by Mélissa M'Raidi-Kechichian.
We produce a LOT of data, tweet for any reason, and post a gazillion pictures online, and that will somehow give future generations an idea of who we are and where we are at, culturally speaking. Of course, and thankfully, not every piece of digital content is archived; it has to have cultural or historical significance. But how do you determine that? Who should decide which stories get to be archived or unarchived? Are colonial dynamics taken into consideration when archiving contemporary records, and what measures are being taken for archives to be inclusive? To answer these questions and more, I had the pleasure to welcome Valerie Love, Senior Digital Archivist at the Alexander Turnbull Library, which holds the archives and special collections of the National Library of New Zealand.
Created, hosted and produced by Mélissa M'Raidi-Kechichian.
Facing an injustice and requiring legal representation is a tedious, complex, and time-consuming process, on top of being expensive and unaffordable for many who need help. This hurdle greatly contributes to the prevalence of injustices, and today we are focusing on the case of a Nigerian non-profit, Citizens’ Gavel, which is addressing this hurdle by leveraging AI to promote access to justice.
Created, hosted and produced by Mélissa M'Raidi-Kechichian.
Growing up in the Bay Area with a future drawn for her, Nidhi Sinha did everything she was supposed to do: go to a private high school, pursue a degree in Computer Science and Mathematics, and work at a Big Tech company in the Bay. Until she got into AI ethics and burst the bubble she had grown up in. What happens when surveillance happens in your own backyard, and the Big Tech companies you were told to reach for growing up are responsible for harming communities locally and beyond borders?
Nidhi decided to get involved at CAIDP and at the Citizens Privacy Coalition, and recently to direct her first documentary on surveillance and privacy rights in the Bay Area. In this episode, she talks about how Big Tech has impacted local Bay Area life, the growing wealth gap in the region, the extent of surveillance, and most importantly what we can do to resist and fight back.
Support the crowdfunding campaign: https://seedandspark.com/fund/watch-the-watchers?token=9a83801e4562a6e1e848ee4517b978856dfb832076e4e508745120edc4701853
Created, hosted and produced by Mélissa M'Raidi-Kechichian.
AI products and features are becoming ubiquitous. When the targets of AI systems are children in school, it should be common sense that the impacts of these technologies be assessed before they reach the “market”, and that the privacy of the kids subjected to them be the number one priority of school boards. Of course, it’s not, and serious questions arise. How do the AI hype and the mass deployment of AI “solutions” impact education in the United States? What are the risks and benefits for the educational system? Who is in charge, and whom does it impact? To answer these questions and more, I am thrilled to have welcomed the most ✨outspoken✨ person I know (and I know quite a few), Shana V. White, who works at the intersection of racial justice and tech as the Director of CS equity initiatives at the Kapor Center.
Created, hosted and produced by Mélissa M'Raidi-Kechichian.
Sexism and oppression are tied to culture, which in turn shapes the tech industry and the design and deployment of technology. To talk about the relationship between the patriarchy and women in tech, and how to fight back, I talked with Mia Shah-Dand, founder of Women in AI Ethics and creator of the 100 Brilliant Women in AI Ethics list. In this episode, Mia talks about her personal story and upbringing, what women have to put up with when they work in tech, gender-based violence enabled by technologies, diversity in tech, and how to fight back and reclaim space.
Created, hosted and produced by Mélissa M'Raidi-Kechichian.
The Digital Services Act (DSA) was recently passed in the European Union, and I wondered how this piece of regulation relates to climate change and how it impacts online activism. In this episode, Rachel Griffin, PhD candidate and lecturer in law at Sciences Po Paris, talks about the intersections between platform regulation and digital justice, the impacts of tech on the environment, shadow banning, and more.
Check out Rachel’s policy brief on environmental risks in the DSA: https://www.hertie-school.org/en/news/detail/content/climate-breakdown-as-a-systemic-risk-in-the-digital-services-act
Created, hosted and produced by Mélissa M'Raidi-Kechichian.