
Socially Conscious AI

Author: Valencio Cardoso and Roman Bercot


Description

Socially Conscious AI is a podcast that explores how artificial intelligence can serve people and the planet—not just profit. Each episode features conversations with founders, researchers, and thought leaders using AI to build a more ethical, inclusive, and sustainable future.
15 Episodes
The world's next billion users are redefining AI's future, and they're not where Silicon Valley expects them to be. Discover why digital sovereignty matters more than profit margins, and how 90% of the world's youth are driving unprecedented tech optimism from the Global South.

What You'll Learn

Core Insights:
• How digital sovereignty empowers communities to control AI development on their own terms
• Why the Global South shows explosive AI optimism while the West grows increasingly pessimistic
• Real examples of marginalized communities using AI for social change and protest movements
• The creator economy revolution driven by AI democratization and multilingual content

Key Topics Covered:
• Digital infrastructure and the ethics of Big Tech's undersea cable investments
• Balancing AI diversity with scalability: why inclusion drives better business outcomes
• CARE principles (Collective benefit, Authenticity, Responsibility, Ethics) for ethical AI development
• The romance economy: how AI enables intimate connections across cultural barriers
• Behind-the-scenes work at Utrecht University's Inclusive AI Lab with partners like Adobe

Timestamps
• 00:00 - Digital sovereignty definition and importance
• 01:46 - Global South optimism vs. Western tech pessimism
• 04:36 - AI enabling social change and protest movements
• 09:24 - Digital sovereignty in practice: infrastructure and data control
• 13:45 - Diversity vs. scalability: breaking the false paradox
• 16:00 - Inclusive AI Lab methods and Adobe collaboration
• 19:50 - CARE principles for ethical AI development
• 25:11 - The global romance economy and AI intimacy

Guest Expert
Dr. Payal Arora - Professor of Inclusive AI Cultures at Utrecht University, co-founder of the Inclusive AI Lab, named a "Next Billion Champion" by Forbes and one of 100 Brilliant Women in AI Ethics, and author of the award-winning books "The Next Billion Users" and "From Pessimism to Promise."

Why This Matters Now
With AI investment reaching $33.9 billion globally and digital sovereignty becoming a national priority for countries worldwide, understanding inclusive AI development is critical. As 1.8 billion people now use AI tools but only 3% pay for premium services, the monetization gap represents a massive opportunity for culturally responsive AI solutions.

Subscribe for more conversations with ethical AI pioneers challenging who shapes technology's future.

Connect with our guests:
• Dr. Payal Arora on LinkedIn: linkedin.com/in/payalarora
• Inclusive AI Lab: inclusiveailab.org
• Utrecht University Centre for Global Challenges

#DigitalSovereignty #InclusiveAI #GlobalSouth #AIEthics #TechOptimism #NextBillionUsers #CreatorEconomy #DigitalDemocracy #AIForGood #TechInnovation #DigitalInclusion #AIGovernance #SociallyConsciousAI

Socially Conscious AI explores cutting-edge innovations transforming our world for the better, featuring conversations with brilliant minds leveraging AI's power to address humanity's most important challenges.
What if the fastest way to responsible AI… is to slow down? World-renowned AI ethicist Di Le joins us to unravel the myth of "frictionless" AI. If you think making everything faster is safer, think again. Discover why intentional friction, those thoughtful speed bumps in design, is crucial for safe, ethical, and user-centered artificial intelligence.

In this episode:
• The hidden dangers of rushing AI innovation
• What the GPT-5 rollout reveals about emotional design and trust
• How "intentional friction" protects human autonomy and prevents disaster
• Concrete strategies for UX designers, product leaders, and anyone building with AI
• Real stories from robotics and Fortune 500 deployments

Are we flying too close to the sun, Icarus-style, with AI? Di Le has answers, and they might save us all.

👉 Like, subscribe, and hit the bell for more social impact tech.
🗣️ Share your take in the comments: Should AI progress be slowed down for safety?
🔗 Connect with Di Le: LinkedIn: Di Le | diebot.io
🎙️ Catch the full podcast everywhere or subscribe for more episodes!
What happens when artificial intelligence makes critical decisions about your job, your healthcare, and your life, but no one can explain why? In this groundbreaking episode, we explore how ethical AI isn't just an academic concept but a business imperative that could save your organization's reputation, legal standing, and bottom line.

Join hosts Roman Bercot and Valencio Cardoso for an eye-opening conversation with Dr. Reid Blackman, founder and CEO of Virtue Consultancy and author of the acclaimed book "Ethical Machines". As the expert who's guided NASA, the FBI, and Fortune 500 companies through AI's ethical minefields, Reid reveals why most corporate AI ethics principles are "bullshit" and shares the practical framework that turns AI's biggest risks into competitive advantages.

🎯 What You'll Discover:
• The AI Agent Crisis: Why the next generation of AI agents could create unprecedented ethical chaos, and how to prepare for systems that make decisions across dozens of databases and models simultaneously
• Beyond Token Ethics: The shocking truth about why corporate AI principles fail, and Reid's actionable framework that transforms vague values into concrete procedures that actually work
• Cultural Bias at Scale: How AI systems from a handful of tech giants create a global monoculture, and practical strategies for developing culturally respectful AI without breaking the bank
• The Philosophy-to-Practice Pipeline: Reid's fascinating journey from fireworks-wholesaling philosophy professor to advising the Canadian government on AI regulations, and how deep philosophical thinking translates into boardroom-ready solutions
• Government Gap Analysis: Why current regulations are failing to address the real-world complexity of AI deployment, and what companies can do to stay ahead of the regulatory curve

💡 Key Insights from This Episode:
🔸 "Most companies have bullshit AI principles": Reid explains why fairness, transparency, and accountability sound good but mean nothing without specific procedures
🔸 The Foundation Model Fallacy: Why pointing responsibility only at OpenAI, Google, and Microsoft misses the downstream risks that companies actually face
🔸 The Human-in-the-Loop Myth: How traditional oversight breaks down when AI agents process billions of data points across complex systems
🔸 The 80/20 Ethics Rule: Reid's practical approach to managing AI risks when you can't possibly test every scenario

🎧 Perfect For:
✅ Business executives implementing AI systems
✅ Technology leaders managing AI ethics programs
✅ Policy makers working on AI governance
✅ Anyone concerned about AI's societal impact
✅ Philosophy enthusiasts interested in applied ethics

📚 Resources Mentioned:
• "Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI" by Reid Blackman (Harvard Business Review Press)
• Virtue Consultancy: AI ethical risk consulting
• Ethical Machines Podcast: Reid's deep-dive discussions on AI ethics

🔗 Connect with Dr. Reid Blackman:
🌐 Website: reidblackman.com
🌐 Company: virtueconsultants.com
📧 Newsletter: reidblackman.substack.com
🔗 LinkedIn: linkedin.com/in/reid-blackman
🎧 Podcast: Ethical Machines

🎙️ About Socially Conscious AI:
We explore cutting-edge innovations transforming our world for the better, bringing you conversations with brilliant minds and ethical pioneers leveraging AI's power to address humanity's most important challenges. Join us as we navigate the intersection of technology, ethics, and social good.

👥 Your Hosts: Roman Bercot & Valencio Cardoso tackle the big questions about AI's role in creating a better world, one conversation at a time.

💬 Join the Conversation: What ethical AI challenges is your organization facing? Share your thoughts in the comments and let us know what topics you'd like us to explore next!

🔔 Subscribe for more insights on ethical AI, responsible innovation, and the brilliant minds working to ensure technology serves humanity's best interests.
What if AI isn't the job-killer we fear, but the catalyst for unprecedented human flourishing?

In this thought-provoking episode of Socially Conscious AI, we sit down with Justin Bean, tech executive, investor, and author of "What Could Go Right? Designing Our Ideal Future to Emerge From Continual Crises to a Thriving World," to explore how artificial intelligence could usher in an era of sustainable abundance rather than scarcity and fear.

🔥 Key Topics Covered:

🌱 Sustainable Abundance Framework
• Why the choice between environmental protection and human prosperity is a false one
• Moving beyond "degrowth" to sustainable abundance
• The four quadrants: sustainable vs. unsustainable, scarcity vs. abundance

🤖 AI as an Empowerment Tool
• How AI democratizes access to expertise and resources
• Real-world examples: from SpaceX rockets to contract analysis
• Why everyone becomes a "manager" of AI specialists

💼 Triple Bottom Line Business
• People, planet, and profit working together
• How sustainable practices reduce costs and risks
• Building businesses that solve grand challenges

🚗 Practical AI Applications
• Computer vision in transportation and smart cities
• Material development and drug discovery
• The future of autonomous vehicles

⚖️ Navigating AI's Challenges
• Democratization vs. corporate control concerns
• Privacy, data, and transparency in an AI-driven world
• Preventing the "entertainment into oblivion" trap

💡 Standout Quote:
"We're approaching this sort of singularity, and we're nearing the event horizon of AI being as capable as us in digital and physical worlds. How can those be our brothers and sisters that are helping us to achieve more in this world?"

🎯 This Episode Is Perfect For:
• Entrepreneurs and business leaders
• Sustainability advocates
• AI enthusiasts and skeptics alike
• Anyone curious about technology's role in solving global challenges
• Parents concerned about AI's impact on future generations

📚 About Justin Bean:
Justin is a tech executive and investor focused on building a future of sustainable abundance. He has worked globally on smart cities, AI, sustainable industry, and transportation projects at both startups and Fortune 500 companies. His book "What Could Go Right?" challenges us to envision positive futures rather than focusing solely on potential disasters. Connect with Justin: JustinCBean.com

🔔 Subscribe for more conversations with ethical AI pioneers who are harnessing technology's power to address humanity's greatest challenges.

💬 What's your take? Are you optimistic or pessimistic about AI's role in our future? Share your thoughts in the comments!

#AI #SustainableAbundance #EthicalAI #Innovation #Sustainability #FutureOfWork #TechOptimism #ArtificialIntelligence #ClimateChange
In this retrospective episode of 'Socially Conscious AI,' we wrap up Season 1 by revisiting the invaluable insights from our amazing guests. This season, we've discussed the transformative potential of AI in ethics, mental health, education, and the environment. Join us as we reflect on the conversations, share our biggest takeaways, and explore themes like accessibility, sustainability, and the importance of ethical AI. Whether you're a long-time listener or new to the show, this episode is filled with thought-provoking discussions and ideas that have challenged and inspired us. Don't miss out on this comprehensive summary of our journey through Season 1, and get a glimpse of what's coming in Season 2!

00:00 Reflecting on Season 1 of Socially Conscious AI: Valencio Cardoso and Roman Bercot
01:00 Origins of Socially Conscious AI
03:17 Emerging Themes: Accessibility and Sustainability
04:36 User Experience and Regulation
08:50 Guest Highlights and Key Takeaways
13:08 Reflections and Lessons Learned
20:49 Looking Ahead to Season Two
24:52 Audience Feedback and Closing Remarks

Find us on X, Instagram, and check out our website for more episodes at SociallyConsciousAI.com. If you enjoyed this episode, please subscribe and leave a review. Your support helps others discover the show!
Roman Bercot and Valencio Cardoso interview Duncan Mundell, Head of Invent at AltaML. With over 25 years of experience in business technology and consulting, Duncan shares his journey from selling computers and dial-up internet to leading AI innovation at AltaML. The conversation delves into responsible AI practices, the Coastal Resiliency Project, the importance of starting small with AI projects, and using AI for impactful outcomes like disaster response and healthcare advancements. The episode highlights AltaML's ethical and inclusive approach and the significance of aligning AI projects with core values and business goals.

00:00 Driving AI Innovation with Duncan Mundell, Head of Invent, AltaML
00:18 Introduction to Socially Conscious AI
00:36 Meet Duncan Mundell
01:18 Duncan's Journey to AltaML
02:50 AltaML and Its Unique Approach
03:54 AI for Social Good: Coastal Resiliency Project
07:55 Challenges and Innovations in AI Modeling
12:44 Responsible AI: Principles and Practices
20:41 Advising Clients on AI Implementation
30:57 Future Projects and Opportunities at AltaML
33:58 Conclusion and Contact Information

To learn more about Duncan Mundell and AltaML, visit AltaML.com or follow Duncan on LinkedIn. Find us on X, Instagram, and check out our website for more episodes at SociallyConsciousAI.com. If you enjoyed this episode, please subscribe and leave a review. Your support helps others discover the show!
Matthew Yip, Head of Product and Engineering at Benetech, explains how Benetech is revolutionizing accessible education through AI-powered tools like the Bookshare app. Explore the challenges and opportunities in developing inclusive educational technology, the role of digital textbooks, and customization that caters to diverse learning needs. Gain insights into the future of EdTech and the balance between rapid innovation and reliable accessibility solutions, driven by a passion for social impact. Subscribe for more conversations on AI trends and their social benefits.

00:00 Inclusive Education Through AI: A New Era for Students w/ Matthew Yip, VP of Product, Benetech
00:24 Welcome to Socially Conscious AI
00:43 Meet Matthew Yip from Benetech
01:10 Benetech's Mission and Impact
03:13 Innovating with AI in Nonprofits
04:50 Bookshare Plus Initiative
06:58 Challenges and Opportunities with AI
09:22 AI in Education and Accessibility
13:16 Features for Diverse Learning Styles
14:37 Making Content More Useful for Students
15:04 Personalizing Learning with AI
17:10 Challenges in Developing Socially Impactful Technologies
18:12 Impact Quantification in Nonprofits
20:18 Funding and Support for Bookshare
21:11 Exciting Projects and Future Plans
22:47 Improving Accessibility of Legacy Materials
24:38 Leveraging Large Language Models for Accessibility
26:03 Bookshare's Availability and Future Vision
26:26 Conclusion and Final Thoughts

To learn more about Matthew Yip and Benetech, visit Benetech.org or follow Matthew on LinkedIn. Find us on X, Instagram, and check out our website for more episodes at SociallyConsciousAI.com. If you enjoyed this episode, please subscribe and leave a review. Your support helps others discover the show!
Laurie Smith, Head of Foresight Research at Nesta, explores the profound impact of emerging AI technologies on society. This episode revolves around Nesta's mission to leverage innovative solutions for health, childhood equality, and sustainable futures. Hosts Valencio Cardoso and Roman Bercot discuss the multidisciplinary approaches, such as data science and strategic foresight, used by Nesta's Discovery Team to identify technology trends. Examples include AI applications in early childhood education and sustainable home heating in the UK. The discussion critically examines essential ethical considerations, including AI's impact on employment, privacy, and the arts. Laurie also shares his personal journey in foresight research, highlighting notable projects and strategies for responsible AI innovation.

Episode Timeline:
00:00 Introduction to Socially Conscious AI
00:40 Meet Laurie Smith: Foresight Research at Nesta
01:58 Understanding Nesta's Mission and Impact
05:54 Laurie's Journey into Foresight Research
08:25 Strategic Foresight and Emerging Technologies
15:52 AI for Good: Opportunities and Challenges
19:17 Interactive Tools for Early Childhood Education
20:02 AI Applications Beyond Early Childhood
21:08 Generative AI in Education
23:28 AI's Role in Workforce Enhancement
30:12 Ethical Considerations in AI Deployment
35:43 Future Directions and Innovations at Nesta
38:04 Conclusion and Contact Information

To learn more about Laurie Smith and Nesta, visit nesta.org.uk or follow Laurie on LinkedIn. Find us on X, Instagram, and check out our website for more episodes at SociallyConsciousAI.com. If you enjoyed this episode, please subscribe and leave a review. Your support helps others discover the show!
Dr. Caleb Sponheim, a user experience specialist at Nielsen Norman Group and former computational neuroscientist, discusses the potential for AI to redistribute power and address social inequalities. He shares insights from his background in neuroscience and quantitative research and how they apply to his current work in UX. The conversation explores the current state of AI applications in design workflows, the challenges and opportunities of AI tools, and the future of open-source AI models. Listeners will gain practical tips for integrating AI into their workflows and learn why staying up to date matters in this rapidly evolving field.

00:00 Introduction to Socially Conscious AI
01:02 Meet Dr. Caleb Sponheim
01:41 Caleb's Background in Computational Neuroscience
02:58 From Neuroscience to AI
04:48 The Evolution of Neural Networks
07:39 The UX Perspective on AI
08:20 Brain-Computer Interfaces and UX Challenges
18:26 Practical AI Applications in UX
21:48 Current State of AI in UX Design
24:07 AI Tools in Design: General Applications
24:57 Challenges with Current AI Design Tools
27:00 Evaluating AI's Fit in Workflows
28:42 Personal Experiences with AI in Design
35:39 Goblin Tools: A Handy AI Resource
39:01 Teaching AI in UX Design
44:34 Future of AI in Design and Beyond
49:24 Conclusion and Final Thoughts

To learn more about Caleb and Nielsen Norman Group, visit nngroup.com or follow Caleb on LinkedIn. Find us on X, Instagram, and check out our website for more episodes at SociallyConsciousAI.com. If you enjoyed this episode, please subscribe and leave a review. Your support helps others discover the show!
Valencio Cardoso and Roman Bercot interview James Veraldi, CEO and co-founder of Scenario. James discusses the growing problem of social isolation exacerbated by technology and how Scenario aims to address it by using AI and clinical visualization techniques for role-playing real-life challenges. With a career spanning roles at TikTok, Snapchat, and more, James shares insights into the ethical applications of AI in mental health, the importance of having reliable sounding boards, and his vision for using technology for social good. Tune in for an engaging conversation about the future of AI in tackling everyday and complex challenges.

00:00 Welcome and Introduction
01:42 Understanding Scenario: AI for Real-Life Role-Playing
01:58 Creating and Visualizing Scenarios
02:25 The Importance of Suggested Goals
06:17 The Problem of Modern Isolation
08:29 How Scenario Works: Visualization and Role-Playing
12:36 James Veraldi's Personal Journey
14:41 The Impact of Social Media and Technology
21:27 The Birth of Scenario: From Concept to Reality
28:13 Navigating Ethical Boundaries in AI
33:52 User Feedback and Future Vision
35:55 The Role of Memory in Personalized Companions
36:48 The Isolation Effect of Consumer Technology
37:21 The Evolution of the App's Design
40:25 AI's Role in Mental Health
40:41 The Future of AI in Mental Health
41:00 Challenges in the Mental Health Space
45:51 The Impact of AI on Global Warming and Biodiversity
46:30 The Importance of Communication in Climate Change
47:57 Optimism in the Mental Health Sector
48:55 The Dystopian View of Silicon Valley
53:01 Examples of Positive Tech Innovations
58:04 Final Thoughts and Advice for Entrepreneurs

To learn more about James Veraldi and Scenario, visit https://www.getscenario.ai/ or follow James on LinkedIn. Find us on X, Instagram, and check out our website for more episodes at SociallyConsciousAI.com. If you enjoyed this episode, please subscribe and leave a review. Your support helps others discover the show!
In this enlightening episode of Socially Conscious AI, hosts Valencio Cardoso and Roman Bercot discuss AI ethics with Dr. Kevin LaGrandeur, Director of Research for the Global AI Ethics Institute. Dr. LaGrandeur explores the dangers of overhyping AI technologies, the essential precepts for responsible AI use, and the vital role of government regulation. With a rich academic background spanning multiple disciplines, he provides insights into how AI is both a promising and perilous tool. Tune in to learn about AI's impacts on jobs, ethical considerations in AI development, and potential future innovations.

00:00 The Hype and Risks of AI Adoption
00:45 Introduction to Socially Conscious AI
01:06 Meet Dr. Kevin LaGrandeur
02:04 Kevin's Journey into AI Ethics
05:44 The Role of AI Ethics in Business
08:19 The Impact of ChatGPT and AI Regulation
23:19 Positive Applications of AI
28:04 The Future of AI and Job Creation
36:26 Final Thoughts and Upcoming Projects
41:06 Conclusion and Farewell

To learn more about Dr. Kevin LaGrandeur and the Global AI Ethics Institute, visit https://globalethics.ai/ or follow Kevin on LinkedIn. Find us on X, Instagram, and check out our website for more episodes at SociallyConsciousAI.com. If you enjoyed this episode, don't forget to subscribe and leave a review! It helps others find the show.
In this episode of Socially Conscious AI, hosts Valencio Cardoso and Roman Bercot welcome Andy Spezzatti, an entrepreneur, AI engineer, and CTO at Taltrics. Andy discusses his extensive experience with the AI for Good Foundation, which supports sustainable and social goals using advanced technologies. He highlights key projects like the SDG Data Catalog and the SDG Trend Scanner, detailing their impact on sustainability research and monitoring. The conversation covers the potential and challenges of AI in accelerating progress towards the United Nations' 17 Sustainable Development Goals (SDGs) and emphasizes the importance of ethical AI practices. Tune in to learn how AI is being harnessed for humanitarian purposes and the collaborative efforts involved in making a global impact.

00:00 AI and the Future of Global Sustainability w/ Andy Spezzatti, CTO of Taltrics
00:27 Introduction to Socially Conscious AI
00:47 Meet Andy Spezzatti: AI Engineer and Entrepreneur
01:51 Overview of the AI for Good Foundation
03:18 Key Values Driving the AI for Good Foundation
04:56 Evolution and Global Footprint of the AI for Good Foundation
06:54 Balancing Roles: CTO at Taltrics and AI for Good
13:57 Highlighting Impactful Projects: SDG Data Catalog
18:38 Understanding Sustainable Development Goals (SDGs)
23:31 AI in Achieving Sustainable Development Goals
24:23 Challenges and Opportunities in AI Development
26:44 Collaboration and Team Building in Non-Profits
32:46 Exciting Future Projects and Ethical AI
34:19 Outro and Final Thoughts

To learn more about Andy Spezzatti and the AI for Good Foundation, visit AI4good.org or follow Andy on LinkedIn. Learn about the Mozart sound clip Andy mentioned in the episode here, and listen to a music sample from it: with GPT and with LSTM. Find us on X, Instagram, and check out our website for more episodes at SociallyConsciousAI.com. If you enjoyed this episode, don't forget to subscribe and leave a review! It helps others find the show.
Valencio Cardoso and Roman Bercot are joined by Di Le, an AI ethicist renowned for her work in responsible AI development. Di shares her journey from UX design in video games to conducting impactful research in robotics and AI ethics. She discusses the importance of human-centered AI, how to bridge the technological divide, and the pragmatic approach of responsible AI. They dive into the controversy around Adobe's terms of service and the role of AI in shaping future job landscapes. Stay tuned to learn how organizations and individuals can better prepare for the evolving AI-driven world and gain insights on skills crucial for the future. If you enjoyed this episode, please subscribe and share it with someone who could benefit from this insightful conversation!

00:00 Balancing Automation and Autonomy w/ Di Le, AI Ethicist
00:46 Meet Di Le: AI Ethicist and Innovator
02:04 Di Le's Journey into Responsible AI
04:33 Understanding Ethical, Responsible, and Human-Centered AI
07:07 Current Challenges in AI: Adobe's Controversial Terms
10:21 The Technological Divide and Accessibility
11:53 AI's Impact on Jobs and the Future Workforce
22:23 Balancing Human Workers and AI Automation
26:42 Closing Thoughts and Resources

To learn more about Di Le and her work, visit her personal website or follow Di on LinkedIn. Find us on X, Instagram, and check out our website for more episodes at SociallyConsciousAI.com. If you enjoyed this episode, don't forget to subscribe and leave a review! It helps others find the show.

Di's helpful resource links:
• Getting started in AI:
  ◦ AI for Everyone
  ◦ Generative AI for Everyone
  Even if you're well-versed in AI, these 7-8 hour courses are valuable for understanding how to communicate AI concepts to a general audience.
• Google's Quickstart Guide to Prompting:
  ◦ Download the Guide
• AI Papers & Reports:
  ◦ Microsoft Report on AI and Productivity (Pt 2)
  ◦ Pt 1: AI at Work
  ◦ Practices for Governing Agentic AI Systems
  ◦ Should Agentic Conversational AI Change How We Think About Ethics?
• AI Governance:
  ◦ Summary of the EU AI Act
  ◦ Summary of the White House Executive Order on AI
• Favorites:
  ◦ Nooscope: A Topographical Analysis of Machine Learning
  ◦ MAD 2024 AI Landscape: a visual landscape of AI's growth, published annually since 2018.
In this episode of Socially Conscious AI, hosts Valencio Cardoso and Roman Bercot interview Amy Brown, the founder and CEO of Authenticx. Amy discusses how Authenticx leverages AI-driven conversational intelligence to help healthcare organizations understand and act on patient voices at scale. She shares the innovative approach of 'listening AI' in structuring unstructured data from customer interactions. Amy also touches on ethical considerations, organizational advice for implementing AI, and how Authenticx is making a significant impact on healthcare outcomes and patient care. Tune in to learn how AI can connect and humanize healthcare systems.

00:00 From Social Work to AI Leadership w/ Amy Brown, founder and CEO of Authenticx
00:43 Meet Amy Brown: Visionary in Healthcare Technology
01:37 The Mission of Authenticx
04:45 Amy Brown's Journey to Founding Authenticx
08:17 Leveraging AI for Conversational Intelligence
11:44 The Eddy Effect: Identifying Customer Friction
16:20 Ethical Considerations in AI
21:33 Social and Business Determinants of Health
28:31 Real-Life Impact Stories
30:59 Advice for Implementing AI in Healthcare
35:06 Conclusion and How to Learn More

To learn more about Amy Brown and Authenticx, visit Authenticx's website or follow Amy on LinkedIn. Find us on X, Instagram, and check out our website for more episodes at SociallyConsciousAI.com. If you enjoyed this episode, don't forget to subscribe and leave a review! It helps others find the show.
What Really Powers Artificial Intelligence? In this episode, we sit down with Krystal Kauffman, research fellow at the Distributed AI Research Institute (DAIR), former lead organizer at Turkopticon, and passionate data worker advocate, to reveal the tough truths and hidden stories behind the AI technology changing our world.

If you use digital platforms, voice assistants, or social media, you owe a debt to the invisible data workers who filter our content, train our algorithms, and protect us, all from the shadows. This episode uncovers their realities, celebrates their organizing, and gives you practical ways to make a difference. Join the movement for ethically conscious AI!

Discover:
• The personal journey that brought Krystal from political campaigns and sudden illness to Mechanical Turk and global data worker organizing.
• The often-invisible labor force powering modern AI: data annotators, content moderators, and gig workers worldwide.
• First-hand accounts of exploitation, mental health tolls, wage disparities, and shocking working conditions faced by data workers in countries like Kenya, Syria, Brazil, and Germany.
• How the Data Workers' Inquiry amplifies worker voices and supports community-led research to expose real conditions inside companies like Meta, Amazon, and major outsourcing firms.
• The mounting movement for global data worker rights, including recent wins by the African Content Moderators Union and the Data Labelers Association.
• The urgent need for policy change, transparency, and mental health protections in digital labor.

Key Chapters / Timestamps:
• 0:00 Introduction & Krystal's journey to gig work
• 2:00 Life as a Mechanical Turk worker: health, pay, and invisible labor
• 5:30 The impact of gig work on vulnerability and mental health
• 7:30 Ethical dilemmas: Data annotation for governments and tech giants
• 10:00 What really counts as "artificial intelligence"?
• 12:10 Shocking stories from the Data Workers' Inquiry project
• 16:00 Why tech companies outsource, and how workers are fighting back
• 18:20 What real transparency and fair pay would look like
• 22:50 The case for legislative and systemic solutions
• 23:45 The myth of full AI automation: why human labor won't disappear
• 25:55 Behind the scenes: What data work actually is, and the rise of worker-led organizing
• 28:50 Building trust and organizing across borders (Turkopticon)
• 30:53 The power of community-based research (Data Workers' Inquiry & DAIR)
• 33:40 Krystal's takeaway for listeners: amplify worker voices, demand transparency, and never forget the people behind the tech

Take Action!
• Elevate worker voices and demand transparency from tech companies.
• Learn more and connect with organizations fighting for better digital labor standards:
  ◦ Krystal Kauffman: LinkedIn
  ◦ Data Workers' Inquiry: data-workers.org
  ◦ UNI Global Union: uniglobalunion.org
  ◦ Data Labelers Association: datalabelers.org
  ◦ Distributed AI Research Institute (DAIR): dair-institute.org

→ Like, comment, and subscribe for more true stories behind AI, and help us keep these vital voices in the conversation!

Featured/Referenced:
• Krystal Kauffman (DAIR, Turkopticon)
• The Data Workers' Inquiry: worldwide research by and for data workers
• African Content Moderators Union, Data Labelers Association, UNI Global Union
• DAIR Institute: worker-centered research for ethical AI development

#KrystalKauffman #AI #DataWorkers #GigWork #Exploitation #GlobalOrganizing #DigitalLabor #EthicalAI #MentalHealth #Transparency #LaborRights #Turkopticon #DataWorkersInquiry #CommunityResearch #UNIGlobalUnion #DataLabelersAssociation #DAIRInstitute