Let's Chat Ethics

Join Amanda and Oriana, two friends who love to talk about the state of AI and its ethical implications and sometimes a special guest. Anything goes from academic papers to art, movies and books in the field of AI. You can connect with us via our website: www.letschatethics.co.uk

2025 Relaunch

We are back!! Happy 2025! We are kicking off the year with a quick catch-up on our past year, Meta's freedom-of-speech changes, and Elon Musk's influence on Twitter. We are looking forward to bringing you bi-weekly news, topics and guests, all in AI and AI Ethics. Stay tuned!

01-16
19:10

We are back! Let's Chat ChatGPT with Dirk Hovy

Older and wiser, Oriana and Amanda are back to chatting ethics after a hiatus! In this episode, we dive into the world of AI and the technology behind it. Meet ChatGPT, a large language model developed by OpenAI, and learn about its capabilities, limitations, and potential impact on our daily lives. From language generation to answering complex questions, we'll discover how ChatGPT works and how it's being used to enhance human capabilities. Join us as we engage in a conversation with Prof. Dirk Hovy to understand the ethical implications and the future of this rapidly advancing technology. Get ready to be amazed and informed as we explore the fascinating world of AI. (*)This description was written using ChatGPT and some human editing skills.

02-09
55:46

The Stochastic Parrot & The Human Mind: Are We the Same?

We're joined by Ramsay, founder and CEO of Mission Control, a leading AI Safety company building trust infrastructure for AI systems where failure is not an option. With a career spanning brain architecture, behavioral engineering, theoretical chemistry, and digital therapeutics, Ramsay has been at the forefront of where technology, innovation, and human flourishing intersect. Now, he and his team are accelerating AI Safety to ensure that synthetic intelligence, humanity's greatest invention, remains a force for good.

In this episode, we dive deep into the nature of intelligence, both artificial and human. The term stochastic parrot, coined by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in their 2021 paper On the Dangers of Stochastic Parrots, describes large language models (LLMs) that generate language without true understanding, merely predicting words based on probability. But what if humans aren't so different? Ramsay brings a provocative perspective: maybe our own cognition is just a more advanced version of predictive pattern-matching.

Together, we explore the future of Responsible AI, the ethical boundaries of intelligent machines, and what the accelerating pace of AI means for the workforce. Will automation redefine the nature of work itself? What safeguards are needed to ensure AI benefits society? And in a world where machines mimic human cognition, what does it truly mean to be intelligent?

Tune in for a thought-provoking discussion at the frontier of AI, ethics and the future of work.

03-27
45:52

Surveillance Capitalism: Myth or Reality?

Is surveillance capitalism as dangerous as its critics claim, or have we overstated its risks? In this episode, we dive into Peter Königs' controversial defense of surveillance capitalism, challenging the dominant narrative that Big Tech is eroding privacy, manipulating politics and fueling government surveillance.

We explore:
🔹 The true impact of targeted advertising – is it really manipulating us?
🔹 The role of social media in politics – filter bubbles, misinformation, and polarisation.
🔹 The mental health debate – does social media really "rewire" our brains?
🔹 The government surveillance paradox – are Big Tech companies enablers or privacy defenders?

Join us as we debunk myths, analyse real-world data, and discuss the ethics of digital capitalism in an era dominated by Google, Facebook and Twitter/X.

For those interested in reading the research paper: https://link.springer.com/article/10.1007/s13347-024-00804-1

03-14
30:59

The Future of AI Regulation

As the UK's Labour government takes the reins, the future of AI regulation hangs in the balance. Will they align with the EU's AI Act, follow the US's light-touch approach, or carve out their own path? And with growing pressure from a potential second Trump administration, how much independence can the UK maintain?

In this episode, we break down Labour's AI strategy, the implications of Anthropic's recent contract to boost UK government efficiency, and whether AI policy will be driven by ethics or economic pressure. Is meaningful regulation even possible, or will the UK cave to industry and international influence?

Tune in as we explore the politics, power plays, and big tech lobbying shaping the future of AI regulation.

02-27
32:31

AI Unplugged: Controversies, Dramas, and Geopolitics

In this episode of "AI Unplugged," we dive deep into the latest controversies and dramas shaking the world of AI. First, we unravel DeepSeek, exploring the hype, privacy concerns and data security issues, and comparing it with OpenAI.

Next, we delve into the high-stakes drama between OpenAI and Elon Musk, examining Musk's audacious $97.4 billion bid to take over OpenAI and the ensuing power struggle with Sam Altman.

Finally, we discuss the broader implications of AI on geopolitics, highlighting how nations are navigating the complex landscape of AI development and Big Tech's involvement in it!

Tune in to stay informed about the latest developments in AI and their impact on our world. What are your thoughts on these topics? We'd love to hear from you!

02-13
27:59

Swiping Right on Ethics: Love and Relationships in the Age of Algorithms

What would an ethical dating app look like? Can tech help us build real connections, or is it making modern love worse? We sat down with Natasha McKeever and Luke Brunning, two philosophers of love and relationships from the Centre for Love, Sex & Relationships (@University of Leeds), to explore:

📱 How dating apps shape our choices
💔 The ethics of swiping culture
🤖 Can AI help us find love, or just monetize our loneliness?

🔥 This conversation is a must-listen for anyone curious about love, tech, and ethics! Do you think dating apps need an ethical makeover?

01-30
01:15:32

What's Happening with the EU's AI Act?

In this episode, we delve into the evolving landscape of AI regulation as we unravel the intricacies of the European Union's groundbreaking AI Act. Released as a comprehensive regulatory framework, the EU's AI Act is set to shape the future of AI development, deployment, and governance across the member states and the world. Join us as we explore the key provisions of the AI Act, examining its impact on both businesses and individuals. We'll discuss the high-risk AI applications that will face stringent regulations, the requirements for transparency and human oversight, and the implications for fostering innovation while ensuring ethical AI practices.

02-22
34:49

The Role of Humans in the Future of AI with Dr. Lydia Kostopoulos

This week we were joined by the incredible Dr. Lydia Kostopoulos. Lydia is a multifaceted expert who has worked with the United Nations, NATO, US Special Operations, the US Secret Service, IEEE, the European Commission, management consultancies, industry, academia and foreign governments, and has experience working in the US, Europe, the Middle East and East Asia. Lydia's expertise ranges across AI, AI Ethics, Cyber Security, Art, Fashion, Health and more!

In this episode we explore topics ranging from the role of humans in the future of AI, to the value humans offer vs AI, to the environmental impact of AI. In the episode we reference Lydia's recent talk on the Corporate Social Responsibility of AI, a 15-minute talk that we highly recommend, in which Lydia looks at some of the United Nations SDGs and what they mean.

The Corporate Social Responsibility of Artificial Intelligence: https://www.youtube.com/watch?v=dnV1E4XiEkY
Presentation: https://www.slideshare.net/lkcyber/the-corporate-social-responsibility-of-artificial-intelligence?from_search=4

More of Lydia's work:
EmpoweringWorkwear: https://www.empoweringworkwear.com
Project Nof1 Interview Series: https://www.projectnof1.com
Lydia's Portfolio: https://www.lkcyber.com
Lydia's Consultancy: https://abundance.studio

01-24
33:37

Exploring AI Governance with Lofred Madzou

Happy New Year! We are back in 2024, and what a year 2023 was for tech! In this episode, we delve into the fascinating and increasingly crucial realm of AI governance. As AI continues to evolve, questions of ethics, accountability, and regulation become paramount. Join us as we explore the challenges and opportunities surrounding AI governance with this week's special expert guest, Lofred Madzou. From the ethical considerations of autonomous systems to the role of policymakers in shaping AI policies, we look at the complex landscape of governing AI.

Lofred is the Director of Strategy at Truera, an AI Quality platform to explain, test, debug and monitor machine learning models, leading to higher quality and trustworthiness. Outside his day job, he is a Research Associate at the Oxford Internet Institute (University of Oxford), where he mostly focuses on the governance of AI systems through audit processes. Before joining Truera, he was an AI Lead at the World Economic Forum, where he supported global companies, across industries and jurisdictions, in their implementation of responsible AI processes, and advised various EU and Asia-Pacific governments on AI regulation. Previously, he was a policy officer at the French Digital Council, where he advised the French Government on AI policy. Most notably, he co-drafted the French AI Strategy.

01-10
40:33

The Wall of ChatGPT: Tackling Risks & Safety with Victoria Vassileva

This week we were joined by Victoria Vassileva, who is the Sales Director at Arthur. In this episode we get into Victoria's experience of working with organisations across sectors to combat ethical challenges and risks, and more specifically how Arthur is solving these issues with regard to large language models (LLMs). We also approach the topic of the media hype surrounding AI being sentient, and why this takes attention away from the real risks and ethical issues that are happening today.

Victoria's bio & contact info:
LinkedIn: https://www.linkedin.com/in/vvassileva/
Twitter: https://twitter.com/hellovictoriav?lang=en-GB

Victoria Vassileva (she/her) is the Sales Director at Arthur. She has spent over a dozen years in and around data, starting as an analyst programming in SAS before transitioning to the GTM side. She now works to help complex organizations bring operational maturity and Responsible and Trustworthy practices to AI/ML initiatives across industries. At Arthur, Victoria focuses primarily on partnering with F100 enterprises to bring comprehensive performance management and proactive risk reduction and mitigation across their entire AI production stack. She is deeply motivated by the opportunity to shift industry practices to a "front-end ethics" approach that places equity and fairness considerations at the forefront of machine learning and automation projects. She holds degrees in Mathematics and French from the University of Texas at Austin.

06-15
46:06

'Sex-For-Sale': Ethical Responsibility of Sex for Sale Websites with Ruth Ikwu

This week, we welcome the innovative Ruth Ikwu, an AI Ethicist and MLOps Engineer with a solid foundation in Computer Science. As a Senior Researcher at Fujitsu Research of Europe, Ruth delves into AI Security, Ethics and Trust, playing a role in crafting innovative and reliable AI solutions for cyberspace safety. In this episode, she educates us on the evolving landscape of online sex work, discussing how platforms like AdultWork, OnlyFans and PornHub inadvertently facilitate sex trafficking. This is a heavy topic and contains a lot of distressing information about sex trafficking; Ruth's work is extremely important in bringing forward accountability.

To learn more about Ruth's work on identifying human trafficking indicators in the UK online sex market: https://link.springer.com/article/10.1007/s12117-021-09431-0
Connect with Ruth: https://www.linkedin.com/in/ruth-eneyi-i-83a699118/

05-18
52:08

How to Tackle Hate Speech Online with Paul Röttger

In this episode we are joined by Paul Röttger, who is CTO & Co-Founder at Rewire and is completing his PhD in NLP at Oxford University. Paul chats to us about the challenges of tackling hate speech online, why he decided to pursue this challenge in his PhD, and how he started Rewire. More recently, Paul was part of an expert 'red team' hired by OpenAI to 'break' GPT-4; he explains what this involved and how it aimed to address some of the model's dangers.

If you want to connect with Paul:
Twitter: https://twitter.com/paul_rottger?lang=en
LinkedIn: https://www.linkedin.com/in/paul-rottger/

05-04
43:24

How Do We Value AI Art? The Philosopher's Takeover with Alba Curry & Maddy Page

The Philosopher's Takeover!! This is the first in a new monthly series led by Alba Curry, who is a Philosophy Professor at the University of Leeds. Alba will be joining us (as well as other special guests of her choice) once a month to do a philosophical deep dive into different episodes we have covered throughout the series. Our special guest this week is Maddy Page, who is a PhD student at the University of Leeds focusing on the Philosophy of Art. In this episode Maddy covers the ontology of artwork and why it's important. Join us as we deep dive into the world of AI-generated art and its value! Can AI be considered an artist, or does art need to be created by a human? Why do we value art based on who the artist is, and what do we define as art?

Alba: https://www.linkedin.com/in/albacurry/
Maddy: Twitter @madeleinesjpage

04-24
52:16

AI News & Sentient Robots..

In this week's episode, Oriana and Amanda discuss Sentient Robots and AI News, exploring the latest news in AI, including the famous 'Letter' signed by Elon Musk. As AI technology advances, we are witnessing the emergence of 'sentient robots': machines that are claimed to be experiencing emotions, developing personalities, and even exhibiting creativity. In this podcast, we explore sentient robotics and examine their potential impact on society, culture, and the economy. So whether you're a technology enthusiast, industry professional, or just curious about the future of robotics and AI, tune in to the Sentient Robots and AI News episode for thought-provoking discussions and insights into the world of sentient machines. (*)This description was written using ChatGPT and some human editing skills.

04-13
37:26

AI Audits & ChatGPT with Marc van Meel

This week we are joined by Marc van Meel who is an AI Ethicist and public speaker with a background in Data Science. He currently works as a Managing Consultant at KPMG, where he helps organizations navigate the ethical implications of Artificial Intelligence and Data Science. In this episode we get into the future of technology in our society, AI auditing, the upcoming AI regulation and of course ChatGPT! To contact Marc: https://www.linkedin.com/in/marc-van-meel/

03-30
48:43

Why is Responsible AI Important with Toju Duke

The start of a series of Responsible AI chats with Toju Duke! Toju is a popular keynote speaker, author, and thought leader on Responsible AI. She is a Programme Manager at Google, where she leads various Responsible AI programmes across Google's product and research teams with a primary focus on large-scale models. She is also the founder of Diverse AI, a community interest organisation with a mission to support and champion underrepresented groups to build a diverse and inclusive AI future. She provides consultation and advice on Responsible AI practices. Toju's book "Building Responsible AI Algorithms" is available for preorder.

In this episode we focus on why Responsible AI is important to Toju, her work at Google as a Responsible AI Programme Manager, and her new venture Diverse AI.

To learn more about Toju: www.tojuduke.com
To learn more about Diverse AI: www.diverse-ai.org

03-16
36:20

Ableism in Tech with Tess Buckley

*Trigger warning* We recognise that this episode includes some harmful prejudices that people with disabilities face, and this can be upsetting for some listeners. In this episode we are joined by Tess Buckley, whose primary interests include studying the intersection of AI and disability rights, AI governance and corporate digital responsibility, amplifying marginalised voices in data through AI literacy training (HumansforAI), and computational creativity in music AI systems (a personal project). Our conversation covers topics across ableism in biotechnology and society as a whole, and how disability is represented in datasets. We are hoping this episode opens more dialogue, as people with disabilities are often not included in conversations around bias.

Connect with Tess on social media: https://www.linkedin.com/in/tess-buckley-a9580b166/

03-02
45:12

Intention, intention, intention: Are we all utilitarians when it comes to judging machines?

This week Alba and Amanda discuss a new book called How Humans Judge Machines by Cesar A. Hidalgo.

Get the book at: https://www.judgingmachines.com/
Eric Schwitzgebel's Aiming for Moral Mediocrity: https://faculty.ucr.edu/~eschwitz/SchwitzAbs/MoralMediocrity.htm
The puppy cartoon: https://images.app.goo.gl/C4zKG5hsfE6419Ra6

09-02
01:07:29

Emotion AI: What kind of happiness are we aiming for?

This week Alba Curry joins us to discuss emotion AI, grounded in the story "Under Old Earth". Are we aiming for happiness that is "bland as honey and sickening in the end"?

Resources:
Under Old Earth by Cordwainer Smith
How Emotions Are Made by Lisa Feldman Barrett
Affective Computing by Rosalind W. Picard

07-08
51:14
