Let's Chat Ethics

Author: Let's Chat Ethics


Description

Join Amanda and Oriana, two friends who love to talk about the state of AI and its ethical implications, sometimes joined by a special guest. Anything goes, from academic papers to art, movies and books in the field of AI. You can connect with us via our website: www.letschatethics.co.uk
50 Episodes
In this episode, we delve into the evolving landscape of AI regulation as we unravel the intricacies of the European Union's groundbreaking AI Act. Released as a comprehensive regulatory framework, the EU's AI Act is set to shape the future of AI development, deployment, and governance across the member states and the world. Join us as we explore the key provisions of the AI Act, examining its impact on both businesses and individuals. We'll discuss the high-risk AI applications that will face stringent regulations, the requirements for transparency and human oversight, and the implications for fostering innovation while ensuring ethical AI practices.
This week we were joined by the incredible Dr. Lydia Kostopoulos. Lydia is a multifaceted expert who has worked with the United Nations, NATO, US Special Operations, the US Secret Service, IEEE, the European Commission, management consultancies, industry, academia and foreign governments, and has experience working in the US, Europe, the Middle East and East Asia. Lydia's expertise ranges across AI, AI Ethics, Cyber Security, Art, Fashion, Health and more! In this episode we explore topics ranging from the role of humans in the future of AI and the value humans offer versus AI, to the environmental impact of AI. In the episode we reference Lydia's recent talk on the Corporate Social Responsibility of AI, a 15-minute talk we highly recommend, in which Lydia looks at some of the United Nations SDGs and what they mean. The Corporate Social Responsibility of Artificial Intelligence: https://www.youtube.com/watch?v=dnV1E4XiEkY Presentation: https://www.slideshare.net/lkcyber/the-corporate-social-responsibility-of-artificial-intelligence?from_search=4 More of Lydia's work - EmpoweringWorkwear: https://www.empoweringworkwear.com Project Nof1 Interview Series: https://www.projectnof1.com Lydia's Portfolio: https://www.lkcyber.com Lydia's Consultancy: https://abundance.studio
Happy New Year! We're back in 2024, and what a year 2023 was for tech. In this episode, we delve into the fascinating and increasingly crucial realm of AI governance. As AI continues to evolve, questions of ethics, accountability, and regulation become paramount. Join us as we explore the challenges and opportunities surrounding AI governance, featuring this week's special expert guest Lofred Madzou. From the ethical considerations of autonomous systems to the role of policymakers in shaping AI policies, we look at the complex landscape of governing AI. Lofred is the Director of Strategy at Truera, an AI Quality platform to explain, test, debug and monitor machine learning models, leading to higher quality and trustworthiness. Outside his day job, he is a Research Associate at the Oxford Internet Institute (University of Oxford), where he mostly focuses on the governance of AI systems through audit processes. Before joining Truera, he was an AI Lead at the World Economic Forum, where he supported global companies, across industries and jurisdictions, in their implementation of responsible AI processes and advised various EU and Asia-Pacific governments on AI regulation. Previously, he was a policy officer at the French Digital Council, where he advised the French Government on AI policy. Most notably, he co-drafted the French AI Strategy.
This week we were joined by Victoria Vassileva, who is the Sales Director at Arthur. In this episode we get into Victoria's experience working with organisations across sectors to combat ethical challenges and risks, and more specifically how Arthur is solving this with regard to large language models (LLMs). We also approach the topic of the media hype surrounding AI being sentient and why this takes attention away from the real risks and ethical issues that are happening today. Victoria's bio & contact info: LinkedIn - https://www.linkedin.com/in/vvassileva/ Twitter - https://twitter.com/hellovictoriav?lang=en-GB Victoria Vassileva (she/her) is the Sales Director at Arthur. She has spent over a dozen years in and around data, starting as an analyst programming in SAS before transitioning to the GTM side. She now works to help complex organizations bring operational maturity and Responsible and Trustworthy practices to AI/ML initiatives across industries. At Arthur, Victoria focuses primarily on partnering with F100 enterprises to bring comprehensive performance management and proactive risk reduction and mitigation across their entire AI production stack. She is deeply motivated by the opportunity to shift industry practices to a "front-end ethics" approach that places equity and fairness considerations at the forefront of machine learning and automation projects. She holds degrees in Mathematics and French from the University of Texas at Austin.
This week, we welcome the innovative Ruth Ikwu, an AI Ethicist and MLOps Engineer with a solid foundation in Computer Science. As a Senior Researcher at Fujitsu Research of Europe, Ruth delves into AI Security, Ethics and Trust, playing a role in crafting innovative and reliable AI solutions for cyberspace safety. In this episode, she educates us on the evolving landscape of online sex work, discussing how platforms like AdultWork, OnlyFans and PornHub inadvertently facilitate sex trafficking. This is a heavy topic and contains a lot of distressing information about sex trafficking; Ruth's work is extremely important in bringing forward accountability. To learn more about Ruth's work on identifying human trafficking indicators in the UK online sex market: https://link.springer.com/article/10.1007/s12117-021-09431-0 Connect with Ruth: https://www.linkedin.com/in/ruth-eneyi-i-83a699118/
In this episode we are joined by Paul Röttger, who is CTO & Co-Founder at Rewire and completing his PhD in NLP at Oxford University. Paul chats to us about the challenges of tackling hate speech online, why he decided to pursue this challenge in his PhD, and how he started Rewire. More recently, Paul was part of an expert 'red team' hired by OpenAI to 'break' GPT-4; he explains what this involved and how it aimed to address some of the dangers of GPT-4. If you want to connect with Paul - Twitter: https://twitter.com/paul_rottger?lang=en LinkedIn: https://www.linkedin.com/in/paul-rottger/
The Philosophers' Takeover! This is the first in a new monthly series led by Alba Curry, who is a Philosophy Professor at the University of Leeds. Alba will be joining us (as well as other special guests of her choice) once a month to do a philosophical deep dive into different episodes we have covered throughout the series. Our special guest this week is Maddy Page, a PhD student at the University of Leeds focusing on the Philosophy of Art. In this episode Maddy covers the ontology of artwork and why it's important. Join us as we deep dive into the world of AI-generated art and its value! Can AI be considered an artist, or does art need to be created by a human? Why do we value art based on who the artist is, and what do we define as art? Alba - https://www.linkedin.com/in/albacurry/ Maddy - Twitter: @madeleinesjpage
In this week's episode, Oriana and Amanda discuss Sentient Robots and AI News, exploring the latest news in AI, including the famous 'Letter' signed by Elon Musk. As AI technology advances, we are witnessing the emergence of 'sentient robots' - machines that are claimed to experience emotions, develop personalities, and even exhibit creativity. In this podcast, we explore sentient robotics and examine their potential impact on society, culture, and the economy. So whether you're a technology enthusiast, industry professional, or just curious about the future of robotics and AI, tune in to the Sentient Robots and AI News episode for thought-provoking discussions and insights into the world of sentient machines. (*) This description was written using ChatGPT and some human editing skills.
This week we are joined by Marc van Meel, an AI Ethicist and public speaker with a background in Data Science. He currently works as a Managing Consultant at KPMG, where he helps organizations navigate the ethical implications of Artificial Intelligence and Data Science. In this episode we get into the future of technology in our society, AI auditing, the upcoming AI regulation and, of course, ChatGPT! To contact Marc: https://www.linkedin.com/in/marc-van-meel/
The start of a series of Responsible AI chats with Toju Duke! Toju is a popular keynote speaker, author, and thought leader on Responsible AI. She is a Programme Manager at Google, where she leads various Responsible AI programmes across Google's product and research teams with a primary focus on large-scale models. She is also the founder of Diverse AI, a community interest organisation with a mission to support and champion underrepresented groups to build a diverse and inclusive AI future. She provides consultation and advice on Responsible AI practices. Toju's book "Building Responsible AI Algorithms" is available for preorder. In this episode we focus on why Responsible AI is important to Toju, her work at Google as a Responsible AI Programme Manager and her new venture Diverse AI. To learn more about Toju: www.tojuduke.com To learn more about Diverse AI: www.diverse-ai.org
*Trigger warning* We recognise that this episode discusses some of the harmful prejudices that people with disabilities face, which can be upsetting for some listeners. In this episode we are joined by Tess Buckley, whose primary interests include studying the intersection of AI and disability rights, AI governance and corporate digital responsibility, amplifying marginalised voices in data through AI literacy training (HumansforAI), and computational creativity in music AI systems (personal project). Our conversation covers topics across ableism in biotechnology and society as a whole, and how disability is represented in datasets. We are hoping this episode opens more dialogue, as people with disabilities are often not included in conversations around bias. Connect with Tess on social media: https://www.linkedin.com/in/tess-buckley-a9580b166/
Older and wiser, Oriana and Amanda are back to chatting ethics after a hiatus! In this episode, we dive into the world of AI and the technology behind it. Meet ChatGPT, a large language model developed by OpenAI, and learn about its capabilities, limitations, and potential impact on our daily lives. From language generation to answering complex questions, we'll discover how ChatGPT works and how it's being used to enhance human capabilities. Join us as we engage in a conversation with Prof. Dirk Hovy to understand the ethical implications and the future of this rapidly advancing technology. Get ready to be amazed and informed as we explore the fascinating world of AI. (*) This description was written using ChatGPT and some human editing skills.
This week Alba and Amanda discuss a new book called How Humans Judge Machines by César A. Hidalgo. Get the book at: https://www.judgingmachines.com/ Eric Schwitzgebel's Aiming for Moral Mediocrity: https://faculty.ucr.edu/~eschwitz/SchwitzAbs/MoralMediocrity.htm The puppy cartoon: https://images.app.goo.gl/C4zKG5hsfE6419Ra6
This week Alba Curry joins us to discuss emotion AI, grounded in the story "Under Old Earth". Are we aiming for a happiness that is "bland as honey and sickening in the end"? Resources: Under Old Earth by Cordwainer Smith; How Emotions Are Made by Lisa Feldman Barrett; Affective Computing by Rosalind W. Picard
Could sex robots enhance our intimacy in relationships?  This week we are back with an incredible guest Kate Devlin who shares her super interesting research into sex robots and our relationship with technology.  Bio: Kate Devlin is Senior Lecturer in the Department of Digital Humanities at King's College London. Having begun her career as an archaeologist before moving into computer science, Kate's research is in the fields of Human Computer Interaction (HCI) and Artificial Intelligence (AI), investigating how people interact with and react to technology in order to understand how emerging and future technologies will affect us and the society in which we live. Kate has become a driving force in the field of intimacy and technology, running the UK's first sex tech hackathon in 2016. In short, she has become the face of sex robots – quite literally in the case of one mis-captioned tabloid photograph. Her 2018 book, Turned On: Science, Sex and Robots, was praised for its writing and wit.
Addiction Culture

2021-06-10, 30:38

Are we addicted to our smartphones? How did we function before? This week Amanda shares her journey of giving up her smartphone (influenced by Charles Radclyffe from EthicsGrade), and we look at how social media and smartphones have infiltrated our lives! We also have an announcement: the podcast will be bi-weekly for the summer.
Ever bought a five-star moisturiser only to find out it breaks you out? YutyBazar is tackling waste by ultra-personalising your beauty routine using AI. Find out more about how it works in this week's episode! Want to try YutyBazar yourself? https://www.yutybazar.com/ Simi Lindgren is the founder and CEO of YutyBazar. Socials: Twitter: @YUTYBAZAR Insta: yutybazar Twitter: @letschatethics Insta: lets.chat.ethics www.letschatethics.co.uk
How does ESG work when rating big tech on their ethics? This week we are joined by Charles Radclyffe, who is the Co-Founder of EthicsGrade. EthicsGrade is an ESG ratings agency specialising in evaluating companies on their maturity against AI governance best practice. Listen to why Charles decided to give up his phone pre-pandemic... phone addiction culture has us all trapped!! Twitter: @dataphilosopher LinkedIn: Charles Radclyffe Bio: Charles is the Co-Founder of EthicsGrade and has built and sold three tech companies. In between, he has consulted for large Financial Services organisations on Emerging Technology. Charles advises organisations on how to develop a strategy for the ethical implementation of AI, Automation and Robotics, as well as speaking at events on this subject, co-hosting a soon-to-be-released podcast, and writing a blog on the ethics and societal impact of emerging technology. Charles holds an MA in Law from Cambridge University.
This week we are back, reflecting on the past month of incredible guests covering Chinese philosophy, ethical investing and the future of innovation. We also look at the new EU regulations and how they compare to China's social scoring system. We will be going deeper into the new EU regulations in a coming episode.
What will tech look like in 2025? This week we are joined by the amazing Charlie Oliver, CEO and founder of the incredible platform Tech 2025. We get deep into voice recognition and the effect of technology on children, and Charlie shares her journey of founding Tech 2025 and why it was so important to her. Twitter: @itscomplicated LinkedIn: Charlie Oliver Bio: Charlie's years of experience in the trenches of old media range from working in advertising in New York at such media goliaths as BBDO Worldwide and Condé Nast, to producing sitcoms and dramas at Sony Pictures Entertainment, Paramount Pictures, Warner Brothers, DreamWorks and Oscar-winning indie production companies, to event management at the Sundance Film Festival. After spending several years in corporate law in document review at global firms (White & Case, Clifford Chance and Wachtell Lipton, to name a few), Charlie segued seamlessly into tech and new media as a web video producer, where she co-created and co-produced experimental video projects such as an 8-hour live webathon for the 2008 presidential election and numerous web video series. Soon thereafter, Charlie launched ArtofTalk.tv (a site that brought the vast world of TV, web and radio talk shows online to users in bite-size video snippets of debates and interviews in social media). In 2009, Charlie launched Served Fresh Media™ (a New York-based company) where her team provides digital marketing strategy, event management, product development, and senior management advisory for companies. Clients Served Fresh Media has worked with include IBM, New York Press Club, Cognizant, Digital Flash, Digital Realty, Tierpoint, and It's About Time, among others. In January 2017, Charlie launched Tech 2025, a community and platform for professionals to learn about the next wave of disruptive, emerging technologies and to facilitate discourse about the impact of these technologies on society, with an emphasis on problem-solving. Having produced over 80 events since launch, coupled with providing professional services, Tech 2025 has quickly gained a reputation for helping professionals and companies understand and embrace emerging technologies and the whirlwind changes they bring, and to strategize for the future impact of accelerating innovation.