Ethical Machines

Author: Reid Blackman

Description

I talk with the smartest people I can find working or researching anywhere near the intersection of emerging technologies and their ethical impacts.

From AI to social media to quantum computers and blockchain. From hallucinating chatbots to AI judges to who gets control over decentralized applications. If it’s coming down the tech pipeline (or it’s here already), we’ll pick it apart, figure out its implications, and break down what we should do about it.
38 Episodes
See You June 20

2024-05-09 · 01:48

Ethical Machines is on hiatus until 20 June 2024.
There’s good reason to think AI doesn’t understand anything: it’s just moving words around according to mathematical rules, predicting the words that come next. But philosopher Alex Grzankowski argues that while AI may not understand what it’s saying, it does understand language. In this episode we do a deep dive into the nature of human and AI understanding, ending with strategies for how AI researchers could pursue AI that has genuine understanding of the world.
Imagine we’re awash in high-quality AI-generated creative content: books, poems, podcasts, images, TV and film. And imagine it’s every bit as moving as human-generated art. We cry, we laugh, we’re inspired. Does it matter that it was generated by an AI? Does it undermine the experience? I think it does, and I’ll try to convince you of just that point.
What is Manipulation?

2024-04-18 · 47:54

We’re told that algorithms on social media are manipulating us. But is that true? What is manipulation? Can an AI really do it? And is it necessarily a bad thing? These questions and more with philosopher Michael Klenk.

Michael Klenk is a tenured Assistant Professor of Ethics and Philosophy of Technology at TU Delft. He earned his Ph.D. in Philosophy from Utrecht University, graduating with the highest possible distinction. Before becoming a professional philosopher, he earned degrees in Business Administration and Psychology and worked as a management consultant. Focusing on resolving foundational philosophical issues with practical implications, Klenk investigates the ethical dimensions of emerging technologies. His recent work is on manipulation, particularly in online contexts. He co-edited The Philosophy of Online Manipulation with Fleur Jongepier (Routledge, 2022), and his work has appeared in journals such as American Philosophical Quarterly, Analysis, Synthese, Erkenntnis, Philosophy and Technology, and Ethics and Information Technology.
Unless you don't mind decreased autonomy and increased narcissism
How bad is it and what could possibly fix it? Countering Disinformation Effectively: An Evidence-Based Policy Guide: https://carnegieendowment.org/2024/01/31/countering-disinformation-effectively-evidence-based-policy-guide-pub-91476

Jon Bateman is a senior fellow at the Carnegie Endowment for International Peace, where he focuses on global technology challenges at the intersection of national security, economics, politics, and society. His research areas include techno-nationalism, cyber operations, disinformation, and AI. Bateman is the author of U.S.-China Technological “Decoupling”: A Strategy and Policy Framework (2022). Former Google CEO Eric Schmidt, in his foreword, called it “a major achievement” that “stands out for its ambition, clarity, and rigor” and “will remain a touchstone for years to come.” Bateman is also the co-author of Countering Disinformation Effectively: An Evidence-Based Policy Guide (2024). His other major works include a military assessment of Russia’s cyber operations in Ukraine and a proposal to reform cyber insurance for catastrophic and state-sponsored events. Before joining Carnegie, Bateman was a special assistant to Chairman of the Joint Chiefs of Staff General Joseph F. Dunford, Jr., serving as the chairman’s first civilian speechwriter and the lead analyst in the chairman’s internal think tank. Bateman previously worked in the Office of the Secretary of Defense, developing several key policies and organizations for military cyber operations, and at the Defense Intelligence Agency, leading teams responsible for assessing Iran’s senior leadership, decision-making, internal stability, and cyber activities. Bateman’s writings have appeared in the Wall Street Journal, MSNBC, Politico, Slate, Harvard Business Review, Foreign Policy, and elsewhere. His TV and radio appearances include BBC News, NPR Morning Edition, and C-SPAN After Words. Bateman is a graduate of Harvard Law School and Johns Hopkins University.

Dean Jackson was project manager of the Influence Operations Researchers’ Guild, a component of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace. He specializes in how democracies and civil society around the world can respond to disinformation, influence operations, and other challenges to a free, healthy digital public square. From 2013 to 2021, Jackson managed workshops and publications related to disinformation at the International Forum for Democratic Studies, a center for research and analysis within the National Endowment for Democracy. Prior to his time at the National Endowment for Democracy, he worked in external relations at the Atlantic Council. He holds an MA in international relations from the University of Chicago and a BA in political science from Wright State University in Dayton, OH.
Or should we value human deliberation even when the results are worse?
Can we train AI to be ethical the same way we teach children? #AI #ethics #AIethics

Cameron Buckner’s research primarily concerns philosophical issues that arise in the study of non-human minds, especially animal cognition and artificial intelligence. He began his academic career in logic-based artificial intelligence. This research inspired an interest in the relationship between classical models of reasoning and the (usually very different) ways that humans and animals actually solve problems, which led him to the discipline of philosophy. He received a PhD in Philosophy from Indiana University in 2011 and held an Alexander von Humboldt Postdoctoral Fellowship at Ruhr-University Bochum from 2011 to 2013. He just published a book with Oxford University Press that uses empiricist philosophy of mind (from figures such as Aristotle, Ibn Sina, John Locke, David Hume, William James, and Sophie de Grouchy) to understand recent advances in deep-neural-network-based artificial intelligence.
We Need AI Regulations

2024-03-07 · 47:02

Can regulations curb the ethically disastrous tendencies of AI?

David Evan Harris is Chancellor's Public Scholar at UC Berkeley and a faculty member at the Haas School of Business, where he teaches courses including AI Ethics for Leaders; Social Movements & Social Media; Civic Technology; and Scenario Planning & Futures Thinking. He is also a Senior Fellow at the Centre for International Governance Innovation, a Senior Research Fellow at the International Computer Science Institute, a Visiting Fellow at the Integrity Institute, and a Senior Advisor for AI Ethics at the Psychology of Technology Institute. He previously worked as a Research Manager at Meta (formerly Facebook) on the Responsible AI, Civic Integrity, and Social Impact teams. Before that, he worked as a Research Director at the Institute for the Future. He was named to Business Insider’s AI 100 list for his work on AI governance, fairness, and misinformation. He has published a book and numerous articles in outlets including The Guardian, BBC, Tech Policy Press, and Adbusters. He has been interviewed and quoted by CNN, BBC, AP, Bloomberg, and The Atlantic, and has given dozens of talks around the world.
AI Needs Historians

2024-02-22 · 31:31

How can we solve AI’s problems if we don’t understand where they came from?

Jason Steinhauer is a public historian and bestselling author of History, Disrupted: How Social Media & the World Wide Web Have Changed the Past. He is the founder of the History Communication Institute; a Global Fellow at The Wilson Center; a Senior Fellow at the Foreign Policy Research Institute; an adjunct professor at the Maxwell School for Citizenship & Public Affairs; a contributor to TIME, CNN, and DEVEX; a past editorial board member of The Washington Post "Made By History" section; and a Presidential Counselor of the National WWII Museum. He previously worked for seven years at the U.S. Library of Congress.
AI in Warfare

2024-02-08 · 44:15

How much control should AI have when your enemy has AI too? As Jeremy Kofsky, a member of the Marine Corps, explains, AI will be everywhere in military operations. That’s a bit frightening, given the speed at which AI operates and given the stakes involved. My discussion with Jeremy covers a range of issues, including how and where a human should be in control, what needs to be done given that the enemy can use AI as well, and just how much responsibility lies not with military policy but with individual commanders.

Jeremy Kofsky is a 20-year Marine with small-unit operational experience on five continents. Over his 12 deployments, he has conducted combat operations, provided tactical- to strategic-level intelligence, and seen the growth of artificial intelligence in the military sphere. He conducts artificial intelligence work for the Key Terrain Cyber Institute as a 2nd Lt J.P. Blecksmith Research Fellow. He recently completed the Brute Krulak Scholar Program as the first-ever enlisted member to complete the year-long process.
We're all familiar with cybersecurity threats. Stories of companies being hacked and data and secrets being stolen abound. Now we have generative AI to throw fuel on the fire. I don't know much about cybersecurity, but Matthew does. In this conversation, he provides some fun and scary stories about how hackers have operated in the past, how they can leverage genAI to get access to things they shouldn't have access to, and what cybersecurity professionals are doing to slow them down.

Matthew Rosenquist is the Chief Information Security Officer (CISO) for Eclipz and the former Cybersecurity Strategist for Intel Corp, with more than 30 diverse years in the fields of cyber, physical, and information security. Matthew specializes in security strategy, measuring value, developing best practices for cost-effective capabilities, and establishing organizations that deliver optimal levels of cybersecurity, privacy, governance, ethics, and safety. As a cybersecurity CISO and strategist, he identifies emerging risks and opportunities to help organizations balance threats, costs, and usability factors to achieve an optimal level of security. Matthew is very active in the industry. He is an experienced keynote speaker, collaborates with industry partners to tackle pressing problems, and has published acclaimed articles, white papers, blogs, and videos on a wide range of cybersecurity topics. Matthew is a member of multiple advisory boards and consults on best practices and emerging risks for academic, business, and government audiences across the globe.
When you think about AI in the criminal justice system, you probably think either about biased AI or mass surveillance. This episode focuses on the latter and takes up the following challenge: can we integrate AI into the criminal justice system without realizing the nightmarish picture painted by the film “Minority Report”? Explaining what that vision is and why it matters is the goal of my guest, professor of law and my good friend, Guha Krishnamurthi.

Guha is an Associate Professor of Law at the University of Maryland Francis King Carey School of Law. His research interests are in criminal law, constitutional law, and antidiscrimination law. Prior to academia, Guha clerked on the California Supreme Court, the U.S. District Court for the Northern District of Illinois, and the U.S. Court of Appeals for the Seventh Circuit. Between those clerkships he worked in private practice for five years in California.
My conversation with Chris covered everything from government and corporate surveillance to why we should care about data privacy to the power that technologists have and how they should wield it responsibly. Always great to chat with Chris (we’ve been talking about this for 5 years now) and nice to bring it to a larger audience.

Chris Wiggins is an associate professor in the Department of Applied Physics and Applied Mathematics at Columbia University and the chief data scientist at The New York Times. He is a member of Columbia’s Institute for Data Sciences and Engineering, a founding member of the University’s Center for Computational Biology and Bioinformatics, and the co-founder of hackNY, a New York City-based initiative seeking “to create and empower a community of student-technologists.”
The hospital faced an ethical question: should we deploy robots to help with elder care? Let’s look at a standard list of AI ethics values: justice/fairness, privacy, transparency, accountability, explainability. But as Ami points out in our conversation, that standard list doesn’t include a core value at the hospital: the value of caring. That’s an example of one of his three objections to a view he calls “principlism”: the view that we do AI ethics best by first defining our AI ethics values or principles at that very abstract level. The objection is that the list will always be incomplete. Given Ami’s expertise in ethics and experience as a clinical ethicist, it was insightful to see how he gets ethics done on the ground and to hear his views on how organizations should approach ethics more generally.

Ami Palmer received his PhD in philosophy from Bowling Green State University, where he wrote his dissertation on the challenges conspiracism and science denialism pose to democratic policymaking. His primary research areas include the effects of medical misinformation on clinical interactions and the ethics of AI in healthcare. With respect to medical misinformation, he has recently developed a conversation guide to help providers better navigate conversations with patients who endorse medical misinformation. In the ethics of AI, he co-authored the American Nursing Association's Position Statement on the Ethics of AI in Nursing. His hobbies include judo, Brazilian Jiu-Jitsu, dance, and hiking with his four wiener dogs.
Well, I didn’t see this coming. Talking about legal and philosophical conceptions of copyright turns out to be intellectually fascinating and challenging. It involves not only concepts about property and theft, but also about personhood and invasiveness. Could it be that training AI with author/artist work violates their self?
I talked about all this with Darren Hick, who has written a few books on the topic. I definitely didn’t think he was going to bring up Hegel.

Darren Hudson Hick is an assistant professor of philosophy at Furman University, specializing in philosophical issues in copyright, forgery, authorship, and related areas. He is the author of Artistic License: The Philosophical Problems of Copyright and Appropriation (Chicago, 2017) and Introducing Aesthetics and the Philosophy of Art (Bloomsbury, 2023), and the co-editor of The Aesthetics and Ethics of Copying (Bloomsbury, 2016). Dr. Hick gained significant media attention as one of the first professors to catch a student using ChatGPT to plagiarize an assignment.
Are you on the political left or the political right? Ben Steyn wants to ask you a similar question about nature and technology: do you lean tech or do you lean nature? For instance, what do you think about growing human babies outside of a womb (aka ectogenesis)? Are you inclined to find it an affront to nature and want politicians to make it illegal? Or are you inclined to find it a tech wonder and want to make sure elected officials don’t ban such a thing? Ben claims that nature vs. tech leanings don’t map neatly onto the political left vs. right distinction. We need a new axis by which we evaluate our politicians. A really thought-provoking conversation. Enjoy!
Before I did AI ethics, I was a philosophy professor specializing in ethics. One of my senior colleagues in the field was David Enoch, also an ethicist and philosopher of law. David is also Israeli and a long-time supporter of a two-state solution. In fact, he went to military jail for refusing to serve in Gaza for ethical reasons. Given David’s rare, if not unique, combination of expertise and experience, I wanted to have a conversation with him about the Israel-Hamas war. In the face of the brutal Hamas attacks of October 7, what is it ethically permissible for Israel to do? David rejects both extremes. It’s not the case that Israel should be pacifist; that would be for Israel to default on its obligations to safeguard its citizens. Nor should Israel bomb Gaza and its people out of existence; that would be to engage in genocide. If you’re looking for an “Israel is the best and does nothing wrong” conversation, you won’t find it here. If you’re looking for “Israel is the worst and should drop their weapons and go home,” you won’t find that here, either. It’s a complex situation, and David and I navigate it as best we can.

David Enoch studied law and philosophy at Tel Aviv University and then clerked for Justice Beinisch at the Israeli Supreme Court. He received a PhD in philosophy from NYU in 2003 and has been a professor of law and philosophy at the Hebrew University ever since. This year he started as the Professor of the Philosophy of Law at Oxford. He works mainly in moral, political, and legal philosophy.
We want to create AI that makes accurate predictions. We want that not only because we want our products to work, but also because reliable products are, all else equal, ethically safe products. But we can’t always know whether our AI is accurate. Our ignorance leaves us with a question: which of the various AI models that we’ve developed is the right one for this particular use case? In some circumstances, we might decide that using AI isn’t the right call. We just don’t know enough. In other instances, we may know enough, but we also have to choose our model in light of the ethical values we’re trying to achieve. Julia and I talk about this and a lot of other (ethical) problems that beset AI practitioners on the ground, and what can and cannot be done about it.

Dr. Julia Stoyanovich is Associate Professor of Computer Science & Engineering and of Data Science, and Director of the Center for Responsible AI at NYU. Her goal is to make “responsible AI” synonymous with “AI”. Julia has co-authored over 100 academic publications, and has written for the New York Times, the Wall Street Journal, and Le Monde. She engages in technology policy, has been teaching responsible AI to students, practitioners, and the public, and has co-authored comic books on this topic. She received her Ph.D. in Computer Science from Columbia University.
If I look inside your head when you’re talking, I’ll see various neurons lighting up, probably in the prefrontal cortex, as you engage in the reasoning that’s necessary to say whatever it is you’re saying. But if I opened your head and instead found a record playing and no brain, I’d realize I was dealing with a puppet, not a person with a brain/intellect. In both cases you’re saying the same things (let’s suppose). But because of what’s going on in the head, or “under the hood,” it’s clear there’s intelligence in the first case and not in the second. Does an LLM (a large language model like GPT or Bard) have intelligence? Well, to know that we need to look under the hood, as Lisa Titus argues. It’s not impossible that AI could be intelligent, she says, but judging by what’s going on under the hood at the moment, it’s not. A fascinating discussion about the nature of intelligence, why we attribute it to each other (mostly), and why we shouldn’t attribute it to AI.

Lisa Titus (née Lisa Miracchi) is a tenured Associate Professor of Philosophy at the University of Denver. Previously, she was a tenured Associate Professor of Philosophy at the University of Pennsylvania, where she was also a General Robotics, Automation, Sensing, and Perception (GRASP) Lab affiliate and a MindCORE affiliate. She works on issues regarding mind and intelligence. What makes intelligent systems different from other kinds of systems? What kinds of explanations of intelligent systems are possible, or most important? What are appropriate conceptions of real-world intelligent capacities like those for agency, knowledge, and rationality? How can conceptual clarity on these issues advance cognitive science and aid in the effective and ethical development and application of AI and robotic systems? Her work draws together diverse literatures in the cognitive sciences, AI, robotics, epistemology, ethics, law, and policy to systematically address these questions.