
Ethical Machines

Author: Reid Blackman


Description

I have to roll my eyes at the constant clickbait headlines on technology and ethics.

If we want to get anything done, we need to go deeper. 

That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. 

If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.

75 Episodes
Wendell Wallach, who has been in the AI ethics game longer than just about anyone and has several books to his name on the subject, talks about his dissatisfaction with talk of “value alignment,” why traditional moral theories are not helpful for doing AI ethics, and how we can do better.
Let AI Do the Writing

2026-02-12 · 50:43

We hear that “writing is thinking.” We believe that teaching all students to be great writers is important. All hail the essay! But my guest, philosopher Luciano Floridi, professor and Founding Director of the Digital Ethics Center, sees things differently. Plenty of great thinkers were not also great writers. We should prioritize thoughtful and rigorous dialogue over the written word. As for writing, perhaps it should be considered akin to a musical instrument; not everyone has to learn the violin…
AI is deployed across the globe. But how sensitive is it to the cultural contexts - ethics, norms, laws and regulations - in which it finds itself? My guest today, Rocky Clancy of Virginia Tech, argues that AI is too Western-focused. We need to engage in empirical research so that AI is developed in a way that comports with the people it interacts with, wherever they are.
When we’re playing a game or a sport, we like being measured. We want a high score, we want to beat the game. Measurement makes it fun. But in work, being measured, hitting our numbers, can make us miserable. Why does measuring ourselves sometimes enhance and sometimes undermine our happiness and sense of fulfillment? That’s the question C. Thi Nguyen tackles in his new book “The Score: How to Stop Playing Somebody Else’s Game.” Thi is one of the most interesting philosophers I know - enjoy!
When it comes to the foundation models that are created by the likes of Google, Anthropic, and OpenAI, we need to treat them as utility providers. So argues my guest, Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Business in Berlin, Germany. She further argues that the only way we can move forward safely is to create a transnational coalition of the willing that creates and enforces ethical and safety standards for AI. Why such a coalition is necessary, who might be part of it, how plausible it is that we can create such a thing, and more are covered in our conversation.
In the last episode, Brian Wong argued that there’s a “gap” between the harms that developing and using AI causes, on the one hand, and identifying who is responsible for those harms, on the other. At the end of that discussion, Brian claimed that we’re all responsible for those harms. But how could that be? Aren’t some people more responsible than others? And if we are responsible, what does that mean we’re supposed to do differently? In part 2, Brian explains how he thinks about what responsibility is and how it has implications for our social responsibilities.
How can one of the most high-risk industries also be the safest place to test AI? That’s what I discuss today with former Navy Commander Zac Staples, currently Founder and CEO of Fathom, an industrial cybersecurity company focused on the maritime industry. He walks me through how the military performs its due diligence on new technologies, explains that there are lots of “watchers” of new technologies as they’re tested and used, and that all of this happens against a backdrop of a culture of self-critique. We also talk about the increasing complexity of AI, which makes it harder to test, and we zoom out to larger, political issues, including China’s use of military AI.
Are we dependent on social media in a way that erodes our autonomy? After all, platforms are designed to keep us hooked and to come back for more. And we don’t really know the law of the digital lands, since the algorithms influence how we relate to each other online in unknown ways. Then again, don’t we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? What the right balance is and how we can encourage or require greater autonomy is our topic of discussion today.
AI is leading the economic charge. In fact, without the massive investments in AI, our economy would look a lot worse right now. But what are the social and political costs that we incur? My guest, Karen Yeung, a professor at Birmingham Law School and School of Computer Science, argues that investments in AI are consolidating power while disempowering the rest of society. Our individual autonomy and our collective cohesion are simultaneously eroding. We need to push back - but how? And on what grounds? To what extent is the problem our socio-economic system or our culture or government (in)action? These questions and more in a particularly fun episode (for me, anyway).
We’ve been doing risk assessments in lots of industries for decades. For instance, in financial services and cybersecurity and aviation, there are lots of ways of thinking about what the risks are and how to mitigate them at both a microscopic and macroscopic level. My guest today, Jason Stanley of ServiceNow, is probably the smartest person I’ve talked to on this topic. We discuss the three levels of AI risk and the lessons he draws from those other industries that we crucially need in the AI space.
Can AI Do Ethics?

2026-01-29 · 43:53

Many researchers in AI think we should make AI capable of ethical inquiry. We can’t teach it all the ethical rules; that’s impossible. Instead, we should teach it to reason ethically, just as we do children. But my guest thinks this strategy makes a number of controversial assumptions, including about how ethics works and what actually is right and wrong. From the best of season two.
What happens when students turn to LLMs to learn about history? My guest, Nuno Moniz, Associate Research Professor at the University of Notre Dame, argues this can ultimately lead to mass confusion, which in turn can lead to tragic conflicts. There are at least three sources of that confusion: AI hallucinations, misinformation spreading, and biased interpretations of history getting the upper hand. Exactly how bad this can get and what we’re supposed to do about it isn’t obvious, but Nuno has some suggestions.
When thinking about AI replacing people, we usually look to the extremes: utopia and dystopia. My guest today, Finn Morehouse, a research fellow at Forethought, a nonprofit research organization, thinks that neither of these extremes is the most likely. In fact, he thinks that one reason AI defies prediction is that it’s not a normal technology. What’s not normal about it? It’s not merely in the business of multiplying productivity, he says, but of replacing the standard bottleneck to greater productivity: humans.
We’re all connected to how AI is developed and used across the world. And that connection, my guest Brian Wong, Assistant Professor of Philosophy at the University of Hong Kong, argues, is what makes us all, to varying degrees, responsible for the harmful impacts of AI. This conversation has two parts. This is the first, where we focus on the kinds of geopolitical risks and harms he’s concerned about, why he takes issue with “the alignment problem,” and how AI operates in a way that produces what he calls “accountability gaps and deficits” - ways in which it looks like no one is accountable for the harms and how people are not compensated by anyone after they’re harmed. There’s a lot here - buckle up!
Orchestrating Ethics

2025-11-13 · 44:16

One company builds the LLM. Another company uses that model for their purposes. How do we know that the ethical standards of the first one match the ethical standards of the second one? How does the second company know they are using a technology that is commensurate with their own ethical standards? This is a conversation I had with David Danks, Professor of Philosophy and Data Science at UCSD, almost 3 years ago. But the conversation is just as pressing now as it was then. In fact, given the widespread adoption of AI that’s built by a handful of companies, it’s even more important now that we get this right.
Deepfakes to deceive people? No good. How about a digital duplicate of a lost loved one so you can keep talking to them? What’s the impact of having a child talk to the digital duplicate of their dead father? Should you leave instructions about what can be done with your digital identity in your will? Could you lose control of your digital duplicate? These questions are ethically fascinating and crucial in themselves. They also raise other longer-standing philosophical issues: can you be harmed after you die? Can your rights be violated? What if a Holocaust denier uses a digital duplicate of a survivor to say the Holocaust never happened? I used to think deepfakes were most of the conversation. Now I know better thanks to this great conversation with Atay Kozlovski, Visiting Research Fellow at Delft University of Technology.
The engineering and data science students of today are tomorrow’s tech innovators. If we want them to develop ethically sound technology, they better have a good grip on what ethics is all about. But how should we teach them? The same way we teach ethics in philosophy? Or is something different needed, given the kinds of organizational forces they’ll find themselves subject to once they’re working? Steven Kelts, a lecturer in Princeton’s School of Public and International Affairs and in the Department of Computer Science, researches this subject and teaches those very students himself. We explore what his research and his experience show us about how we can best train our computer scientists to take the welfare of society into their minds and their work.
In August, I recorded a discussion with David Ryan Polgar, Founder of the nonprofit All Tech Is Human, in front of an audience of around 200 people. We talked about how AI-mediated experiences make us feel sadder, how the tech companies don’t really care about this, and how people can organize to push those companies to take our long-term well-being more seriously.
It would be crazy to attribute legal personhood to AI, right? But then again, corporations are regarded as legal persons and there seems to be good reason for doing so. In fact, some rivers are classified as legal persons. My guest, David Gunkel, author of many books including “Person Thing Robot,” argues that the classic legal distinction between ‘person’ and ‘thing’ doesn’t apply well to AI. How should we regard AI in a way that allows us to create it in a legally responsible way? All that and more in today’s episode.
LLMs behave in unpredictable ways. That’s a gift and a curse: unpredictability is what allows for their “creativity,” and it’s also what makes them hard to control (a bit like a real artist, actually). In this episode, we focus on the cyber risks of AI with Walter Haydock, a former national security policy advisor and the Founder of StackAware.