Ethical Machines

Author: Reid Blackman
© All rights reserved.
Description
I have to roll my eyes at the constant clickbait headlines on technology and ethics.
If we want to get anything done, we need to go deeper.
That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business.
If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.
58 Episodes
Wendell Wallach, who has been in the AI ethics game longer than just about anyone and has several books to his name on the subject, talks about his dissatisfaction with talk of “value alignment,” why traditional moral theories are not helpful for doing AI ethics, and how we can do better.
Are we dependent on social media in a way that erodes our autonomy? After all, platforms are designed to keep us hooked and coming back for more. And we don’t really know the law of the digital lands, since algorithms shape how we relate to each other online in unknown ways. Then again, don’t we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? What the right balance is, and how we can encourage or require greater autonomy, is our topic of discussion today.
It would be crazy to attribute legal personhood to AI, right? But then again, corporations are regarded as legal persons, and there seems to be good reason for doing so. In fact, some rivers are classified as legal persons. My guest, David Gunkel, author of many books including “Person, Thing, Robot,” argues that the classic legal distinction between ‘person’ and ‘thing’ doesn’t apply well to AI. How should we regard AI in a way that allows us to create it in a legally responsible way? All that and more in today’s episode.
LLMs behave in unpredictable ways. That’s a gift and a curse: it both allows for their “creativity” and makes them hard to control (a bit like a real artist, actually). In this episode, we focus on the cyber risks of AI with Walter Haydock, a former national security policy advisor and the Founder of StackAware.
AI can stand between you and getting a job. That means that to make money and support yourself and your family, you may have to convince an AI that you’re the right person for the job. And yet, AI can be biased and fail in all sorts of ways. This is a conversation with Hilke Schellmann, investigative journalist and author of “The Algorithm,” along with her colleague Mona Sloane, Ph.D., an Assistant Professor of Data Science and Media Studies at the University of Virginia. We discuss Hilke’s book and all the ways things go sideways when people are looking for work in the AI era. Originally aired in season one.
We want accurate AI, right? As long as it’s accurate, we’re all good? My guest, Will Landecker, CEO of Accountable Algorithm, explains why accuracy is just one metric among many to aim for. In fact, we have to make tradeoffs across things like accuracy, relevance, and normative (including ethical) considerations in order to get a usable model. We also cover whether explainability is important, whether it’s even on the menu, and the risks of multi-agent AI systems.
We’re told that algorithms on social media are manipulating us. But is that true? What is manipulation? Can an AI really do it? And is it necessarily a bad thing? These questions and more with philosopher Michael Klenk. Originally aired in season one.
We often defer to the judgment of experts. I usually defer to my doctor’s judgment when he diagnoses me, I defer to quantum physicists when they talk to me about string theory, etc. I don’t say “well, that’s interesting, I’ll take it under advisement” and then form my own beliefs. Any beliefs I have on those fronts I replace with theirs. But what if an AI “knows” more than we do and is an authority in the field in which we’re questioning it? Should we defer to the AI? Should we replace our beliefs with whatever it believes? On the one hand, hard pass! On the other, it does know better than us. What to do? That’s the issue that drives this conversation with my guest, Benjamin Lange, Research Assistant Professor in the Ethics of AI and ML at the Ludwig Maximilian University of Munich.
Are claims about AI destroying humanity just more AI hype we should ignore? My guests today, Risto Uuk and Torben Swoboda, assess three popular arguments for why we should dismiss them and focus solely on the AI risks that are here today. But they find each argument flawed, arguing that, unless some fourth, more powerful argument comes along, we should devote resources to identifying and avoiding potential existential risks to humanity posed by AI.
I have to admit, AI can do some amazing things. More specifically, it looks like it can perform some impressive intellectual feats. But is it actually intelligent? Does it understand? Or is it just really good at statistics? This and more in my conversation with Lisa Titus, former professor of philosophy at the University of Denver and now AI Policy Manager at Meta. Originally aired in season one.
By the end of this crash course, you’ll understand a lot about the AI ethics landscape. Not only will it give you your bearings, but it will also enable you to identify what parts of the landscape you find interesting so you can do a deeper dive.
People want AI developed ethically, but is there actually a business case for it? The answer better be yes since, after all, it’s businesses that are developing AI in the first place. Today I talk with Dennis Hirsch, Professor of Law and Computer Science at Ohio State University, who is conducting empirical research on this topic. He argues that AI ethics, or as he prefers to call it, Responsible AI, delivers a lot of bottom-line business value. In fact, his research revealed something about its value that he didn’t even expect to see. We’re in the early days of businesses taking AI ethics seriously, but if he’s right, we’ll see a lot more of it. Fingers crossed.
Automation is great, right? It speeds up what needs to get done. But is that always a good thing? What about in the process of scientific discovery? Yes, AI can automate a lot of science by running thousands of virtual experiments and generating results, but is something lost in the process? My guest, Ramón Alvarado, a professor of philosophy and a member of the Philosophy and Data Science Initiative at the University of Oregon, thinks something crucial is missing: serendipity. Many significant scientific discoveries occurred by happenstance. Penicillin, for instance, was discovered by Alexander Fleming, who accidentally left a petri dish on a bench before going off on vacation. Exactly what is the scientific value of serendipity, how important is it, and how does AI potentially impinge on it? That’s today’s conversation.
Behind all those algorithms are the people who create them and embed them into our lives. How did they get that power? What should they do with it? What are their responsibilities? This and more with my guest Chris Wiggins, Chief Data Scientist at the New York Times, Associate Professor of Applied Mathematics at Columbia University, and author of the book “How Data Happened: A History from the Age of Reason to the Age of Algorithms”. Originally aired in season one.
People in the AI safety community are laboring under an illusion, perhaps even a self-deception, my guest argues. They think they can align AI with our values and control it so that the worst doesn’t happen. But that’s impossible. We can never know how AI will act in the wild any more than we can know how our children will act once they leave the house. Thus, we should never give more control to an AI than we would give an individual person. This is a fascinating discussion with Marcus Arvan, professor of philosophy at The University of Tampa and author of three books on ethics and political theory. You might just leave this conversation wondering if giving more and more capabilities to AI is a huge, potentially life-threatening mistake.
Developers are constantly testing how online users react to their designs. Will they stay longer on the site because of this shade of blue? Will they get depressed if we show them depressing social media posts? What happens if we intentionally mismatch people on our dating website? When it comes to shades of blue, perhaps that’s not a big deal. But when it comes to mental health and deceiving people? Now we’re in ethically choppy waters. My discussion today is with Cennydd Bowles, Managing Director of NowNext, where he helps organizations develop ethically sound products. He’s also the author of a book called “Future Ethics.” He argues that A/B testing on people is often ethically wrong and creates a culture among developers of a willingness to manipulate people. Great conversation ranging from the ethics of experimentation to marketing and even to capitalism.
There’s a picture in our heads that’s overly simplistic, and the result is that we don’t think clearly about AI risks. The simplistic picture: a team develops AI and then it gets used. The truth, the more complex picture, is that a thousand hands touch that AI before it ever becomes a product. This means that risk identification and mitigation are spread across a very complex supply chain. My guest, Jason Stanley, is at the forefront of research and application when it comes to managing all this complexity.
From the best of season 1: Microsoft recently announced an (alleged!) breakthrough in quantum computing. But what in the world is a quantum computer, what can it do, and what are the potential ethical implications of this powerful new tech? Brian and I discuss these issues and more. And don’t worry! No knowledge of physics required.
Every specialist in anything thinks they should have a seat at the AI ethics table. I’m usually skeptical. But psychologist Madeline Reinecke, Ph.D. did a great job defending her view that – you guessed it – psychologists should have a seat at the AI ethics table. Our conversation ranged from the role of psychologists in creating AI that supports healthy human relationships, to when children start and stop attributing sentience to robots, to loving relationships with AI, to the threat of AI-induced self-absorption. I guess I need to have more psychologists on the show.
A fun format for this episode. In Part I, I talk about how I see agentic AI unfolding and what ethical, social, and political risks come with it. In Part II, Eric Corriel, digital strategist at the School of Visual Arts and a close friend, tells me why he thinks I’m wrong. Debate ensues.