How ethical AI really works

Updated: 2021-08-04

Description

Twitter recently released one of its algorithms into the world — the one that controls how images are cropped in the Twitter app — and said it would pay people to find all the ways it was broken. Rumman Chowdhury and Jutta Williams, two executives on Twitter’s META team, called it an “algorithmic bias bounty challenge,” and said they hoped it would set a precedent for “proactive and collective identification of algorithmic harms.”

The META team’s job is to help Twitter (and the rest of the industry) make sure its artificial intelligence and machine-learning products are used as ethically and responsibly as possible. What does that look like in practice? Twitter, like the rest of the industry, is still figuring that out. And this work, at Google and elsewhere, has led to huge internal turmoil as companies have begun to reckon more honestly with the ramifications of their own work.

Chowdhury and Williams joined the Source Code podcast to talk about how the META team works, what they hope the bias bounty challenge will accomplish, and the challenges of doing qualitative research in a quantitative industry. That, and what “Chitty Chitty Bang Bang” can teach us about AI.

For more on the topics in this episode, and for all the links and stories, head to Source Code’s homepage.




Protocol Media