How ethical AI really works
Twitter recently released one of its algorithms into the world — the one that controls how images are cropped in the Twitter app — and said it would pay people to find all the ways it was broken. Rumman Chowdhury and Jutta Williams, two executives on Twitter’s META team, called it an “algorithmic bias bounty challenge,” and said they hoped it would set a precedent for “proactive and collective identification of algorithmic harms.”
The META team’s job is to help Twitter (and the rest of the industry) make sure its artificial intelligence and machine learning products are used as ethically and responsibly as possible. What does that mean, and what does it look like in practice? Well, Twitter (and the rest of the industry) is still figuring that out. And this work, at Google and elsewhere, has led to huge internal turmoil as companies have begun to reckon more honestly with the ramifications of their own work.
Chowdhury and Williams joined the Source Code podcast to talk about how the META team works, what they hope the bias bounty challenge will accomplish, and the challenges of doing qualitative research in a quantitative industry. That, and what “Chitty Chitty Bang Bang” can teach us about AI.
For more on the topics in this episode:
- Rumman Chowdhury on Twitter
- Jutta Williams on Twitter
- How Twitter hired tech's biggest critics to build ethical AI
- Twitter will pay you to find bias in its AI
For all the links and stories, head to Source Code’s homepage.