
Bias and Equity in Learntech

Update: 2021-04-27


<figure class="alignleft size-large">Jeff Cobb and Celisa Steele<figcaption>Jeff Cobb & Celisa Steele</figcaption></figure>




This marks the halfway point in our series on the frontiers of learntech. Our hope is that you have already begun to consider the implications this explosion in learning technology has for your learning business. When it comes to implications, we want to focus on some of the major themes that emerged from our earlier conversations with Donald Clark and Sae Schatz, particularly those related to bias and equity.





Bias in artificial intelligence (AI) is not a new concern, but it is one that has been garnering more and more attention in recent years. It feels appropriate to focus on it now, given the rise of social justice movements in the United States.





In this fourth episode of the series, we explore the potential harm of bias in AI, drawing on research from Joy Buolamwini and Cathy O’Neil, both featured in the documentary Coded Bias, available on Netflix. We also discuss the difference between interpretability and explainability when it comes to understanding AI, and why looking for bias in data is just as important as looking for bias in AI processes.





To tune in, listen below. To make sure you catch all future episodes, subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). And, if you like the podcast, give it a tweet.





Listen to the Show









Access the Transcript





Download a PDF transcript of this episode’s audio.





Read the Show Notes





[00:18] – A summary of what we’ve covered up to this point in the series and a preview of what’s to come in our conversations with Sam Sannandeji, founder and CEO of Modest Tree, and Ashish Rangnekar, co-founder and CEO of BenchPrep.





Harm from Bias





<figure class="alignright size-large is-resized">Coded Bias documentary film promo</figure>




We recently watched Coded Bias, a 2020 documentary available on Netflix. The film investigates bias in algorithms and features the work of MIT Media Lab researcher Joy Buolamwini, who uncovered flaws in facial recognition technology. The technology was very good at recognizing the faces of white men but markedly less accurate with the faces of women and people of color. Because of her work, Google and other tech companies have worked to improve their AI, and it’s gotten better at recognizing faces of all types.
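As an aside, here’s what surfacing that kind of disparity can look like in practice. This is a minimal, hypothetical sketch (ours, not from the film or the episode) of a disaggregated accuracy audit; it assumes you have a model’s predictions alongside demographic group labels in a pandas DataFrame, and the column names and groups are made up for illustration:

```python
# Illustrative only: a disaggregated accuracy audit. The DataFrame
# columns ("group", "label", "prediction") are hypothetical placeholders.
import pandas as pd

results = pd.DataFrame({
    "group":      ["lighter-skinned men", "lighter-skinned men",
                   "darker-skinned women", "darker-skinned women"],
    "label":      [1, 0, 1, 0],
    "prediction": [1, 0, 0, 1],
})

# Overall accuracy can look acceptable while hiding large gaps...
overall = (results["label"] == results["prediction"]).mean()

# ...so report accuracy per demographic group instead.
per_group = (
    results.assign(correct=results["label"] == results["prediction"])
           .groupby("group")["correct"]
           .mean()
)
print(f"Overall accuracy: {overall:.2f}")
print(per_group)
```

The point of the sketch is simply that a single aggregate accuracy number can mask exactly the kind of group-level disparity Buolamwini’s research surfaced.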





Joy founded the Algorithmic Justice League, which “combines art and research to illuminate the social implications and harms of artificial intelligence. AJL’s mission is to raise public awareness about the impacts of AI, equip advocates with empirical research to bolster campaigns, build the voice and choice of most impacted communities, and galvanize researchers, policymakers, and industry practitioners to mitigate AI bias and harms.”





Watch Joy Buolamwini’s TED Talk below about the need for accountability in coding and how she’s fighting bias in algorithms.





<figure class="wp-block-embed is-type-video is-provider-ted wp-block-embed-ted wp-embed-aspect-16-9 wp-has-aspect-ratio">

<iframe title="Joy Buolamwini: How I'm fighting bias in algorithms" src="https://embed.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms" width="800" height="451" frameborder="0" scrolling="no" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe>
</figure>



Cathy O’Neil is also featured in the Coded Bias documentary. Cathy wrote Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016). In the documentary, Joy and Cathy focus on the use of AI in policing, surveillance, credit and lending decisions, insurance, advertising, and more. In the film, Cathy says, “People are suffering algorithmic harm.”





Both Cathy and Joy focus on harm: it’s in Cathy’s quote, and the AJL mission statement mentions it as well. Their concerns are real, and there is real reason for them: lost opportunities in lending, a greater likelihood of being stopped by police, higher interest rates, and more. There’s enough documented harm, and enough real instances of it, that many are clamoring for legislation, regulation, and standards. In fact, as we’re recording, the European Commission is expected to unveil a proposal on artificial intelligence regulations in the European Union.





One concern covered in Coded Bias involved a Houston teacher who had won numerous teaching awards over many years but received a poor evaluation when the district implemented an algorithmic approach to assessing teachers. He and other teachers sued, and part of their argument (they won the case) was that they didn’t know why they’d gotten the poor evaluations: the algorithm was a black box they couldn’t question, so they couldn’t contest the results because the premises behind them weren’t known.





Black Boxes, Explainability, and Interpretability





[05:45] – The black box argument is interesting. As a society, we use a lot of technology we don’t understand—for example, our laptops, smartphones, and Google Search. We have a pretty crude understanding of how all of those work, but we aren’t likely to want to give any of them up.





We recently came across a really helpful distinction from Christopher Penn in a Marketing Over Coffee podcast episode. He says when we want to understand how software arrived at a particular outcome, we choose between explainability and interpretability. “Interpretability is the decompilation of the model into its source code. We look at the raw source code used to create the model to understand the decisions made along the way,” per Penn. “Explainability is the post-hoc explanation of what the model did, of what outcome we got, and whether that outcome is the intended one or not.”





Christopher uses an analogy to make explainability and interpretability more digestible. Explainability is tasting a cake: we can taste it and get a general idea of what went into making it, but we can’t reconstruct the exact recipe. Interpretability is reading the recipe itself, seeing the exact ingredients and steps that produced the cake.
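To make the distinction concrete, here’s a minimal sketch (ours, not Penn’s) using scikit-learn. Interpretability means reading the model’s internals directly; explainability means probing the finished model post hoc. The toy model and synthetic data are assumptions for illustration:

```python
# Illustrative only: contrasting interpretability and explainability
# on a toy model. The dataset is synthetic and the model is a stand-in.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Interpretability: read the "recipe" directly. For a linear model,
# the learned coefficients are the model's own internal logic.
print("Coefficients (the recipe):", model.coef_)

# Explainability: probe the finished model from the outside, post hoc,
# by measuring how much each input matters to its behavior ("tasting").
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("Permutation importances (the tasting):", imp.importances_mean)
```

For a simple linear model, the two views tend to agree; the distinction matters most for deep models, where the “recipe” is too opaque to read and post-hoc explanation is often all we have.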
