Safiya Noble on Search Engines


Update: 2024-01-08

Description






The work of human hands retains evidence of the humans who created it. While this might seem obvious in the case of something like a painting, where the artist’s touch is the featured aspect, it’s much less obvious in things that aren’t supposed to betray their humanity. Take the algorithms that power search engines, which are expected to produce unvarnished and unbiased results, but which nonetheless reveal the thinking and implicit biases of their programmers.





We now live in an age where things like facial recognition or financial software algorithms have been shown to uncannily reproduce the prejudices of their creators, but this was much less obvious earlier in the century, when researchers like Safiya Umoja Noble were dissecting search engine results and revealing the sometimes appalling material they were highlighting.





In this Social Science Bites podcast, Noble — the David O. Sears Presidential Endowed Chair of Social Sciences and professor of gender studies, African American studies, and information studies at the University of California, Los Angeles — explains her findings, insights and recommendations for improvement with host David Edmonds.





And while we’ve presented this idea of residual digital bias as something somewhat intuitive, getting here was an uphill struggle, Noble reveals. “It was a bit like pushing a boulder up a mountain — people really didn’t believe that search engines could hold these kinds of really value-laden sensibilities that are programmed into the algorithm by the makers of these technologies. Even getting this idea that the search engine results hold values, and those values are biased or discriminatory or harmful, is probably the thrust of the contribution that I’ve made in a scholarly way.”





But through her academic work, such as directing the Center on Race & Digital Justice and co-directing the Minderoo Initiative on Tech & Power at the UCLA Center for Critical Internet Inquiry, and through books like the 2018 title Algorithms of Oppression: How Search Engines Reinforce Racism, the scale of the problem and the harm it leaves behind are becoming known. Noble’s own contributions have been recognized, too: she was named a MacArthur Foundation fellow in 2021 and the inaugural NAACP-Archewell Digital Civil Rights Award winner in 2022.





To download an MP3 of this podcast, right-click this link and save. The transcript of the conversation appears below.










David Edmonds: Type ‘social science podcasts’ into Google, and I’m pleased to say that Social Science Bites appears on the front page, evidence surely that Google is a search engine to be trusted. But Safiya Noble, who teaches at UCLA, is not so sure. She’s the author of Algorithms of Oppression. Safiya Noble, welcome to Social Science Bites. 





Safiya Noble: Thank you. It’s such a pleasure and an honor to be here, I’m so grateful. 





Edmonds: This interview is going to focus on search engines and bias. By a search engine, we mean what exactly? 





Noble: Well, you know, a search engine is really... and I think for those who are listening who remember the internet before the search engine, we will remember that there were lots of websites, web addresses, all over. The way we organized information on the internet was to build complex directories, which were often built by librarians or subject matter experts. These were curated heavily by communities of practice, we could say, or hobbyists, or people who knew a lot about different kinds of things. To find information, you’d need to know a web address.





And then the search engine comes along and it’s a kind of artificial intelligence, or a set of algorithms, that starts to index all of these links all over the web and tries to cohere them, or make sense of them, in some type of rank order. You can figure out, allegedly, the good stuff from the junk. And the way the dominant search engine that most of us use — which is probably Google in the Western world, or for sure in the United States — did this was through its web crawlers. Those crawlers would look and see which sites were pointing to other sites. And of course, this was the process called hyperlinking. So you had a blog, or you had a website, and then you would have links to other people’s information or websites or web addresses.





And the process of hyperlinking was kind of, allegedly, like a credibility factor. It kind of says, “Well, if I’m pointing to David Edmonds’ website, then you know you can trust it, if you trust what I have to say.” And of course, now it’s so much more sophisticated, because we have these things we call ‘search engine optimization’ cottage industries, where you can purchase keywords to help optimize and make sure that the algorithm finds your website out of millions of potential websites that are using the same kinds of words that you’re using on your site. So now we have a huge industry, an SEO industry. But I think for everyday people, we open up a browser, there’s a plain white screen with a box, we type in some words, and we get back what we think is the very best kinds of things that you can get back. And that’s really the way people experience search engines.
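To make the hyperlink-as-credibility idea concrete, here is a minimal, hypothetical sketch of link-count ranking: pages that many other pages point to rise to the top. The page names and the simple tally are invented for illustration and are not Google’s actual algorithm, which, as Noble notes, has become far more sophisticated.

```python
# Toy illustration of link-based ranking: count how many *other* pages link to
# each page, then order pages by that count. (Hypothetical page names; real
# search engines weight links in far more sophisticated ways.)

links = {
    "blog.example": ["davidedmonds.example", "news.example"],
    "news.example": ["davidedmonds.example"],
    "spam.example": ["spam.example"],  # a self-link earns nothing here
    "davidedmonds.example": [],
}

# Tally inbound links from other pages for every page the "crawler" has seen.
inbound = {page: 0 for page in links}
for source, targets in links.items():
    for target in targets:
        if target != source and target in inbound:
            inbound[target] += 1

# "Search results": pages ordered by inbound-link count, most-linked first.
ranking = sorted(inbound, key=inbound.get, reverse=True)
print(ranking)  # ['davidedmonds.example', 'news.example', ...]
```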





Edmonds: And do we know what kinds of searches are the most common? Is there data on that? What are people looking for generally? Or is that impossible to say? 





Noble: No, I don’t think that’s impossible to say. I mean, the last time I looked at what Google was reporting out (in my work, I kind of focus on Google just because they’re the largest), the most frequent kinds of searches are for health information. I know this is probably hard for you to believe, but in the United States, where we don’t have a nationalized health care system, health care is extremely precarious for most people, even if you have insurance. You find that people really use search to diagnose themselves or get medical advice or help. And that’s one of the most prevalent series of search terms that are looked for.





Edmonds: Right, how fascinating. Now, obviously, when you do a Google search, you get dozens, even hundreds of links. How many of these links do people typically scroll through? Do they get beyond the first page of search results typically?  





Noble: No, the majority of people who use search engines do not go past the first page; we know this from search engine use studies conducted, for example, by Pew Research, which periodically does these user studies. So what happens on the first page of search results is extremely important. As far as I’m concerned, that’s the place to look and study, because that’s where most people are.





Edmonds: OK, so most of your research is on search engines and bias. Give us a couple of examples of the kinds of bias that search engines throw up. 





Noble: Yeah, well, let me just say that even the notion of search engines being biased is probably something that I helped introduce into our lexicon, along with a handful of others who studied search engines more than a decade ago. I will say that now it’s kind of more of a common-sense understanding that search engines favor, let’s say, the companies that pay them the most money in the ad words system that Google in particular, but other search engines as well, use. So they certainly favor larger companies over smaller businesses, unless you know the keywords to search to find that small business, like you know the name of that business. So they’re always going to kind of favor the people who paid them to be made more visible.





In my work, you know, I conducted this study over a decade ago now that was really the first look at how people, especially racialized people, or ethnic minorities, racial minorities, and women and girls, were profoundly misrepresented in search engines. I took the U.S. Census, and all of the racial and ethnic categories there, and I took the gender categories available then, and I just kind of crossed them and did a lot of searching, you know, dozens and dozens of searches. And what I found was that Black girls, Latina girls, Asian girls, in the U.S. were almost exclusively misrepresented as pornography, or as sexual commodities.





And, of course, this opened up for me a pathway to talking about what happens when we rely upon something like a search engine to tell us about other people, other communities, ourselves. And in this way, I was arguing that these are profoundly biased toward the interests of the most powerful industries, which of course, the most powerful industry over determining what women
