Big Tech's Big AI Blind Spot
Melissa Harris-Perry: This is The Takeaway. I'm Melissa and I am a Trekkie, and not just an OG Captain Kirk and Mr. Spock Trekkie. I also love Star Trek: The Next Generation. Now, my fave is Worf obviously, but I also have a bit of a soft spot for the android Data as well. Do you remember the episode where the issue of Data's rights was on trial?
Data: Webster's 24th-century dictionary, fifth edition, defines an android as an automaton made to resemble a human being.
Picard: Automaton made by whom?
Data: Sir?
Picard: Who built you, Commander?
Melissa Harris-Perry: It's just a little pop culture moment, but it was also a prophetic glimpse into a future we now inhabit. Artificial intelligence is not a distant possibility. It's a lived reality. Data may not be our pale-faced Starfleet colleague, but artificial intelligence is here. It's living and working among us, making decisions about whether we can rent an apartment, buy a house, get credit, board a flight, or even wash our hands in a public restroom.
Just like Data, the android from Star Trek, the artificial intelligence which uses or misuses our personal data to make critical decisions about our lives, well, it was made by us, by people. We introduced into this artificial intelligence all of the biases that have long characterized our social and political lives. Artificial intelligence is powerful, but it is not neutral.
Timnit Gebru: My name is Timnit Gebru and I am the founder and executive director of DAIR. DAIR is short for the Distributed AI Research Institute. I also co-founded Black in AI.
Melissa Harris-Perry: You may remember first hearing about Timnit back in late 2020 when she left her job at Google in a high-profile departure. Timnit says she was fired from her job as co-lead of Google's Ethical AI team after she expressed concerns about the company's insufficient efforts to create meaningful racial diversity. She says the company asked her to retract a paper she co-authored about racial bias in artificial intelligence. Google maintains that Timnit resigned. Now, late last year Timnit made headlines again when she announced the formation of a new research institute, Distributed AI Research or DAIR.
Timnit Gebru: It is basically just supposed to be an AI research institute. If we were to do research on artificial intelligence, how would we do that research in a way that's not extractive, that is not exploitative, and in a way that hopefully can be more beneficial to people at the margins? That's not what's currently happening and this research is being done in a way that's very extractive and exploitative. The way in which not just products that are commercialized, but research itself is produced, is actually harmful to many people around the world.
Melissa Harris-Perry: She shared some examples of the methodological biases in how data is gathered for AI research.
Timnit Gebru: There are many examples of how the way research in AI is done is harmful to marginalized communities. The current paradigm, for instance, is based on large datasets and large compute power, datasets that are scraped, and many parts of that pipeline are harmful. To start with how the data, the raw material, is extracted: it's extracted from many people around the world without their consent and without compensation to them.
All of these companies and all of these researchers are dependent on people's data, but the people who are providing that data many times are not aware that their data is being used for many purposes, let alone being compensated for it. When you look at the people annotating this data, they're also being exploited and extracted from. In summary, when you look at how the research is produced and the paradigm within which both academic institutions and tech companies operate, that is extractive. Then when you look at the way in which these models are deployed and the applications they're used in, that is also harmful to people in marginalized communities.
Melissa Harris-Perry: Now I wanted to know how DAIR plans to do things differently.
Timnit Gebru: I've come to realize, when many people ask me about DAIR, that our biggest focus is actually on the way that we work, the way that we produce this research. That also means culture, the culture that we hope to create in our institute, because we believe that all of the tech artifacts that you see are very much influenced by the culture at the tech companies. For example, we hear very prominent people boasting about working 90-plus hours, and then they talk about how exceptional innovation requires exceptional work ethic. That's not what I believe.
I believe that human beings need to work a certain amount of time and live their lives the rest of the time. We're not machines. You can't do good research working those hours. There's a reason the labor movement existed, to make sure that we're not exploited in that way. At DAIR we're not hoping to have those kinds of ridiculous work hours.
In addition to that, for the types of people we want to have, we want to value their lived experiences. What do I mean by that? Many times the research process is exploitative because you have people who are the subjects being researched, and they are usually not the ones who benefit from the output of the work. What does it mean to value someone's lived experience? It means to value the type of knowledge that they bring, appropriately compensate them, and make sure that the work that's being done actually benefits them as well.
Melissa Harris-Perry: We'll be back with more of my conversation with artificial intelligence researcher Timnit Gebru in just a moment. [silence] Back with you on The Takeaway. I'm Melissa Harris-Perry. We have been speaking with Timnit Gebru, founder and executive director of Distributed AI Research. We've been talking about racial bias, AI research, and tech. I asked her to tell us about her role at Google in their Ethical AI team and the circumstances surrounding her departure.
Timnit Gebru: At Google, like I said, I was an AI researcher, but I was co-leading a team called the Ethical AI team that was founded by my co-lead, Margaret Mitchell. As part of that work, there were a lot of people asking us about the ethical implications and considerations of something called Large Language Models. Large Language Models are a type of language technology. For instance, if you have machine translation, you're trying to translate from one language to another automatically, or they're even used in search: you have a search query, and the results are ranked, and that determines what you see at the top versus elsewhere.
Language technology is literally used everywhere: autocomplete, chatbots, all sorts of things. Partly motivated by these questions that people were asking us internally, myself and my collaborator at the University of Washington, Emily Bender, and Meg and many other members of our team wrote this paper called On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? We were talking about the dangers of this race that we were seeing of building larger and larger language models.
All of these companies seemed to have this race to have ever-larger models without really thinking about why you want to build larger and larger models and what the costs are. We talk about what it means when you depend on this huge data set of the internet. There's this misconception that because you have such a large data set from the internet, you're representing a diversity of views. We talk about how that's not the case. The internet really represents the dominant views, which often are sexist, racist, ableist, ageist, et cetera. We discuss that.
Melissa Harris-Perry: Timnit says that she and her team did not find a warm reception for their work at Google.
Timnit Gebru: We listed out all of these kinds of potential harms and our thoughts on how to mitigate them in a scientific academic paper. We submitted this paper for peer review at one of the foremost academic conferences in this area. At the last minute, people at Google told us to retract this paper. I wanted to know why and what process they used to arrive at the conclusion that we needed to retract this paper, because we followed all the internal processes we knew at Google. Then they said, "Oh, we accept your resignation," and they cut me off from the corporate account immediately. My manager's manager emailed my direct reports saying that I had resigned, and my manager didn't even know. Then there was a whole public thing, and they wrote an email to the entire Google research community saying that my paper was subpar. That was a dog whistle to these white supremacists who started attacking me, and it was a whole thing.
Melissa Harris-Perry: In a message shared publicly following Timnit's departure from Google, the head of the company's AI division, Jeff Dean, wrote that her paper "had some important gaps that prevented us from being comfortable putting Google affiliation on it." Timnit says that the problems she encountered are not exclusive to Google.
Timnit Gebru: DAIR's director of research, Alex Hanna, who recently announced her resignation from Google to join DAIR, wrote a resignation letter which was basically like an academic work. A lot of people said they will actually assign it as reading material in their classes. Alex mentions the fact that Google is a white organization and that we need to name that whiteness. She mentions that many of these large tech companies are the same.
Once you start analyzing how they're created, how their structure works, and what they prioritize, you start understanding why many of us, many of my colleagues, many other people of color, A, can't get through the door, and B, once they get through the door, are completely miserable. At Google, Black women leave at the highest rates, more than anyone. It's not just Google culture. Google culture has its specifics, but if I were to look at most large tech companies, they are built on a really white supremacist and patriarchal foundation. If you try to poke at it, you'll be ejected immediately. It's basically, to me, very indicative of the larger Silicon Valley, and then the US and many other Western countries.
Melissa Harris-Perry: Now, given that so many different aspects of the tech industry reveal these endemic biases, I asked Timnit how DAIR can have a sustainable fiscal model without being vulnerable to many of these same problems.
Timnit Gebru: Resources and funding are something I think very much about, because currently, when you look at AI research, and a lot of times products come from research, research is turned into products, there are two big motivators, and one is warfare. You see a lot of money, whether it's academia or not, coming from DARPA or other sorts of defense institutions like that, or from large corporations whose goal is basically to maximize profit.
If you look at many historical examples of research, it's like that. For instance, self-driving cars, where there were the DARPA robot car challenges. Automated machine translation has to do with the Cold War. It's either warfare or large tech companies. What we're hoping to achieve here is something different. If you want to arrive at a different conclusion, that is, if you want to work on AI that is not solely driven by either warfare or maximizing profit for a few corporations, you have to start the entire process from scratch without that in mind. Otherwise, it's retrofitting, like creating a tank and then trying to retrofit it into something that's not warfare.
That's what we're hoping to do at DAIR. That means that the source of funding also has to reflect that goal. Currently, we are funded by these large foundations. Our initial supporters are the MacArthur Foundation, the Ford Foundation, the Kapor Center, which gave us a gift, Open Society Foundations, and now the Rockefeller Foundation. I strongly believe that solely depending on these types of grants is not a sustainable way to run an entire research institution where people's full-time jobs are dependent on it. We are also trying to figure out how we can have our own source of revenue and are exploring different ways that we can do that ourselves too.
Melissa Harris-Perry: Our thanks to Timnit Gebru, founder and executive director of DAIR and founder of Black in AI.
[00:15:30] [END OF AUDIO]
Copyright © 2022 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.