Is Facebook Responsible in Ethiopia?
[music]
Melissa Harris-Perry: In early November, two years of brutal fighting in Ethiopia neared an official close as the Ethiopian federal government and the Tigray People's Liberation Front reached a ceasefire deal. The former president of Nigeria, Olusegun Obasanjo, who brokered the deal on behalf of the African Union, said this.
Olusegun Obasanjo: Today is the beginning of a new dawn for Ethiopia, for the Horn of Africa, and indeed for Africa as a whole.
Melissa Harris-Perry: Researchers at Ghent University in Belgium have estimated that as many as 600,000 civilians have died in the civil war. While the conflict stemmed from an ongoing political struggle between the country's new power brokers and its old ones, it also capitalized on years of ethnic hostilities. Human rights investigators have documented a campaign of ethnic cleansing against Tigrayans, allegedly condoned by the regional and federal governments. Ethnic Amharas have also suffered human rights abuses at the hands of the Tigray regional forces. These old divisions have taken on a new dimension in the 21st century.
Speaker 3: Two Ethiopians have filed a lawsuit against Meta, accusing Facebook's parent company of fanning violence and hate speech in their country.
Melissa Harris-Perry: Now, one of the lawsuit's petitioners is the son of a professor who was ethnically Tigrayan. The professor was gunned down outside his home in 2021 after being targeted in a series of Facebook posts calling for his death. The lawsuit asks the company to deprioritize hateful content in its algorithms, to expand its content moderation staff, and to establish a $2 billion fund for victims. Here's a lawyer for the petitioners.
Lawyer: The case my clients have made is that not only does Facebook allow such content to be on the platform, they prioritize it, and they make money from such content.
Melissa Harris-Perry: We reached out to Meta for a statement, and it reads in part, "We employ staff with local knowledge and expertise, and continue to develop our capabilities to catch violating content in the most widely spoken languages in the country."
With me now is Berhan Taye, a practitioner fellow at the Digital Civil Society Lab of Stanford University, and she's based in Nairobi, Kenya.
Berhan, welcome to The Takeaway.
Berhan Taye: Thank you for having me.
Melissa Harris-Perry: What are the practices of Meta and Facebook that are under scrutiny in this lawsuit?
Berhan Taye: Basically, I think the whole of Facebook's, or Meta's, existence, and the way it moderates content, is being scrutinized here. In contexts like Ethiopia and other places where meaningful language support does not exist within the platform, we're forced to depend on algorithms that do not necessarily understand our language, and because we're forced to depend on an algorithm that doesn't understand the language, decisions are being made automatically.
When content like the posts being discussed in this litigation is moderated, they actually don't have any human beings sitting there and moderating it. It's an algorithm that doesn't understand the language, that doesn't understand the context, that's moderating and deciding what content goes up and what comes down.
At the core of it is this algorithm and the fact that Facebook, over and over again, has refused to invest in humans who understand the language and the context, and who could moderate and decide what content stays and what goes. For me, at the core of it are languages like Amharic, Tigrinya, Kiswahili, and others that are not English, that are not Spanish, that are not Mandarin, but also the algorithm that is making these decisions on behalf of millions.
Melissa Harris-Perry: When I hear you say invest in humans, you mean in part that if there were, in fact, humans who speak these languages, rather than simply relying on digital technology that is untrained in these languages, then it would be possible to see, to read, the hateful content.
Berhan Taye: 100%. What happened with this case in particular is that content with a picture of the professor was posted online saying that he should be killed, and so many allegations were made about him. Many people, myself included, reported the content using Facebook's in-app reporting mechanism. We got an automatic message that said the content doesn't violate Facebook's community standards. You can only imagine this was done by an automated system that doesn't understand the context, because no sane human being would say that content calling for someone to be killed in the most barbaric fashion is okay under the platform's policy.
The way Facebook does this right now is that they claim they've hired individuals who speak the language, but unfortunately, what we know for sure, particularly within the Ethiopian context, is that they have a single-digit number of content moderators for over 10 million users. The working conditions of these content moderators are also in question here.
They outsource this process to places like Nairobi, and the people hired there are not Facebook employees. As for the working conditions, they're forced to work more than eight hours, their pay is a really meager amount, and still they're asked to review this content. So even where there are human beings doing this work, and there are not enough of them currently, the working conditions of those individuals also feed the really vicious cycle that we're in right now.
Melissa Harris-Perry: Is there something about the way Facebook's technology, the way its algorithm, works that not only fails to stop this hate but actually allows hateful content to spread more easily and more widely?
Berhan Taye: Yes. Their algorithm works on virality, so good news or benign news is not what travels furthest; normal content is not what goes viral. What we see from their algorithm is that content that is inciting, that is false or misleading, or that carries egregious comments and hate speech is what goes viral. I think it's in how they monetize and incentivize content, in the way they code and design their whole platform. We understand that the algorithm actually prioritizes and recommends inciting, hateful, and dangerous content. We've seen this over and over again, and they benefit from it as well.
The moment content goes viral, more people see it, and there's a monetization aspect to that as well. They actually tie human rights work to monetization, so unless a language is making the platform money, they won't hire human rights policy officers or content moderators for it. There is, again, this really vicious cycle of virality, monetization, and then more content going viral. It is really, I think, the business model and the design.
Melissa Harris-Perry: Pause with us for just a moment. When we return, we'll continue our conversation about the effects of online hate speech in Ethiopia. It's The Takeaway.
It's The Takeaway. I'm Melissa Harris-Perry, and I'm still with Ethiopian technology and social justice researcher Berhan Taye. We've been talking about the role of hate speech on Facebook in Ethiopia's civil war. The factions in the conflict reached a peace deal, a ceasefire, in November, but residents and humanitarian workers report that violence continues in many places. I'm sorry I had to cut you off there, Berhan. Please continue talking about the ways you see these problems as deeply embedded in how Facebook works.
Berhan Taye: I think there's also the aspect that these platforms are not necessarily built for individuals who look like the plaintiffs in this case, or for those of us who don't speak English. A platform like Facebook has been accused of fueling genocide and really egregious crimes in Myanmar, so Ethiopia is not the first place this has happened; people have been talking about this for years. For me, there's really a lack of willingness, and to a certain extent I'm even emboldened to say racism, in their approach to other languages and to people who are not white or who do not speak English.
Facebook should have learned from these other contexts. We've been discussing this with Facebook, myself personally as well, since 2017, 2018, warning that this was going to unfold, that this was coming, and now we've seen it. The Tigrayan situation is not the only one where we're seeing this. If you look at Wollega and other conflicts happening in the country, there are similar cases where people are literally being gunned down because of content on the platform. Victims' families are begging the platform to take down pictures of the dead bodies of their siblings, their parents. No one responds. The platform doesn't care.
For me, it's really important that we're now actually litigating this issue and asking the courts to adjudicate it. The big elephant in the room for me is that Facebook doesn't care, and doesn't care unless you are American and white, or you are in Europe and governed by the EU. Unfortunately, our reality is that we have to beg Mark Zuckerberg in Silicon Valley to really see our lives, to see that we actually matter and that we should not be killed because of content on the platform while they're making money.
Melissa Harris-Perry: Certainly this is not the first time that Facebook's content moderation practices, that its algorithm, that its business model have come under scrutiny. I guess I'm wondering if you think there has been any meaningful change.
Berhan Taye: Yes and maybe. I think, for me, we do make money for the platform, and whether we like it or not, we are the next billion users. Facebook's user registrations have saturated in the US and most likely in Western Europe, and we are the next frontier. That's why Facebook is actually investing millions, alongside Google and others, in fiber optics and in making sure that the rest of Africa, Asia, and Latin America gets connected. They see the monetary value in us, but politically we don't have any leverage. We're not like the EU, which can summon Mark Zuckerberg to Brussels, or the US, which can have them testify before Congress.
I think, for me, the reality is that, yes, they see the value in us and in our ability to generate money for them, and they are investing in those ways. But when it comes to content moderation, to content governance, it is really embarrassing, to the extent that we are completely invisible to them.
They have made a few investments, but given the profit margin they're making off of us and the impact this has, no one should be forced to lose their parents because of content on the platform and then, once that person has died, beg the platform to take down the content that led to it. That complete carelessness is still, for me, glaringly visible. They have not learned from Myanmar, they have not learned from India, and unfortunately, they haven't learned from the Ethiopian context either. In the next conflict we see come up, content moderation is going to be at the core of it. Maybe now, because of this litigation, because it has implications for the shareholders and the investors, and because the plaintiffs are asking for a significant sum of money, there could potentially be some real change.
Melissa Harris-Perry: Berhan Taye is a practitioner fellow at the Digital Civil Society Lab of Stanford University and is based in Nairobi, Kenya. Thank you so much for taking the time with us today.
Berhan Taye: Thank you so much for having me.
[00:11:34] [END OF AUDIO]
Copyright © 2023 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.