Person taking video of a protest. Photo by Kym Ellis on Unsplash.

March 19, 2018

The Downsides of Take-downs: Online Content Regulation and Human Rights Fact-Finding

Anna Banchik is a PhD candidate in the Department of Sociology at the University of Texas at Austin, conducting interdisciplinary research at the intersection of science and technology studies (STS), visual media, and human rights. She is also a member of the Working Paper Series Editorial Committee (2017-18).

Once lauded as purveyors of free expression and “technolog[ies] for liberation”[1] for their role in powering the Arab Spring and subsequent pro-democracy movements, social media sites[2] unquestionably met their reckoning in 2017.

The year saw Facebook, Twitter, and Google (which owns YouTube) come under intense scrutiny for failing to remove from their sites hate speech, extremist content, and inaccurate information spread intentionally (disinformation) or unintentionally (misinformation).[3] Members of the United States Congress have demanded that the companies ramp up their content moderation efforts amid ongoing investigations into the platforms’ facilitation of Russian meddling in the 2016 U.S. elections. For their part, European lawmakers concerned with the widespread dissemination of hate speech and extremist content are also applying pressure. On January 1, 2018, new German legislation took effect that fines social media sites up to 50 million euros ($59 million) for failing to remove “manifestly unlawful” posts within 24 hours. The “Facebook Law,” as it is colloquially known, targets posts involving incitement to hatred and other prohibited forms of speech.[4] The European Commission has warned that it is considering similar measures targeting terrorist and extremist content.[5]

However, aggressive content moderation is not universally embraced. First Amendment activists in the U.S., and internet rights groups more broadly, have argued that empowering social media companies to arbitrate what constitutes hate speech or extremist content would be a dangerous move, particularly when companies act on government requests that accounts be censored or shuttered.[6] Moreover, while cautioning that severe fines may have a chilling effect on online speech, Facebook has itself acknowledged the inherent difficulty of determining “the intent behind one post, or the risk implied in another.”[7] Imagine that “[s]omeone posts a graphic video of a terrorist attack,” writes Monika Bickert, Facebook’s Head of Global Policy Management. “Will it inspire people to emulate the violence, or speak out against it?”[8] That such difficulties arise even with human review brings into stark relief the added complications of training machine learning algorithms to properly detect the context, intended meaning, and potential consequences of online content.

Overshadowed in this debate are the voices of a growing body of human rights groups that rely on social media sites to find and corroborate possible evidence of abuses. The widening accessibility of camera phones, participatory media, remote-sensing imagery, and other information and communication technologies has multiplied and diversified the sources of human rights-related information available for advocacy and legal accountability efforts. The International Criminal Court recently issued its first arrest warrant based largely on evidence collected from social media. The warrant cites seven videos documenting Libyan commander Mahmoud Mustafa Busayf Al-Werfalli shooting or ordering the execution of 33 civilians or wounded fighters.[9] Had these videos been quickly removed, they might never have been preserved or made it to court.

Hence the worry over the staggering volume of content from Syria and Myanmar taken down from YouTube and Facebook in recent months. Modifications to YouTube’s machine learning algorithms in August 2017 resulted in the swift removal of 900 YouTube channels posting videos of the Syrian conflict.[10] A month later, Facebook removed videos and images documenting a wave of attacks against the Rohingya, a Muslim ethnic minority in Myanmar,[11] while leaving “fake news” and hate speech directed against the group on its platform.[12]

The conundrum defies easy fixes. Restoring channels and content can involve a lengthy appeals process that may not be feasible for the most vulnerable users attempting to document and expose human rights abuses. Numerous mobile apps have been developed to enable eyewitnesses to anonymously send content, along with its metadata, directly to legal experts and NGOs.[13] However, their adoption pales in comparison to that of Facebook, Twitter, and YouTube. For now, human rights groups will continue, as best they can, to track companies’ disappearing acts.

[1] Samidh Chakrabarty, “Hard Questions: What Effect Does Social Media Have on Democracy?” Facebook Newsroom. January 22, 2018. https://newsroom.fb.com/news/2018/01/effect-social-media-democracy/

[2] By “social media site,” I refer here to both social media platforms including Twitter and Facebook and user-generated content websites like YouTube.

[3] See Claire Wardle, “Fake news. It’s complicated,” Medium. February 16, 2017. https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79

[4] Linda Kinstler, “Can Germany Fix Facebook?” The Atlantic. November 2, 2017. https://www.theatlantic.com/international/archive/2017/11/germany-facebook/543258/

[5] Samuel Gibbs, “EU Warns Tech Firms: Remove Extremist Content Faster or Be Regulated.” The Guardian. December 7, 2017. https://www.theguardian.com/technology/2017/dec/07/eu-warns-tech-firms-facebook-google-youtube-twitter-remove-extremist-content-regulated-europ.

[6] Glenn Greenwald, “Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments.” The Intercept. December 30, 2017. https://theintercept.com/2017/12/30/facebook-says-it-is-deleting-accounts-at-the-direction-of-the-u-s-and-israeli-governments/

[7] Monika Bickert. “Facebook’s Community Standards: How and Where We Draw the Line.” Facebook Newsroom. May 23, 2017. https://newsroom.fb.com/news/2017/05/facebooks-community-standards-how-and-where-we-draw-the-line/. See also supra note 4.

[8] Ibid.

[9] Prosecutor v. Al-Werfalli, Case No. ICC-01-11-01/17-2, Public Warrant of Arrest. August 15, 2017. https://www.icc-cpi.int/CourtRecords/CR2017_05031.PDF.

[10] Avi Asher-Schapiro, “YouTube and Facebook are Removing Evidence of Atrocities, Jeopardizing Cases Against War Criminals.” The Intercept. November 2, 2017. https://theintercept.com/2017/11/02/war-crimes-youtube-facebook-syria-rohingya/.

[11] Ibid.

[12] Facebook did, however, recently remove the account of one of Myanmar’s most outspoken voices against the Rohingya, the Buddhist monk Wirathu. See Laignee Barron, “Nationalist Monk Known as the ‘Burmese bin Laden’ Has Been Stopped From Spreading Hate on Facebook.” TIME. February 28, 2018. http://time.com/5178790/facebook-removes-wirathu/. See also Megan Specia, “A War of Words Puts Facebook at the Center of Myanmar’s Rohingya Crisis.” The New York Times. October 27, 2017. https://mobile.nytimes.com/2017/10/27/world/asia/myanmar-government-facebook-rohingya.html.

[13] E.g., the International Bar Association’s eyeWitness to Atrocities app (http://www.eyewitnessproject.org/) and The Whistle, based at the University of Cambridge (http://www.thewhistle.org/).
