Report and repeat: Investigating Facebook’s hate speech removal process

Keywords: Hate Speech, Social Media, Content Moderation, Freedom of Expression

Social media is rife with hate speech. Although Facebook prohibits this content on its site, little is known about how much of the hate speech reported by users is actually removed by the company. Given the enormous power Facebook has to shape public discourse, this study sought to determine what proportion of reported hate speech is removed from the platform and whether patterns exist in Facebook’s decision-making process. To understand how the company interprets and applies its own Community Standards regarding hate speech, the authors identified and reported to the company hundreds of comments, posts, and images featuring hate speech (n=311) and recorded Facebook’s decision on whether to remove each piece of reported content. A qualitative content analysis was then performed on the content that was and was not removed to identify trends in Facebook’s content moderation decisions about hate speech. Of particular interest was whether the company’s 2018 policy update resulted in any meaningful change.

Our results indicated that only about half of reported content containing hate speech was removed. The 2018 policy change also appeared to have little impact on the company’s decision-making. The results further suggest that Facebook had substantial problems, including a failure to remove misogynistic hate speech, inconsistency in removing attacks and threats, an inability to consider context in removal decisions, and a general lack of transparency in the hate speech removal process. Facebook’s failure to effectively remove reported hate speech allows misethnic discourses to spread and perpetuates stereotypes. The paper concludes with recommendations for Facebook and other social media organizations to consider in order to minimize the amount and impact of hate speech on their platforms.

Author Biographies

Caitlin Ring Carlson, Seattle University

Caitlin Ring Carlson is an Associate Professor in the Department of Communication at Seattle University. She teaches courses in Media Law, Social Media, and Strategic Communication. Her research focuses on media law, policy, and ethics from a feminist perspective. Carlson’s work has appeared in journals such as Communication Law and Policy, the Journal of Mass Media Ethics, and Communication Law Review. Carlson received her PhD in Media Studies from the University of Colorado Boulder.

Hayley Rousselle

Hayley Rousselle is a student at Syracuse University College of Law. She recently graduated from Seattle University with a Bachelor’s Degree in Communication and Media.

How to Cite

Carlson, C. R., & Rousselle, H. (2020). Report and repeat: Investigating Facebook’s hate speech removal process. First Monday, 25(2).