This month: February 2020
“We only have 12 years”: YouTube and the IPCC report on global warming of 1.5°C
This paper examines how the IPCC Special Report on Global Warming of 1.5°C (SR15) played out on YouTube following its release in October 2018. Forty videos were studied: those ranked highest in YouTube’s search engine over the four weeks after the report’s publication. Media activity around SR15 was animated by a mix of professional and user-led channels. Four main recurrent themes were identified: disaster and impacts, policy options and solutions, political and ideological struggles around climate change, and contested science. The discussion of policy options and solutions was particularly prominent. Critiques of SR15 took different forms: denialist videos downplayed the severity of climate change, while several clips criticized the report for underestimating the extent of warming or overestimating the feasibility of its proposed policies.
Also this month
Report and repeat: Investigating Facebook’s hate speech removal process
Social media is rife with hate speech. Although Facebook prohibits this content, little is known about how much of the hate speech reported by users is actually removed by the company. Given the enormous power Facebook has to shape discourse, this study sought to determine what proportion of reported hate speech is actually removed from the platform and whether patterns exist in Facebook’s decision-making process. To understand how the company interprets and applies its own Community Standards on hate speech, the authors identified and reported hundreds of comments, posts, and images featuring hate speech and recorded Facebook’s decision on whether or not to remove each item. Only about half of the reported content containing hate speech was removed. The findings further point to substantial issues in Facebook’s process, including failures to remove misogynistic hate speech, inconsistency in removing attacks and threats, an inability to consider context in removal decisions, and a general lack of transparency about how removal decisions are made. Facebook’s failure to effectively remove reported hate speech allows misethnic discourses to spread and perpetuates stereotypes.