First Monday

Algorithmic extremism: Examining YouTube's rabbit hole of radicalization by Mark Ledwich and Anna Zaitsev



Abstract
Journalists and academics alike have suggested that YouTube and its behind-the-scenes recommendation algorithm play a role in encouraging online radicalization. This study directly quantifies these claims by examining the role that YouTube’s algorithm plays in suggesting radicalized content. After categorizing nearly 800 political channels, we were able to differentiate between political schemas in order to analyze the algorithmic traffic flows out of and between each group. After conducting a detailed analysis of the recommendations received by each channel type, we refute the popular radicalization claims. To the contrary, these data suggest that YouTube’s recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels, with a slant towards left-leaning or politically neutral channels. Our study thus suggests that YouTube’s recommendation algorithm fails to promote inflammatory or radicalized content, as previously claimed by several outlets.

Contents

1. Introduction
2. Prior academic studies on YouTube radicalization
3. Analyzing the YouTube recommendation algorithm
4. Findings and discussion
5. Limitations and conclusions

 


 

1. Introduction

The Internet can be both a powerful force for good, encouraging prosocial behaviors by providing means for civic participation and community organization (Ferdinand, 2000), and an attractor for anti-social behaviors that create polarizing extremism (Blaya, 2019). This dual nature of the Internet has been evident since the early days of online communication: “flame wars” and “trolling” have been present in online communities for over two decades (Pfaffenberger, 1996; Kayany, 1998; Berghel and Berleant, 2018). While such behaviors were previously confined to Usenet message boards and small IRC channels, the expansion of social media, blogs, and microblogging that followed the rapid growth of Internet participation rates means these inflammatory behaviors are no longer contained and have moved from their early back channels into public consciousness (International Telecommunication Union (ITU), 2019).

The explosion of platforms, as well as ebbs and flows in the political climate, has exacerbated the prevalence of antisocial messaging (Gagliardone, et al., 2015). Research focusing on uninhibited or antisocial communication, as well as extremist messaging online has previously been conducted on platforms including Facebook (Ben-David and Matamoros-Fernández, 2016), Twitter (Burnap and Williams, 2015), Reddit (Chandrasekharan, et al., 2017), 4chan and 8chan (Knuttila, 2011; Nagle, 2017), Tumblr (Agarwal and Sureka, 2016) and even knitting forums such as Ravelry (Shen, et al., 2018).

In addition to these prior studies on other platforms, attention has recently been paid to the role that YouTube may play as a platform for radicalization (Roose, 2019; Anti-Defamation League (ADL), 2019; Munn, 2019). As a content host, YouTube provides a great opportunity for broadcasting a large and widely diverse set of ideas to millions of people worldwide. Included among general content creators are those who specifically target users with polarizing and radicalizing political content. While YouTube and other social media platforms have generally taken a strict stance against most inflammatory material on their platform, extremist groups from jihadi terrorist organizations (Andre, 2012; Awan, 2017), various political positions (de Boer, et al., 2012), and conspiracy theorists have nonetheless been able to permeate the content barrier (Schmitt, et al., 2018).

Extreme content exists on a spectrum. YouTube and other social media platforms have generally taken a strict stance against the most inflammatory materials or materials that are outright illegal. No social media platform tolerates ISIS beheading videos, child pornography, or videos depicting cruelty towards animals. There seems to be a consensus amongst all social media platforms that human moderators or moderation algorithms will remove this type of content (Gillespie, 2018).

YouTube’s automatic removal of the most extreme content, such as explicitly violent acts, child pornography, and animal cruelty, has created a new era of algorithmic data mining (Agarwal and Sureka, 2015; Sureka, et al., 2010; Agarwal and Sureka, 2016). These methods range from metadata scans (Hussain, et al., 2018) to sentiment analysis (Asghar, et al., 2015). Nevertheless, content that falls within an ideological grey area, or that can nonetheless be perceived as “radicalizing,” exists on YouTube (YouTube, 2019b). Definitions of free speech differ from country to country. However, YouTube operates on a global scale while rooted in the cultural background of the United States, with its robust legislation protecting speech (Gagliardone, et al., 2015). Even if there are limits to what YouTube will broadcast, the platform does allow a fair bit of content that could be deemed radicalizing, whether by accident or through a lack of monitoring resources.

Demonetization, flagging, and comment limiting are among the tools available to content moderators on YouTube (YouTube, 2019a). Nevertheless, removing or demonetizing videos or channels that present inflammatory content has not curtailed scrutiny of YouTube by popular media (Martineau, 2019). Recently, the New York Times published a series of articles critiquing YouTube’s recommendation algorithm, which suggests related videos to users based on their prior preferences and on the preferences of similar users (Tufekci, 2018; Roose, 2019). The argument put forward by the New York Times is that users would not otherwise have stumbled upon extremist content if they were not actively searching for it, since recommendation algorithms play a less prevalent role on other Web sites. As such, YouTube’s algorithm may have a role in guiding content, and to some extent preferences, towards more extremist predispositions. Central to this critique is that, while previous commentary on the role that social media Web sites play in spreading radicalization has focused on user contributions, the critique of the recommendation algorithm implicates YouTube’s own programming as an offender.

The critique of the recommendation algorithm is another difference that sets YouTube apart from other platforms. In most cases, researchers look at how users apply social media tools to spread jihadism (Andre, 2012) or alt-right messages of white supremacy (Nagle, 2017). Studies also focus on the methods content creators might use to recruit more participants into various movements, for example, radical left-wing Antifa protests (Vacca, 2019). Nevertheless, the premise is that users of Facebook, Tumblr, or Twitter would not stumble upon extremist content if they were not actively searching for it, since the role of recommendation algorithms on those platforms is less prevalent. There are always some edge cases where innocuous Twitter hashtags can be co-opted for malicious purposes by extremists or trolls (Awan, 2017), but in general, users get what they specifically seek. The case for YouTube is different: the recommendation algorithm is seen as a major factor in how users engage with YouTube content. Thus, the claims about YouTube’s role in radicalization are twofold. First, there are content creators who publish content that has the potential to radicalize (Roose, 2019). Second, YouTube is being scrutinized for how and where the recommendation algorithm directs user traffic (Munn, 2019; Roose, 2019). Nevertheless, empirical evidence of YouTube’s role in radicalization is insufficient (Ribeiro, et al., 2020). There are anecdotes of a radicalization pipeline and a hate group rabbit hole, but academic literature on the topic is scant, as we discuss in the next section.

 

++++++++++

2. Prior academic studies on YouTube radicalization

Data-driven analysis of online radicalization trends is an emerging field of inquiry. To date, few notable studies have examined YouTube’s content in relation to radicalization. As discussed, previous studies have concentrated on the content itself and have largely proposed novel means of analyzing these data (Agarwal and Sureka, 2016; Agarwal, et al., 2017; Hussain, et al., 2018). However, these studies focus on introducing means for content analysis rather than on the content analysis itself.

However, a few studies go beyond content analysis methods. One such study (Ottoni, et al., 2018) analyzed the language used in right-wing channels compared to a set of baseline channels. The study concluded that there was little bias against immigrants or members of the LGBT community, but there was limited evidence for prejudice towards Muslims. However, the study did find evidence of negative language in channels labeled as right-wing. Nevertheless, this study has a few weaknesses. The authors frame their analysis as an investigation into right-wing channels but then proceed to analyze kooky conspiracy channels instead of more mainstream right-wing content. They chose conspiracy theorist Alex Jones’ InfoWars (since removed from YouTube) as their seed channel, and their list of right-wing channels reflects this particular niche. InfoWars and other conspiracy channels represent only a small segment of right-wing channels. In addition, the study applies a topic analysis method derived from the Implicit Association Test (IAT) (Greenwald, et al., 1998), yet the validity of the IAT has been contested (Forscher, et al., 2019). In conclusion, we consider the seed channel selection problematic and the range of comparison channels too vaguely explained (Ottoni, et al., 2018).

In addition to content analysis of YouTube’s videos, Ribeiro, et al. (2019) took a novel approach by analyzing the content of video comment sections, examining which types of videos individual users were likely to comment on over time. Categorizing videos into four categories, including alt-right, alt-light, the intellectual dark Web (IDW), and a final control group, the authors found inconclusive evidence of migration between groups of videos [1].

There is also a tiny portion of commenter migration from the centrist IDW to the potentially radicalizing alt-right videos. However, we believe that one cannot conclude that YouTube is a radicalizing force based on commenter traffic alone, and there are several flaws in the design of the study. Even though the study is commendable, it omits migration from the center to the left-of-center altogether, presenting a somewhat skewed view of commenter traffic. In addition, only a tiny fraction of YouTube viewers engage in commenting. For example, the most popular video by Jordan Peterson, a central character of the IDW, has 4.7 million views but only 10,000 comments. Moreover, commenting on a video does not necessarily signal agreement with the content; a comment on a controversial topic might stem from a desire to get a reaction (trolling or flaming) from either the content creator or other viewers (Moor, et al., 2010; Berghel and Berleant, 2018). We are hesitant to draw any conclusions based on commenter migration without analyzing the content of the comments.

The most recent study, by Munger and Phillips (2019), analyzed political content on YouTube and suggested that content creators operate on a simple supply-and-demand principle. That is, rather than algorithms driving viewer preference and further radicalization, there exists a demand for radical content that is external to YouTube. This demand has inspired content creators to produce more radicalized content, providing a supply of the material to viewers. The study furthermore failed to find support for radicalization pathways; instead, it claims that demand for the most radical content peaked in 2017 and that the significant growth in content now stems from the centrist IDW category, reflecting a deradicalization trend rather than further radicalization. These authors are also critical of claims that watching content on YouTube will spread radical ideas like a “zombie bite,” and of the posited pipeline from moderate, centrist channels to radical right-wing content.

 

++++++++++

3. Analyzing the YouTube recommendation algorithm

Our study focuses on the YouTube recommendation algorithm and the direction of recommendations between different groups of political content. To analyze the common claims from media and other researchers, we have distilled them into specific claims that can be assessed using our data set.

C1 — Radical Bubbles. Recommendations influence viewers of radical content to watch more similar content than they would otherwise, making it less likely that alternative views are presented.

C2 — Right-Wing Advantage. YouTube’s recommendation algorithm prefers right-wing content over other perspectives.

C3 — Radicalization Influence. YouTube’s algorithm influences users by exposing them to more extreme content than they would otherwise seek out.

C4 — Right-Wing Radicalization Pathway. YouTube’s algorithm influences viewers of mainstream and center-left channels by recommending extreme right-wing content, content that aims to disparage left-wing or centrist narratives.

By analyzing whether the data supports these claims, we will be able to draw preliminary conclusions on the impact of the recommendation algorithm.

3.1. YouTube channel selection criteria

The data for this study are collected from two sources. First, YouTube offers a few tools for software developers and researchers. Our research applies the application programming interface (API) that YouTube provides for Web sites that integrate with YouTube, and for research purposes, to retrieve channel information, including view and engagement statistics and country of origin. We have been collecting data daily on channels, videos, and recommendations from the API since November 2018. As of November 2019, we upgraded our collection method to scraping the recommendations and video statistics directly from the YouTube Web site. This modification in data collection was made due to limits on the amount of information we could retrieve through the API, as well as its limited data retention period. We were also unable to verify that the data provided by the API were exactly the same as what viewers would see. All of the analysis presented in this paper covers recommendation and video statistics for the period of November and December 2019. We have retained the data collected prior to this period, and they may be used in further studies.
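As an illustration, the following is a minimal sketch of the kind of channel statistics request the YouTube Data API v3 supports; the API key, channel IDs, and field selection are placeholders rather than our actual collection code.

```python
# Minimal sketch of a channel-statistics request against the YouTube Data API v3.
# The API key and channel IDs are placeholders; error handling and pagination
# are omitted for brevity.
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"  # hypothetical credential
youtube = build("youtube", "v3", developerKey=API_KEY)

def fetch_channel_stats(channel_ids):
    """Return title, country, subscriber count, and total views for each channel ID."""
    response = youtube.channels().list(
        part="snippet,statistics",
        id=",".join(channel_ids),   # up to 50 IDs per request
    ).execute()
    stats = {}
    for item in response.get("items", []):
        stats[item["id"]] = {
            "title": item["snippet"]["title"],
            "country": item["snippet"].get("country"),
            "subscribers": int(item["statistics"].get("subscriberCount", 0)),
            "views": int(item["statistics"].get("viewCount", 0)),
        }
    return stats
```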

The scraped data, as well as the YouTube API, provide us with a view of the recommendations presented to an anonymous account. In other words, the account has not “watched” any videos, retaining the neutral baseline recommendations described in further detail by YouTube in their recent paper explaining the inner workings of the recommendation algorithm (Zhao, et al., 2019). One should note that the recommendations list provided to a user who has an account and is logged into YouTube might differ from the list presented to this anonymous account. However, we do not believe that there is a drastic difference in the behavior of the algorithm. Our confidence in the similarity is due to the description of the algorithm provided by its developers (Zhao, et al., 2019). It would seem counterintuitive for YouTube to apply vastly different criteria for anonymous users and users who are logged into their accounts, especially considering how complex creating such a recommendation algorithm is in the first place.

The study includes 816 channels which fulfill the following criteria:

The primary channel selection was made based on the number of subscriptions. The YouTube API provides channel details, including the number of subscribers and the aggregate all-time views of the channel. YouTube also provides detailed information on the views and dislikes of each video, thus indicating the additional engagement each video receives from users.

Generally, only channels that had over ten thousand subscriptions were analyzed. However, we also included channels that were averaging over 10,000 views per month, even if the subscription numbers were lower than our threshold.

We based our selection criteria on the assumption that tiny channels with a minimal number of views or subscriptions are unlikely to fulfill YouTube’s recommendation criteria: “1) engagement objectives, such as user clicks, and degree of engagement with recommended videos; 2) satisfaction objectives, such as user liking a video on YouTube, and leaving a rating on the recommendation.” (Zhao, et al., 2019)

Another threshold for the channels was the focus of the content: only channels where more than 30 percent of the content concerned U.S. political news, cultural news, or cultural commentary were selected. We based the cultural commentary selection on a list of social issues on the Web site ISideWith. The list of these channels was compiled using a variety of qualitative techniques.
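For illustration, a compact sketch of how these inclusion criteria could be applied to a table of candidate channels; the column names are assumptions about how such data might be organized, not the schema we used.

```python
# Hedged sketch of the channel inclusion filter described above.
# Column names (subscribers, monthly_views, political_share) are illustrative.
import pandas as pd

def select_channels(channels: pd.DataFrame) -> pd.DataFrame:
    large_enough = (channels["subscribers"] > 10_000) | (channels["monthly_views"] > 10_000)
    political = channels["political_share"] >= 0.30  # share of U.S. political/cultural content
    return channels[large_enough & political]
```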

The lists provided by Ad Fontes Media provide a starting point for the more mainstream and well-known alternative sites. Several blogs and other Web sites further list political channels or provide tools for advanced searches based on topics (ChannelCrawler, 2019; SocialBlade, 2019; Feedspot, 2019). We also analyzed recent academic studies and their lists of channels, such as Ribeiro, et al. (2019) and Munger and Phillips (2019). However, not all channels included in these two studies fit our selection criteria. Thus, one can observe differences in channel lists and categories between our research and other recent studies on a similar subject.

We added emerging channels by following the YouTube recommendation algorithm, which suggests similar content, and including those channels that fit the criteria and passed our thresholds. We can conceptualize this use of the recommendation algorithm as a type of snowball sampling, a technique commonly applied in the social sciences for interview-based data collection and in the analysis of social networks. Each source is “requested” to nominate a few candidates that would be of interest to the study. The researcher follows these recommendations until the informants reveal no new information or candidates no longer meet the inclusion criteria (e.g., channels become too marginal, or content is not political). In our case, there is a starting point; a channel acts as a node in the network. Each connected channel (e.g., node) in the network is visited. Depending on the content of the channel, it is either added to the collection of channels or discarded. Channels are visited until there are no new channels, or the new channels do not fit the original selection criteria (Lee, et al., 2006).
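A minimal sketch of this snowball-style traversal follows, assuming helper functions get_recommended_channels() and passes_criteria() that stand in for the scraping and screening steps; it illustrates the traversal logic rather than our actual implementation.

```python
# Snowball-style crawl over the recommendation graph: start from seed channels,
# follow recommendation links, and keep channels that pass the selection criteria.
from collections import deque

def snowball_crawl(seed_channels, get_recommended_channels, passes_criteria):
    visited = set(seed_channels)
    selected = set(seed_channels)
    queue = deque(seed_channels)
    while queue:
        channel = queue.popleft()
        for neighbour in get_recommended_channels(channel):
            if neighbour in visited:
                continue
            visited.add(neighbour)
            if passes_criteria(neighbour):   # e.g., size and political-content thresholds
                selected.add(neighbour)
                queue.append(neighbour)      # keep expanding only from accepted channels
    return selected
```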

3.2. The categorization process

The categorization of YouTube channels was a non-trivial task. Activist organizations provide lists and classifications, but many of them are unreliable. For example, there are several controversies around the lists of hate groups compiled by the Southern Poverty Law Center (SPLC) (Thiessen, 2018). Also, there seems to be a somewhat contentious relationship between the Anti-Defamation League and YouTubers (Mandel, 2017; Alexander, 2019). We decided to create our own categorization, based on multiple existing sources.

First, several resources exist for categorizing mainstream or alternative media outlets. Mainstream media such as CNN or Fox News have been studied and categorized over time by various outlets (Eberl, et al., 2017; Ribeiro, et al., 2018). In our study, we applied two sites that provide information on the political views of mainstream media outlets: Ad Fontes Media and Media Bias/Fact Check. Neither Web site is guaranteed to be unbiased, but by cross-referencing both, one can come to a relatively reliable categorization of the political bias of the major news networks. These sites covered the 50 largest mainstream channels, which account for almost 80 percent of all YouTube views.

Nevertheless, the majority of political YouTube channels were not included in sources categorizing mainstream outlets. After reviewing the existing literature on political YouTube and the categorizations created by authors such as Ribeiro, et al. (2019) and Munger and Phillips (2019), we decided to create a new categorization. Our study strives for a granular and precise classification to facilitate a deep dive into the political subcultures of YouTube, and the extant categories were too narrow in their scope. We decided to apply both a high-level left-center-right political classification for high-level analysis and a more granular distinction between 18 separate labels, described briefly in Table 1 and at length in Appendix A.4.

In addition to these ‘soft tags’ we applied a set of so-called ‘hard tags.’ These additional tags allowed us to differentiate between YouTube channels that were part of mainstream media outlets and independent YouTubers. The hard tags are discussed in more detail in Appendix A.3. The difference between ‘soft’ and ‘hard’ tags is that hard tags were based on external sources, whereas the soft tags were based on the content analysis of the labelers.

The tagging process allowed each channel to be characterized by a maximum of four different tags in order to create meaningful and fair categories for the content. In addition to the labeling done by the two authors, we recruited an additional volunteer labeler, who was well versed in the YouTube political sphere and whom we trusted to label channels accurately based on their existing content. When two or more labelers assigned the same label to a channel, that label was applied. When the labelers disagreed and ended in a draw, the tag was not assigned; a majority was needed for a tag to be applied.
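A small sketch of this majority rule, assuming the per-channel labels are available as one tag set per labeler; the data layout is illustrative.

```python
# Majority-vote tag assignment: a tag is kept only if more than half of the
# labelers (i.e., at least two of three) applied it, with at most four tags per channel.
from collections import Counter
from itertools import chain

def assign_tags(labeler_tags, n_labelers=3, max_tags=4):
    """labeler_tags: list of tag sets, one per labeler, for a single channel."""
    counts = Counter(chain.from_iterable(labeler_tags))
    majority = [tag for tag, c in counts.items() if c > n_labelers / 2]
    majority.sort(key=lambda tag: counts[tag], reverse=True)  # strongest agreement first
    return majority[:max_tags]

# Example: two of three labelers tagged the channel "Libertarian", so it is kept.
print(assign_tags([{"Libertarian", "Anti-SJW"}, {"Libertarian"}, {"Educational"}]))
```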

 

Figure 1: The intraclass correlation coefficient between the three labelers.

The visual analysis in Figure 1 shows the intraclass correlation coefficient (ICC) between the three labelers. Based on this analysis, we can determine that all three labelers were in agreement when it comes to the high-level labels, i.e., left-center-right. In addition, there is high agreement in the majority of the granular categories. On the left side of the graph, we can see the intraclass correlation coefficient values, estimates of the “real” information captured by our classification, which range from 0 to 1. The larger the number, the more similar the tags were. On the right side of the Figure, we see the reviewer agreement in percentages.
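For illustration, reliability figures of this kind can be computed from a long-format ratings table, for example with the pingouin library; the toy data, layout, and choice of ICC form below are assumptions, not our exact procedure.

```python
# Sketch: intraclass correlation and simple pairwise agreement for three labelers.
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "channel": sorted(["A", "B", "C", "D", "E"] * 3),
    "labeler": ["L1", "L2", "L3"] * 5,
    "left_right": [1, 1, 2,  3, 3, 3,  2, 2, 2,  1, 2, 1,  3, 3, 2],  # 1 = left, 2 = center, 3 = right
})

icc = pg.intraclass_corr(data=ratings, targets="channel",
                         raters="labeler", ratings="left_right")
print(icc[["Type", "ICC", "CI95%"]])

# Pairwise percentage agreement for the same ratings.
wide = ratings.pivot(index="channel", columns="labeler", values="left_right")
print(f"L1 vs L2 agreement: {(wide['L1'] == wide['L2']).mean():.0%}")
```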

ICC values above 0.75 are considered excellent, values between 0.59 and 0.75 good, and values above 0.4 fair (Cicchetti, 1994). In our categorization, few classifications measure under 0.4. However, we believe that the explanation for these lower values is related to the nature of the categories in question. The low coefficient scores of the ‘Provocateur’, ‘Anti-whiteness’, and ‘Revolutionary’ groups could be explained by the labelers’ hesitation to apply these rather extreme labels where consistent evidence was lacking. Moreover, since each channel was allowed four different ‘soft tags’ defining these subcategories, such channels were likely tagged with other, milder tags. The lack of agreement on the ‘Educational’ label is best explained by the fact that this category might be somewhat redundant: political content, even educational content, often has a clear bias and already belongs to one or more stronger categories, such as Partisan Left or the centrist Unclassified.

However, if one looks at the percentages of agreement, agreement is very high in most cases. The only category where disagreement seems significant is the left-right-center categorization. However, this disagreement can be explained by the weighting applied when calculating the ICC.

To assign a label, we investigated which topics the channels discussed and from which perspective. Some channels are overtly partisan or declare their political stances and support for political parties in their introductions or have posted several videos where such topics are discussed. For example, libertarian channels support Ron and Rand Paul (libertarian politicians affiliated with the Republican party), discuss Austrian economics with references to economists such as Friedrich von Hayek and Ludwig von Mises, or draw on the fictional works of the author Ayn Rand. Similarly, many channels dedicated to various social justice issues title their videos to reflect the content and the political slant, e.g., “Can Our Planet Survive Capitalism” or “The Poor Go To Jail And The Rich Make Bail In America” from AJ+.

Nevertheless, other channels are more subtle and required more effort to tease out their affiliation. In these cases, we analyzed the perspective that these channels took on political events that have elicited polarized opinions (for example, the nomination of Brett Kavanaugh to the U.S. Supreme Court, the Migrant Caravan, Russiagate). Similarly, we also analyzed the reactions that the channels had to polarizing cultural events or topics (e.g., protests at university campuses, trans activism, free speech). If the majority of these considerations aligned in the same direction, then the channel was designated as left-leaning or right-leaning. If there was a mix, then the channel was likely assigned to the centrist category.

The only way to conduct this labeling was to watch the content on the channels until the labelers found enough evidence for assigning specific labels. For some channels, this was relatively straightforward: the channels had introductory videos that stated their political perspectives. Some of these introductions clearly indicate the political views of the content creator; some are more subtle. For example, the political commentator Kyle Kulinski explicitly states his political leanings (libertarian-left) in the description of his channel SecularTalk. In contrast, the self-described classical liberal discussion host Dave Rubin has a short introduction featuring various guests, providing examples of the political discussions that take place on his channel, The Rubin Report. In other cases, the labelers could not assign a label based on the introduction or description but had to watch several videos on the channel to determine its political leanings. On average, each labeler watched two to three videos per channel. In more difficult cases, the labelers had to watch more content, sometimes over 10 sample videos. The labelers were previously familiar with several channels, but for the purposes of extending their understanding of political YouTube for this study, each labeler watched over 60 additional hours of YouTube videos. These hours were spent on content the labelers were previously unfamiliar with, in order to determine political leanings without miscategorizing channels and thus misrepresenting the views of the content creators.

Based on the 18 classification categories, we created 13 aggregate groups that broadly represent the political views of the YouTube channels. The 18 ‘soft tags’ were aggregated into ideological groups that better differentiate between the channels. For more details on tag aggregation, please see Appendix A.2. These groupings, rather than the more granular 18 categories, were applied in the data visualization for clarity and differentiation purposes. The next section discusses the data in more detail.

 

++++++++++

4. Findings and discussion

The data on YouTube channels and the viewership each channel garners provide us with insights as to how the recommendation algorithm operates.

Per the data collected for 2019, YouTube hosted a greater number of channels with content that could be considered right-wing than channels of other political orientations. In defining right-wing, we considered categories such as the provocative “Anti-SJW” (anti-Social Justice Warrior; “SJW” is a term describing feminist/intersectionality advocates), Partisan Right, Religious Conservative, and to some extent Conspiracy channels (for brief explanations, see Table 1; for longer descriptions of the labels, see Appendix A.4). However, these more numerous channels gained only a fraction of the views of mainstream media and centrist channels. Categories such as Center/Left MSM, Unclassified (consisting mainly of centrist, non-political, and educational channels), and Partisan Left capture the majority of viewership. The difference here is considerable: where Center/Left MSM has 22 million daily views, the largest non-mainstream category, Anti-SJW, has 5.6 million daily views. Figure 2 illustrates the number of views for each category compared to the number of channels [2].

 

Figure 2: Daily views and number of channels.

 

Table 1: Categorization soft tags and examples.
Tag | Description | Examples
Conspiracy | A channel that regularly promotes a variety of conspiracy theories. | X22Report, Next News Network
Libertarian | Political philosophy with liberty as the main principle. | Reason, John Stossel, Cato Institute
Anti-SJW | Channels with a significant focus on criticizing “social justice” (see next category), with a positive view of the marketplace of ideas and of discussing controversial topics. | Sargon of Akkad, Tim Pool
Social Justice | Promotes identity politics and intersectionality. | Peter Coffin, hbomberguy
White Identitarian | Identifies with, or is proud of, the superiority of “whites” and Western civilization. | NPI/RADIX (Richard Spencer)
Educational | A channel that mainly focuses on educational material. | TED, SoulPancake
Late Night Talk Shows | Channels presenting humorous monologues about the daily news. | Last Week Tonight, Trevor Noah
Partisan Left | Focused on politics and exclusively critical of Republicans. | The Young Turks, CNN
Partisan Right | Mainly focused on politics, exclusively critical of Democrats and supportive of Trump. | Fox News, Candace Owens
Anti-theist | Self-identified atheists who are also actively critical of religion. | CosmicSkeptic, Matt Dillahunty
Religious Conservative | A channel with a focus on promoting Christianity or Judaism in the context of politics and culture. | Ben Shapiro, PragerU
Socialist (Anti-Capitalist) | Focus on the problems of capitalism. | Richard Wolff, NonCompete
Revolutionary | Endorses the overthrow of the current political system. | Libertarian Socialist Rants, Jason Unruhe
Provocateur | Enjoys offending and receiving any kind of attention. | StevenCrowder, MILO
MRA (Men’s Rights Activist) | Focus on advocating for men’s rights. | Karen Straughan
Missing Link Media | Channels not large enough to be considered “mainstream.” | Vox, NowThis News
State Funded | Channels funded by governments. | PBS NewsHour, Al Jazeera, RT
Anti-Whiteness | A subset of Social Justice with an additional focus on intersectional beliefs about race and whiteness. | African Diaspora News Channel

 

Figure 3 presents a chart of channel relations, illustrating relations between channels and channel clusters based on the concept of a force-directed graph (Bannister, et al., 2012). The area of each bubble, but not the radius, corresponds to the number of views a channel has. The strength (thickness) of the links between channels corresponds to the proportion of recommendations between those channels. From this chart, we can see that left-wing and centrist mainstream media channels are clustered tightly together. The Partisan Right cluster is also closer to the large mainstream media cluster than it is to any other category. Anti-SJW and Provocative Anti-SJW channels are clustered tightly together with Libertarian channels, while smaller categories such as Anti-theist and Socialist channels are very loosely linked to a limited number of other categories. White Identitarian channels are small and dispersed across the graph.

 

Figure 3: Channel clusters.

When analyzing the recommendation algorithm, we look at the impressions the algorithm provides to viewers of each channel. By impressions, we refer to an estimate of the number of times a viewer was presented with a specific recommendation. This number is an estimate because only YouTube is privy to data on actual impressions. However, public-facing data obtained from the channels themselves provide us with information on at least the top 10 recommendations. A simplified estimate of the number of impressions from channel A to channel B is calculated by taking the share of channel A’s recommendations that point to channel B and scaling it by channel A’s views and the number of recommendations shown per video, which we take to be ten (for further information, see Appendix A.1). Such a calculation of impressions allows us to aggregate the data between channels and categories.
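Under this reading of the simplified calculation (a sketch only; the exact formula is given in Appendix A.1), the estimate can be written as:

$$ \text{impressions}_{A \to B} \approx \frac{\text{recs}_{A \to B}}{\text{recs}_{A,\,\text{total}}} \times \text{views}_{A} \times 10 $$

where recs_{A→B} is the number of collected recommendations from channel A’s videos to channel B, recs_{A,total} is the total number of recommendations collected from channel A’s videos, views_A is channel A’s view count for the period, and 10 is the assumed number of visible recommendations per video view.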

 

Figure 4: Flow diagram presenting the flow of recommendations between different groups.

Figure 4 presents the recommendation algorithm in a flow diagram format. The diagram shows the seed channel categories on the left side and the recommended channel categories on the right side. The sizes of the channel categories are based on overall channel view counts. The fourth category from the top is the most viewed channel category, the Center/Left Mainstream media category (MSM). This group is composed of late-night talk shows and mainstream media channels, including the New York Times’ YouTube channel. The Partisan Left category closely follows the Center/Left MSM category, with the primary differentiating factor being that the Partisan Left category includes the content of independent YouTube creators. Together, these two most viewed categories garner close to 40 million daily views.

Several smaller categories follow the top two categories. Notably, the next two largest categories are also centrist or left-leaning in their political outlook. For example, the two largest channels in the Anti-SJW category (JRE Clips and PowerfulJRE) both belong to the American podcast host Joe Rogan, who includes guests from a wide range of political beliefs. The Unclassified group consists of centrist, mostly apolitical, educational channels such as TED or government-owned mainstream media channels such as Russia Today. Based on our flow diagram, we can see that the recommendation algorithm directs traffic from all channel groups into the two largest ones, away from more niche categories.

Based on these data, we can now evaluate the claims that the YouTube recommendation algorithm will recommend content that contributes to the radicalization of YouTube’s user base. By analyzing each radicalization claim and whether the data support these claims, we can also conclude whether the YouTube algorithm has a role in political radicalization.

The first claim tested is that YouTube creates C1 — Radical Bubbles, i.e., recommendations influence viewers of radical content to watch more similar content than they would otherwise, making it less likely that alternative views are presented. Based on our data analysis, this claim is partially supported. The flow diagram presented in Figure 4 shows a high-level view of the intra-category recommendations. The recommendations provided by the algorithm remain within the same category or within categories that bear similarity to the original content viewed by the audience. However, from the flow diagram, one can observe that many channels receive fewer impressions than their views would suggest, i.e., the recommendation algorithm directs traffic towards other channel categories. A detailed breakdown of intra-category and cross-category recommendations is presented as recommendation percentages in Figure 12 and as numbers of impressions in Figure 13 in Appendix B; both show the strength of intra-category recommendations by channel.

We can see that the recommendation algorithm does have an intra-category preference, but this preference is dependent on the channel category. For example, 51 percent of traffic from Center/Left MSM channels is directed to other channels belonging to the same category (see Figure 12). The remaining recommendations are directed mainly to two categories: Partisan Left (18.2 percent) and Partisan Right (11 percent), both primarily consisting of mainstream media channels.
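Percentages of this kind can be derived from a table of estimated impressions between channels; the sketch below shows one way to do so, with column names that are assumptions about the data layout rather than our actual schema.

```python
# Sketch: row-normalized category-to-category recommendation shares.
import pandas as pd

def category_share_matrix(impressions: pd.DataFrame) -> pd.DataFrame:
    """impressions: one row per (from_category, to_category) pair with an 'impressions' count."""
    matrix = impressions.pivot_table(index="from_category", columns="to_category",
                                     values="impressions", aggfunc="sum", fill_value=0)
    # Each row then sums to 100, i.e., the share of a category's outgoing traffic.
    return matrix.div(matrix.sum(axis=1), axis=0) * 100
```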

 

Figure 5: The direction of algorithmic recommendations.

Figure 5 presents a simplified version of the recommendation flows, highlighting the channel categories that benefit from recommendation traffic. From this figure, we can observe that there is a significant net flow of recommendations towards channels that belong to the Partisan Left category. The Social Justice category, for example, suffers from cross-category recommendations: for viewers of channels categorized as Social Justice, the algorithm presents 5.9 million more recommendations per day towards Partisan Left channels than vice versa, and another 5.2 million per day towards Center/Left MSM channels. Figure 5 also shows a “pipeline” that directs traffic towards the Partisan Left category from other groups via the intermediary Center/Left MSM category. This is true even for the other beneficiary category, the Partisan Right, which loses 2.9 million recommendations to Partisan Left but benefits from a net flow of recommendations from other right-leaning categories (16.9 million).
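The net flows shown in Figure 5 are of the form impressions from category A to category B minus impressions from B to A; the sketch below illustrates one way such pairwise net flows could be computed, again with an assumed input layout.

```python
# Sketch: pairwise net flow of estimated impressions between categories.
import pandas as pd

def net_flows(impressions: pd.DataFrame) -> pd.DataFrame:
    totals = impressions.groupby(["from_category", "to_category"])["impressions"].sum()
    rows = []
    for (a, b), forward in totals.items():
        if a >= b:                           # visit each unordered pair once
            continue
        backward = totals.get((b, a), 0)
        # Positive values mean category b gains more impressions from a than it returns.
        rows.append({"from": a, "to": b, "net_impressions": forward - backward})
    return pd.DataFrame(rows)
```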

However, when it comes to categories that could be potentially radicalizing, this claim is only partially supported. Channels that we grouped into Conspiracy or White Identitarian have very low percentages of recommendations within the group itself (as shown in Figure 12). In contrast, channels that we categorized as Center/Left MSM, Partisan Left, or Partisan Right have higher numbers of recommendations that remain within the group. These data show that a dramatic shift towards more extreme content, as suggested by the media (Roose, 2019; Tufekci, 2018), is untenable.

Second, we posited that there is a C2 — Right-Wing Advantage, i.e., YouTube’s recommendation algorithm prefers right-wing content over other perspectives. This claim is not supported by the data. On the contrary, the recommendation algorithm favors content that falls within mainstream media groupings. YouTube has stated that its recommendations are based on content that individual users watch and engage with and that watching habits influence 70 percent of recommendations.

 

Figure 6: Algorithmic advantage by groups.

Figure 6 shows the algorithmic advantage based on daily views. From this Figure, we can observe that two of the top three categories (Partisan Left and Partisan Right) receive more recommendations than other categories, regardless of which category the seed channels belong to. Conversely, channels in the other categories are not suggested by the algorithm to a comparable extent. In other words, the recommendation algorithm directs traffic from all channels towards Partisan Left and Partisan Right channels, regardless of the category of the channel the user viewed.

 

Figure 7: High-level view of algorithmic advantages/disadvantages in recommendation impressions.

We can also observe this trend from a higher level aggregate categorization, as is presented in Figure 7. The Figure affirms that channels that present left or centrist political content are advantaged by the recommendation algorithm, while channels that present content on the right are at a disadvantage.

The recommendation algorithm advantages several groups to a significant extent. For example, we can see that when one watches a video belonging to the Partisan Left category, the algorithm presents an estimated 3.4 million more impressions towards the Center/Left MSM category than it does the other way around. Conversely, we can see that the channels that suffer the most substantial disadvantages are again channels that fall outside the mainstream media. Both right-wing and left-wing YouTuber channels are disadvantaged, with White Identitarian and Conspiracy channels being the least advantaged by the algorithm. For viewers of conspiracy channel videos, there are 5.5 million more recommendations to Partisan Right videos than vice versa.

We should also note that right-wing videos are not the only disadvantaged groups. Channels discussing topics such as social justice or socialist views are disadvantaged by the recommendation algorithm as well. The common feature of disadvantaged channels is that they are seldom run by broadcasting networks or mainstream outlets; they are independent content creators.

The third claim concerns YouTube’s potential C3 — Radicalization Influence, i.e., that YouTube’s algorithm influences users by exposing them to more extreme content than they would otherwise seek out; this claim is also not supported by our data. On the contrary, the recommendation algorithm appears to actively restrict traffic towards extreme right-wing categories. The two most drastic examples are the channels we have grouped under the White Identitarian and Conspiracy categories. These two groups receive almost no traffic from the recommendation algorithm, as presented in Figure 12 and Figure 6.

 

Figure 8: Traffic from white identitarian channels.

Another way to visualize the lack of traffic from recommendations is to view the flow of recommendations. Figures 8 and 9 show that the majority of recommendations flow towards Partisan Right, Center/Left MSM, and Partisan Left content. White Identitarian channel traffic is also directed towards Libertarian and, to a small extent, even towards centrist Anti-SJW content.

 

Figure 9: Traffic from conspiracy channels.

Figure 2 illustrated that the daily views for White Identitarian channels are marginal. Even comparing the views of White Identitarian channels with those of Conspiracy channels, Conspiracy channels are viewed twice as much as content created by White Identitarians. This discrepancy is notable since Conspiracy channels seem to gain zero traffic from recommendations (as shown in Figure 12) and are the least advantaged group of all categories. While MRA (Men’s Rights Activist) channels form the smallest category in our study, the White Identitarian category is in the bottom five of all groups. Another comparison that illustrates the marginality of White Identitarian channels is that this group consists of only 37 channels with enough views to fit within the scope of the study. The White Identitarian category includes almost the same number of channels as the Libertarian category but receives only a third as many views.

Our fourth claim stated that there exists a C4 — Right-Wing Radicalization Pathway, i.e., that YouTube’s algorithm moves viewers of mainstream and center-left channels towards the extreme right via increasingly left-critical content. Again, these data suggest the opposite. The right-wing channel that benefits the most from the recommendation algorithm is Fox News, a mainstream right-wing media outlet. Figure 10 shows that Fox News receives over 50 percent of the recommendations from other channels that map to the Partisan Right category. Fox News also receives large numbers of recommendations from every other category that could be considered right-wing. This observation aligns with the overall trend of the algorithm benefiting mainstream media outlets over independent YouTube channels. Fox News is likely disproportionately favored on the right due to a lack of other right-leaning mainstream outlets, while traffic in the Center/Left MSM and Partisan Left categories is more evenly distributed among their representative mainstream outlets.

 

Figure 10: Algorithmic advantage for Fox News.

We can also analyze the overall net benefit that mainstream media channels receive from the algorithm by aggregating the mainstream channels into one high-level group and independent YouTubers into another, and comparing the algorithmic advantages and disadvantages of each. A third group, separated from both mainstream media and independent YouTubers, is what we call “Missing Link Media.” This group encompasses media outlets that have financial backing from traditional media companies but are not considered part of the conventional mainstream media. For example, left-wing channels such as Vox or Vice belong to this category, while BlazeTV is an equivalent on the right. Figure 11 shows the clear advantage mainstream media channels receive over both independent channels and Missing Link Media channels.

 

Figure 11: Algorithmic advantage of mainstream media.

Finally, based on the findings and analysis of our four claims, we conclude that these data offer little support for the claim that YouTube’s recommendation algorithm recommends content that might contribute to the radicalization of the user base. Only the first claim is partially supported, while the data refute the other three claims. Rejection of these claims is in line with studies that critique the portrayal of YouTube’s algorithm as a pathway to radicalization (Munger and Phillips, 2019). Table 2 summarizes our findings.

 

Table 2: Claims and data support.
Claim | Data
C1 — Radical Bubbles. Recommendations influence viewers of radical content to watch more similar content than they would otherwise, making it less likely that alternative views are presented. | Partially supported
C2 — Right-Wing Advantage. YouTube’s recommendation algorithm prefers right-wing content over other perspectives. | Not supported
C3 — Radicalization Influence. YouTube’s algorithm influences users by exposing them to more extreme content than they would otherwise seek out. | Not supported
C4 — Right-Wing Radicalization Pathway. YouTube’s algorithm influences viewers of mainstream and center-left channels by recommending extreme right-wing content, content that aims to disparage left-wing or centrist narratives. | Not supported

 

YouTube has stated that its algorithm favors more recent videos that are popular in terms of both views and engagement (Zhao, et al., 2019). The algorithm recommends videos based on a user profile or, for anonymous viewers, the most current popular videos. YouTube has stated that it attempts to maximize the likelihood that a user will enjoy their recommended videos and will remain on the platform for as long as possible. Viewing history thus determines whether the algorithm recommends more extreme content to a viewer. Contrary to the radicalization claims, our data show that even if the user is watching very extreme content, their recommendations will be populated with a mixture of extreme and more mainstream content. YouTube is, therefore, more likely to steer people away from extremist content than towards it.

 

++++++++++

5. Limitations and conclusions

There are several limitations to our study that must be considered in future work. First, the main limitation is the anonymity of the data set and the recommendations. The recommendations the algorithm provided were not based on videos watched over extended periods. We expect, and have anecdotally observed, that the recommendation algorithm becomes more fine-tuned and context-specific with each video that is watched. We currently do not have a way of collecting such information from individual user accounts, but our study shows that the anonymous user is generally directed towards mainstream rather than extreme content. Similarly, anecdotal evidence from a personal account shows that YouTube suggests content that is very similar to previously watched videos while also directing traffic towards more mainstream channels. That is, contrary to prior claims, the algorithm does not appear to stray into suggesting videos several degrees away from a user’s normal viewing habits.

Second, the video categorization of our study is partially subjective. Although we have taken several measures to bring objectivity into the classification and analyzed similarities between labelers by calculating intraclass correlation coefficients, there is no way to eliminate bias entirely. There is always a possibility for disagreement and ambiguity in categorizations of political content. We therefore welcome future suggestions to help us improve our classification.

In conclusion, our study shows that one cannot proclaim that YouTube’s algorithm, in its current state, is leading users towards more radical content. There is clearly plenty of content on YouTube that one might view as radicalizing or inflammatory. However, responsibility for that content lies with the content creators and the consumers themselves. Shifting the responsibility for radicalization from users and content creators to YouTube is not supported by our data. The data show that YouTube does the exact opposite of what the radicalization claims suggest. YouTube engineers have said that 70 percent of all views are based on recommendations (Zhao, et al., 2019). When this remark is combined with the fact that the algorithm clearly favors mainstream media channels, we believe it is fair to state that the majority of views are directed towards left-leaning mainstream content.

We agree with Munger and Phillips (2019) that scrutiny over radicalization should be directed at the content creators and at the demand and supply for radical content, not at the YouTube algorithm. Indeed, the current iteration of the recommendation algorithm works against extremists. Nevertheless, YouTube has conducted several deletion sweeps targeting extremist content (Martineau, 2019). These actions might be ill-advised. Deleting extremist channels from YouTube does not reduce the supply of such content (Munger and Phillips, 2019). Banned content creators migrate to other, more permissive video hosting sites. For example, a few channels that were initially included in the alt-right category of Ribeiro, et al. (2019) are now gone from YouTube but still exist on alternative platforms such as BitChute. The danger we see here is that there are no algorithms directing viewers from extremist content towards more centrist materials on these alternative platforms or the dark Web, making deradicalization efforts more difficult (Hussain and Saltman, 2014). We believe that YouTube has the potential to act as a deradicalization force. However, it seems that YouTube itself will first have to decide whether the platform is meant for independent YouTubers or is just another outlet for mainstream media.

 

About the authors

Mark Ledwich is a software engineer in Brisbane, Australia.
E-mail: mark [at] ledwich [dot] com [dot] au

Anna Zaitsev is a Postdoctoral Scholar and Lecturer in the School of Information at the University of California, Berkeley.
E-mail: anna [dot] zaitsev [at] berkeley [dot] edu

 

Acknowledgments

First, we would like to thank our volunteer labeler for all the hours spent on YouTube. We would also like to thank Cody Moser, Brenton Milne and Justin Murphy and everyone else who gave their feedback on the early drafts of this paper.

Our data, channel categorization, and data analysis used in this study are all available on GitHub. Please visit the GitHub page (https://github.com/markledwich2/Recfluence) for links to data or the data visualization. We welcome comments, feedback, and critique on the channel categorization as well as other methods applied in this study.

 

Notes

1. The study borrows a definition of the alt-right from the Anti-Defamation League: a “loose segment of the white supremacist movement consisting of individuals who reject mainstream conservatism in favor of politics that embrace racist, anti-Semitic and white supremacist ideology” (Ribeiro, et al., 2020, p. 132). The alt-light is defined as a civic nationalist group rather than a racial nationalist one. The third category, the “intellectual dark Web” (IDW), is defined as a collection of academics and podcasters who engage in controversial topics. The fourth category, the control group, includes a selection of channels ranging from fashion magazine channels such as Cosmopolitan and GQ to a set of left-wing and right-wing mainstream media outlets.

2. Figure 2 and all following Figures apply the aggregated categories rather than the granular labels shown in Figure 1 and discussed in Appendix A.2.

3. This group has a significant overlap with the intellectual dark Web group as described by Ribeiro, et al. (2019) and Munger and Phillips (2019).

 

References

N. Agarwal, R. Gupta, S.K. Singh, and V. Saxena, 2017. “Metadata based multi-labelling of YouTube videos,” 2017 Seventh International Conference on Cloud Computing, Data Science & Engineering — Confluence, pp. 586–590.
doi: https://doi.org/10.1109/CONFLUENCE.2017.7943219, accessed 25 February 2020.

S. Agarwal and A. Sureka, 2016. “Spider and the flies: Focused crawling on Tumblr to detect hate promoting communities,” arXiv 1603.09164 (30 March), at https://arxiv.org/abs/1603.09164, accessed 25 February 2020.

S. Agarwal and A. Sureka, 2015. “Topic-specific YouTube crawling to detect online radicalization,” In: W. Chu, S. Kikuchi, and S. Bhalla (editors). Databases in networked information systems. Lecture notes in computer science, volume 8999. Cham, Switzerland: Springer, pp. 133–151.
doi: https://doi.org/10.1007/978-3-319-16313-0_10, accessed 25 February 2020.

J. Alexander, 2019. “PewDiePie pulls $50,000 pledge to Jewish anti-hate group after fan backlash,” Verge (12 September), at https://www.theverge.com/2019/9/12/20862696/pewdiepie-adl-donation-backlash-100-million-subscribers, accessed 27 December 2019.

V. Andre, 2012. “‘Neojihadism’ and YouTube: Patani militant propaganda dissemination and radicalization,” Asian Security, volume 8, number 1, pp. 27–53.
doi: https://doi.org/10.1080/14799855.2012.669207, accessed 25 February 2020.

Anti-Defamation League (ADL), 2019. “Despite YouTube policy update, anti-Semitic, white supremacist channels remain,” ADL Center on Extremism (15 August), at https://www.adl.org/blog/despite-youtube-policy-update-anti-semitic-white-supremacist-channels-remain, accessed 25 February 2020.

M.Z. Asghar, S. Ahmad, A. Marwat, and F.M. Kundi, 2015. “Sentiment analysis on YouTube: A brief survey,” arXiv 1511.09142 (30 November), at https://arxiv.org/abs/1511.09142, accessed 25 February 2020.

I. Awan, 2017. “Cyber-extremism: Isis and the power of social media,” Society, volume 54, number 2, pp. 138–149.
doi: https://doi.org/10.1007/s12115-017-0114-0, accessed 25 February 2020.

M.J. Bannister, D. Eppstein, M.T. Goodrich, and L. Trott, 2012. “Force-directed graph drawing using social gravity and scaling,” In: W. Didimo and M. Patrignani (editors). Graph drawing. Lecture Notes in Computer Science, volume 7704. Berlin: Springer, pp. 414–425.
doi: https://doi.org/10.1007/978-3-642-36763-2_37, accessed 25 February 2020.

A. Ben-David and A. Matamoros-Fernández, 2016. “Hate speech and covert discrimination on social media: Monitoring the Facebook pages of extreme-right political parties in Spain,” International Journal of Communication, volume 10, pp. 1,167–1,193, and at https://ijoc.org/index.php/ijoc/article/view/3697, accessed 25 February 2020.

H. Berghel and D. Berleant, 2018. “The online trolling ecosystem,” Computer, volume 51, number 8, pp. 44–51.
doi: https://doi.org/10.1109/MC.2018.3191256, accessed 25 February 2020.

C. Blaya, 2019. “Cyberhate: A review and content analysis of intervention strategies,” Aggression and Violent Behavior, volume 45, pp. 163–172.
doi: https://doi.org/10.1016/j.avb.2018.05.006, accessed 25 February 2020.

P. Burnap and M.L. Williams, 2015. “Cyber hate speech on Twitter: An application of machine classification and statistical modeling for policy and decision making,” Policy & Internet, volume 7, number 2, pp. 223–242.
doi: https://doi.org/10.1002/poi3.85, accessed 25 February 2020.

E. Chandrasekharan, U. Pavalanathan, A. Srinivasan, A. Glynn, J. Eisenstein, and E. Gilbert, 2017. “You can’t stay here: The efficacy of Reddit’s 2015 ban examined through hate speech,” Proceedings of the ACM on Human-Computer Interaction, article number 31.
doi: https://doi.org/10.1145/3134666, accessed 25 February 2020.

ChannelCrawler, 2019. “The YouTube channel crawler,” at https://channelcrawler.com/, accessed 27 December 2019.

D.V. Cicchetti, 1994. “Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology,” Psychological Assessment, volume 6, number 4, pp. 284–290.
doi: https://doi.org/10.1037/1040-3590.6.4.284, accessed 25 February 2020.

N. de Boer, H. Sütfeld, and J. Groshek, 2012. “Social media and personal attacks: A comparative perspective on co-creation and political advertising in presidential campaigns on YouTube,” First Monday, volume 17, number 12, at https://firstmonday.org/article/view/4211/3376, accessed 25 February 2020.
doi: https://doi.org/10.5210/fm.v17i12.4211, accessed 25 February 2020.

J.-M. Eberl, H.G. Boomgaarden, and M. Wagner, 2017. “One bias fits all? Three types of media bias and their effects on party preferences,” Communication Research, volume 44, number 8, pp. 1,125–1,148.
doi: https://doi.org/10.1177/0093650215614364, accessed 25 February 2020.

Feedspot, 2019. “Political YouTube channels,” at https://blog.feedspot.com/political_youtube_channels/, accessed 27 December 2019.

P. Ferdinand (editor), 2000. The Internet, democracy and democratization. London: Routledge.

P.S. Forscher, C.K. Lai, J.R. Axt, C.R. Ebersole, M. Herman, P.G. Devine, and B.A. Nosek, 2019. “A meta-analysis of procedures to change implicit measures,” Journal of Personality and Social Psychology, volume 117, number 3, pp. 522–559.
doi: https://doi.org/10.1037/pspa0000160, accessed 25 February 2020.

I. Gagliardone, D. Gal, T. Alves, and G. Martinez, 2015. Countering online hate speech. Paris: UNESCO Publishing, and at https://unesdoc.unesco.org/ark:/48223/pf0000233231, accessed 25 February 2020.

T. Gillespie, 2018. Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven, Conn.: Yale University Press.

A.G. Greenwald, D.E. McGhee, and J.L. Schwartz, 1998. “Measuring individual differences in implicit cognition: The implicit association test,” Journal of Personality and Social Psychology, volume 74, number 6, pp. 1,464–1,480.
doi: https://doi.org/10.1037//0022-3514.74.6.1464, accessed 25 February 2020.

G. Hussain and E.M. Saltman, 2014. Jihad trending: A comprehensive analysis of online extremism and how to counter it. London: Quilliam, at https://www.quilliaminternational.com/shop/e-publications/jihad-trending-a-comprehensive-analysis-of-online-extremism-and-how-to-counter-it-2/, accessed 25 February 2020.

M.N. Hussain, S. Tokdemir, N. Agarwal, and S. Al-Khateeb, 2018. “Analyzing disinformation and crowd manipulation tactics on YouTube,” ASONAM ’18: Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 1,092–1,095.

International Telecommunication Union (ITU), 2019. “World telecommunication/ICT indicators database online 2019,” at https://www.itu.int/en/ITU-D/Statistics/Pages/publications/wtid.aspx, accessed 25 February 2020.

J.M. Kayany, 1998. “Contexts of uninhibited online behavior: Flaming in social newsgroups on Usenet,” Journal of the American Society for Information Science, volume 49, number 12, pp. 1,135–1,141.

L. Knuttila, 2011. “User unknown: 4chan, anonymity and contingency,” First Monday, volume 16, number 10, at https://firstmonday.org/article/view/3665/3055, accessed 25 February 2020.
doi: https://doi.org/10.5210/fm.v16i10.3665, accessed 25 February 2020.

S.H. Lee, P.-J. Kim, and H. Jeong, 2006. “Statistical properties of sampled networks,” Physical Review E, volume 73, number 1, 016102.
doi: https://doi.org/10.1103/PhysRevE.73.016102, accessed 25 February 2020.

B. Mandel, 2017. “The Anti-Defamation League’s sad slide into just another left-wing pressure group,” Federalist (28 July), at https://thefederalist.com/2017/07/28/anti-defamation-leagues-sad-slide-just-another-left-wing-pressure-group/, accessed 27 December 2019.

P. Martineau, 2019. “YouTube removes more videos but still misses a lot of hate,” Wired (4 September), at https://www.wired.com/story/youtube-removes-videos-misses-hate/, accessed 27 December 2019.

P.J. Moor, A. Heuvelman, and R. Verleur, 2010. “Flaming on YouTube,” Computers in Human Behavior, volume 26, number 6, pp. 1,536–1,546.
doi: https://doi.org/10.1016/j.chb.2010.05.023, accessed 25 February 2020.

K. Munger and J. Phillips, 2019. “A supply and demand framework for YouTube politics,” at https://osf.io/73jys/, accessed 25 February 2020.

L. Munn, 2019. “Alt-right pipeline: Individual journeys to extremism online,” First Monday, volume 24, number 6, at https://firstmonday.org/article/view/10108/7920, accessed 25 February 2020.
doi: https://doi.org/10.5210/fm.v24i6.10108, accessed 25 February 2020.

A. Nagle, 2017. Kill all normies: Online culture wars from 4chan and Tumblr to Trump and the alt-right. Alresford: John Hunt.

R. Ottoni, E. Cunha, G. Magno, P. Bernardina, W. Meira, Jr., and V. Almeida, 2018. “Analyzing right-wing YouTube channels: Hate, violence and discrimination,” WebSci ’18: Proceedings of the Tenth ACM Conference on Web Science, pp. 323–332.
doi: https://doi.org/10.1145/3201064.3201081, accessed 25 February 2020.

B. Pfaffenberger, 1996. “‘If I want it, it’s ok’: Usenet and the (outer) limits of free speech,” Information Society, volume 12, number 4, pp. 365–386.
doi: https://doi.org/10.1080/019722496129350, accessed 25 February 2020.

F.N. Ribeiro, L. Henrique, F. Benevenuto, A. Chakraborty, J. Kulshrestha, M. Babaei, and K.P. Gummadi, 2018. “Media bias monitor: Quantifying biases of social media news outlets at large-scale,” Twelfth International AAAI Conference on Web and Social Media, at https://people.mpi-sws.org/~gummadi/papers/ribiero_bias_monitor_ICWSM18.pdf, accessed 25 February 2020.

M.H. Ribeiro, R. Ottoni, R. West, V.A.F. Almeida, and W. Meira, 2020. “Auditing radicalization pathways on YouTube,” FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 131–141.
doi: https://doi.org/10.1145/3351095.3372879, accessed 25 February 2020.

K. Roose, 2019. “The making of a YouTube radical,” New York Times (8 June), at https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html, accessed 27 December 2019.

J.B. Schmitt, D. Rieger, O. Rutkowski, and J. Ernst, 2018. “Counter-messages as prevention or promotion of extremism?! The potential role of YouTube: Recommendation algorithms,” Journal of Communication, volume 68, number 4, pp. 780–808.
doi: https://doi.org/10.1093/joc/jqy029, accessed 25 February 2020.

Q. Shen, M.M. Yoder, Y. Jo, and C.P. Rosé, 2018. “Perceptions of censorship and moderation bias in political debate forums,” Proceedings of the Twelfth International AAAI Conference on Web and Social Media.

SocialBlade, 2019. “Top 25 YouTube users tagged with politics sorted by video views,” at https://socialblade.com/youtube/top/tag/politics/videoviews, accessed 27 December 2019.

A. Sureka, P. Kumaraguru, A. Goyal, and S. Chhabra, 2010. “Mining YouTube to discover extremist videos, users and hidden communities,” In: P.-J. Cheng, M.-Y. Kan, W. Lam, and P. Nakov (editors). Information retrieval technology. Lecture Notes in Computer Science, volume 6458. Berlin: Springer, pp. 13–24.
doi: https://doi.org/10.1007/978-3-642-17187-1_2, accessed 25 February 2020.

M. Thiessen, 2018. “The Southern Poverty Law Center has lost all credibility,” Washington Post (21 June), at https://www.washingtonpost.com/opinions/the-southern-poverty-law-center-has-lost-all-credibility/2018/06/21/22ab7d60-756d-11e8-9780-b1dd6a09b549_story.html, accessed 27 December 2019.

Z. Tufekci, 2018. “YouTube, the great radicalizer,” New York Times (10 March), at https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html, accessed 27 December 2019.

J.R. Vacca (editor), 2019. Online terrorist propaganda, recruitment, and radicalization. Boca Raton, Fla.: CRC Press.
doi: https://doi.org/10.1201/9781315170251, accessed 25 February 2020.

YouTube, 2019a. “Limited features for certain videos,” at https://support.google.com/youtube/answer/7458465?hl=en, accessed 27 December 2019.

YouTube, 2019b. “Policies and safety,” at https://www.youtube.com/about/policies/#community-guidelines, accessed 27 December 2019.

Z. Zhao, L. Hong, L. Wei, J. Chen, A. Nath, S. Andrews, A. Kumthekar, M. Sathiamoorthy, X. Yi, and E. Chi, 2019. “Recommending what video to watch next: A multitask ranking system,” RecSys ’19: Proceedings of the 13th ACM Conference on Recommender Systems, pp. 43–51.
doi: https://doi.org/10.1145/3298689.3346997, accessed 25 February 2020.

 

Appendix A

A.1. Channel views and formulas

We have used several formulas to capture the flow of recommendations. The central concept in our study is the impression: an estimate of the number of times a viewer was presented with a recommendation. We count each of the top 10 recommendations shown alongside a video as an “impression”. Only YouTube knows the true impression counts, so we use the process described in Table 3 to create an estimate.

A.2. Tag aggregation

In order to create meaningful ideological categories, we aggregated the tags assigned to each channel. To calculate the majority view, each soft tag is assessed independently: a tag is applied to a channel only if more than half of the reviewers assigned it. Eighteen categories of soft tags, the soft tags defining left, center, and right, and the hard tags defining the media type were aggregated for the visualization and data analysis. The following list shows which tags or tag combinations were aggregated to represent an ideology, rather than just a collection of tags.
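As an illustration, the following is a minimal Python sketch of this majority-vote aggregation, assuming each reviewer supplies a set of soft tags per channel. The function name and the example tags are ours, not part of the study’s codebase.

```python
from collections import Counter

def aggregate_tags(reviewer_tags):
    """Majority-vote aggregation: a soft tag is kept only if more than half
    of the reviewers assigned it to the channel.

    reviewer_tags: list of tag sets, one set per reviewer.
    """
    n_reviewers = len(reviewer_tags)
    counts = Counter(tag for tags in reviewer_tags for tag in tags)
    return {tag for tag, c in counts.items() if c > n_reviewers / 2}

# Three hypothetical reviewers label the same channel:
print(aggregate_tags([
    {"Anti-SJW", "Libertarian"},
    {"Anti-SJW"},
    {"Anti-SJW", "Provocateur"},
]))  # {'Anti-SJW'} — only the tag chosen by more than half of the reviewers survives
```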

A.3. Hard tags

Hard tags, presented in Table 4, are tags sourced from external sources. Any combination of the following tags can be applied to a channel. Hard tags allow comparison between the categorization presented in this paper and other work, academic or otherwise, and are also used to distinguish between YouTubers and TV or other mainstream media content.

 

Table 3: Terminology.

Impressions: An estimate of the number of times a viewer was presented with a recommendation; we count each of the top 10 recommendations for a video as an “impression”. Only YouTube knows true impressions, so we estimate as follows. Consider each combination of videos (e.g., Video A to Video B); then
(A to B impressions) = (recommendations from A to B) / (total recommendations from Video A) × (A’s views) × (recommendations per video = 10)

Relevant impressions: (a channel’s relevance %) × impressions

Channel views: The total number of video views since 1 January 2018

Daily channel views: (channel views) / (days in the period videos have been recorded for the channel)

Relevant channel views: (daily channel views) × (channel relevance %)

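To show how these formulas combine, here is a minimal Python sketch that applies them to invented numbers. The function and variable names are ours for illustration only, not the study’s code.

```python
# Minimal sketch of the estimates in Table 3 (hypothetical data and names).
RECS_PER_VIDEO = 10  # each video shows roughly 10 recommendations

def impressions(recs_a_to_b, total_recs_from_a, views_a, recs_per_video=RECS_PER_VIDEO):
    """Estimated times viewers of video A were shown a recommendation to video B."""
    return (recs_a_to_b / total_recs_from_a) * views_a * recs_per_video

def relevant_impressions(impressions_count, channel_relevance):
    """Scale impressions by the share of a channel's content judged relevant."""
    return impressions_count * channel_relevance

def daily_channel_views(channel_views, days_recorded):
    """Average views per day over the recorded period."""
    return channel_views / days_recorded

# Example: 3 of the 40 scraped recommendations from video A point to video B,
# and video A has 200,000 views.
est = impressions(recs_a_to_b=3, total_recs_from_a=40, views_a=200_000)
print(est)                                   # 150000.0 estimated impressions
print(relevant_impressions(est, 0.8))        # 120000.0 if the channel is 80% relevant
print(daily_channel_views(3_650_000, 365))   # 10000.0 views per day
```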
 

A.4. Soft tags

Soft tags, as presented in Table 5, are natural categories for U.S. YouTube content. Many traditional ways of dividing politics are not natural categories that accurately describe the politics of YouTube channels. In general, YouTubers provide reaction and sense-making on other channels or on current events in the United States. We have created a list of categories that attempts to align with the stances taken by the channels more naturally, expanding the categorization beyond the left, center, and right categories.

 

Table 4: Hard tags.

Mainstream News: Reporting on newly received or noteworthy information; widely accepted and self-identified as news (even if mostly opinion). Appears in either https://www.adfontesmedia.com or https://mediabiasfactcheck.com. To be tagged, a channel should have over 30% focus on politics and culture. Examples: Fox News, BuzzFeed News.

TV: Content originally created for broadcast TV or cable. Examples: CNN, Vice.

Ribeiro, et al.’s alt-lite, alt-right, IDW: As listed in “Auditing radicalization pathways on YouTube” (Ribeiro, et al., 2019).

 

Each tag needs to engage in some way with the current meta-discussion about YouTube’s influence on politics. Our list of categories intends to cover major cultural topics and label channels to the best of our abilities. We have tried to find specific positions that could be mixed and aggregated in order to create categories that represent ideologies.

 

Table 5: Soft tags.
Conspiracy: A channel that regularly promotes a variety of conspiracy theories. A conspiracy theory explains an event or circumstance as the result of a secret plot that is not widely accepted to be true (even though sometimes it is).
Example conspiracy theories:
  • Moon landings were faked
  • QAnon & Pizzagate
  • Trump colluding with Russia to win the election
Examples: X22Report, The Next News Network.

Libertarian: A political philosophy that has liberty as its main principle. Generally skeptical of authority and state power (e.g., regulation, taxes, government programs). Favors free markets and private ownership.
Note: To tag a channel, this should be the primary driver of its politics. Does not include libertarian socialists, who are also anti-state but are anti-capitalist and promote communal living.
Examples: Reason, John Stossel, Cato Institute.

Anti-SJW: The channel has a significant focus on criticizing “social justice” (see next category), with a positive view of the marketplace of ideas and of discussing controversial topics. To tag a channel, this should be a common focus of its content. Examples: Sargon of Akkad, Tim Pool.

Social Justice: The channel promotes:
  • Identity Politics & Intersectionality — narratives of oppression through the combination of historically oppressed identities: women, non-whites, transgender people
  • Political Correctness — the restriction of ideas and words one can say in polite society
  • Social Constructionism — the idea that the differences between individuals and groups are explained entirely by the environment; for example, sex differences are caused by culture, not by biological sex
The channel content is often in reaction to Anti-SJW or conservative content rather than purely a promotion of social justice ideas.
The supporters of these content creators are active on Reddit in a subreddit called r/Breadtube, and the creators often identify with this label. This tag only includes breadtubers if their content criticizes Anti-SJW content (promoting socialism is its own, separate tag).
Examples: Peter Coffin, hbomberguy.

White Identitarian: Identifies with, or is proud of, the superiority of “whites” and Western civilization. An example of identifying with “Western heritage” would be to refer to the Sistine Chapel or Bach as “our culture.”
Often will promote:
  • An ethnostate where residence or citizenship would be limited to “whites”, or a type of nationalism that seeks to maintain a white national identity (white nationalism)
  • A historical narrative focused on the “white” lineage and its superiority
  • Essentialist concepts of racial differences
The content creators are very concerned about whites becoming a minority population in the U.S. and Europe (the “Great Replacement” theory).
Example: NPI/RADIX (Richard Spencer).

Educational: A channel that mainly focuses on educational material, of which over 30% is focused on making sense of culture or politics. Examples: TED, SoulPancake.

Late Night Talk Shows: A channel whose content presents humorous monologues about the daily news. Examples: Last Week Tonight, Trevor Noah.

Partisan Left: Focused on politics and exclusively critical of Republicans. Examples: The Young Turks, CNN.

Partisan Right: A channel mainly focused on politics and exclusively critical of Democrats. Must support Trump and would agree with the statement: “Democratic policies threaten the nation.” Examples: Fox News, Candace Owens.

Anti-theist: Self-identified atheists who are also actively critical of religion; also called New Atheists or Street Epistemologists. Usually combined with an interest in philosophy. Examples: CosmicSkeptic, Matt Dillahunty.

Religious Conservative: A channel with a focus on promoting Christianity or Judaism in the context of politics and culture. Examples: Ben Shapiro, PragerU.

Socialist (Anti-Capitalist): Focuses on the problems of capitalism and endorses the view that capitalism is the source of most problems in society.
Critiques of more specific aspects of capitalism (e.g., promotion of free healthcare, a large welfare system, or public housing) do not qualify for this tag.
Promotes alternatives to capitalism, usually some form of either social anarchism (stateless egalitarian communities) or Marxism (nationalized production and a way of viewing society through class relations and social conflict).
Examples: Richard Wolff, NonCompete.

Revolutionary: Endorses the overthrow of the current political system. For example, many Marxists and ethno-nationalists are revolutionaries because they want to overthrow the current system and accept the consequences. Examples: Libertarian Socialist Rants, Jason Unruhe.

Provocateur: Enjoys offending and receiving any kind of attention (positive or negative). Takes extreme positions or frequently breaks cultural taboos. Often it is unclear whether they are joking or serious. Examples: StevenCrowder, MILO.

MRA (Men’s Rights Activist): Focuses on advocating for rights for men. Sees men as the oppressed sex and focuses on examples where men are currently oppressed. Incels, who identify as victims of sex inequality, are also included in this category. Example: Karen Straughan.

Missing Link Media: Channels funded by companies or venture capital, but not large enough to be considered “mainstream.” They are generally accepted as more credible than independent YouTube content. Examples: Vox, NowThis News.

State Funded: Channels funded by governments. Examples: PBS NewsHour, Al Jazeera, RT.

Anti-Whiteness: A subset of Social Justice that, in addition to intersectional beliefs about race, has a significant portion of content that essentializes race and disparages “whites” as a group. The channel should match most of the following:
  • Negative generalizations about “whites”, e.g., “White folks are unemotional, they hardly even cry at funerals” (How To Play The Game w/WS 5 Daily Routines)
  • Use of the word “whiteness” as a slur or an evil force, e.g., “I try to be less white” (Robin DiAngelo)
  • Simplistic narratives about American history, in which the most important story is of slavery and racism
  • Diluting terms like racism or white supremacy so that they include most Americans while keeping the stigma and power of the word
  • Content exclusively framing current events as racial oppression, usually in the form of police violence against blacks or x-while-black incidents (e.g., swimming while black, walking while black)
Example: African Diaspora News Channel.

 

Our guiding principle is that, in order to apply one of these tags, one should be able to judge the channel by its content alone. It is important not to rely on outside judgments about the channel’s content and to interpret the content with full context: there should be no mind-reading. There should also be enough channels per category; if a category is too niche, it is excluded, unless it is essential for the radicalization pathway theory.

 

Appendix B: Detailed algorithmic advantages and disadvantages

We discuss algorithmic advantages and disadvantages at a higher level in Section 4. This appendix presents two additional figures that show a breakdown of recommendation algorithm traffic category by category.

First, Figure 12 presents the relative proportion of recommendations between groups. The diagonal cutting across the chart shows the percentage of intra-category recommendations, i.e., the percentage of recommendations directed back to the same category. Lower percentages on this diagonal indicate that the majority of the traffic is directed outwards from the category. The other cells show the percentage of recommendations each group directs towards the other categories. For example, if one views a video that belongs to the Provocative Anti-SJW category, the bulk of the recommendations will suggest videos belonging to either Partisan Right or non-political channels.
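As a hypothetical illustration of how such a matrix is read, the sketch below row-normalizes invented impression counts so that each row sums to 100 percent; the category names and numbers are made up for illustration and do not reproduce the study’s data.

```python
import numpy as np

# Rows: the category a viewer starts in; columns: where the recommendations point.
categories = ["Partisan Right", "Provocative Anti-SJW", "Non-political"]
impressions = np.array([
    [5_000, 1_000, 4_000],   # from Partisan Right
    [3_000,   500, 6_500],   # from Provocative Anti-SJW
    [  200,   100, 9_700],   # from non-political channels
], dtype=float)

# Normalize each row so its cells express percentages of that category's traffic.
row_pct = 100 * impressions / impressions.sum(axis=1, keepdims=True)

# The diagonal holds intra-category percentages; off-diagonal cells show where
# the remaining traffic flows.
for cat, row in zip(categories, row_pct):
    print(cat, np.round(row, 1))
```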

 

 
Figure 12: Cross-category and intra-category recommendations.
 

The non-political channels in this chart are channels that fall outside our labeled data categories. Figure 12 illustrates that these channels are recommended in large numbers for categories on the fringes, such as the White Identitarian and MRA channels, directing traffic towards less contentious material.

Figure 13 presents in more detail the advantages and disadvantages each group has due to the recommendation system. The figure compares the daily net flow of recommendations for each group. The categories in the figure are ordered by their algorithmic advantage: the most advantaged groups are at the top and the least advantaged groups are at the bottom. Categories in the darkest shades of blue are the most advantaged, whereas categories in the darker shades of red are the least advantaged.
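The net-flow idea can be illustrated with a short sketch: for each category, subtract the cross-category impressions it gives from those it receives. The matrix and category names below are invented for illustration and do not reproduce the study’s figures.

```python
import numpy as np

# Hypothetical impression counts: rows give, columns receive.
categories = ["Partisan Left", "Partisan Right", "White Identitarian"]
impressions = np.array([
    [8_000, 1_500,    50],   # from Partisan Left
    [2_500, 6_000,   100],   # from Partisan Right
    [  400,   700, 1_000],   # from White Identitarian
], dtype=float)

received = impressions.sum(axis=0) - np.diag(impressions)  # incoming, excluding self
given = impressions.sum(axis=1) - np.diag(impressions)     # outgoing, excluding self

# A positive net flow means the algorithm sends the category more traffic than
# it sends out, i.e., the category is advantaged.
for cat, net in zip(categories, received - given):
    print(f"{cat}: {net:+,.0f}")
```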

 

 
Figure 13: Algorithmic advantages/disadvantages in recommendation impressions.
 

Categories in grey are also at a disadvantage, but to a lesser extent than the categories in red. The small arrows in the figure point towards the category that benefits from the recommendation algorithm, i.e., towards the group that receives more recommendations than it gives.

 


Editorial history

Received 27 December 2019; accepted 25 February 2020.


Creative Commons License
This paper is licensed under a Creative Commons Attribution 4.0 International License.

Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization
by Mark Ledwich and Anna Zaitsev.
First Monday, Volume 25, Number 3 - 2 March 2020
https://firstmonday.org/ojs/index.php/fm/article/download/10419/9404
doi: http://dx.doi.org/10.5210/fm.v25i3.10419