Content regulation and censorship of social media platforms are increasingly discussed by governments and the platforms themselves. To date, there has been little data-driven analysis of how removing content deemed inappropriate affects online user behavior. We therefore compared Twitter — a popular social media platform that occasionally removes content in violation of its Terms of Service — to Gab — a platform that markets itself as completely unregulated. Launched in mid-2016, Gab is, in practice, dominated by individuals who associate with the “alt-right” political movement in the United States. Despite its billing as “The Free Speech Social Network,” Gab displays a more extreme social hierarchy and elitism than Twitter. Although the framing of the site welcomes all people, Gab users’ content is more homogeneous: they preferentially share material from sites traditionally associated with the extremes of American political discourse, especially the far right. Furthermore, many of these sites are associated with state-sponsored propaganda from foreign governments. Finally, we discovered a significant presence of German-language posts on Gab, with several topics focusing on German domestic politics, yet sharing significant amounts of content from U.S. and Russian sources. These results indicate possible emergent linkages between European and American far-right political movements. Implications for regulation of social media platforms are discussed.
Widespread hate speech and disinformation on social media platforms have been linked to violent acts (Laub, 2019) and disease outbreaks (World Health Organization, 2019), leading governments around the world to impose restrictions on what content can be shared. In some cases, calls for regulation have even come from the platforms themselves (Zuckerberg, 2019). In the United States, opponents of this regulation point to freedom of speech protections offered by the Constitution’s First Amendment, which states that “Congress shall make no law ... abridging the freedom of speech, or of the press”. Despite this debate, there has been relatively little data-driven analysis of the actual impact that regulation might have on online behaviors.
We conducted a comparative analysis of two social media platforms: Twitter and Gab. Twitter is one of the most widely-used social media services (Perrin and Anderson, 2019) and has strengthened enforcement of bans on conduct it deems hateful, including promotion of “violence against ... people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” (Twitter, n.d.) In contrast, Gab is a relatively new platform with a similar technical structure but founded with an explicit agenda of allowing unencumbered free speech to all views. Prior studies (Lima, et al., 2018; Zannettou, et al., 2018; Zhou, et al., 2018) have shown that discourse on Gab is dominated by the “alt-right” — a new movement of the far right with anti-establishment positions focused on issues such as women’s rights, race, and religion, and with content frequently perceived as misogynistic, white supremacist, anti-Semitic, and Islamophobic. These positions entail hate speech that might incite imminent lawless action; indeed, the site drew widespread attention in October 2018 when it was reported that the alleged shooter in the Pittsburgh Tree of Life Synagogue attack posted to Gab shortly before the shooting began (Roose, 2018).
We explored the extent to which the dynamics of the Gab social network may lead some content to be amplified in a manner that privileges specific opinions over others, thus creating the false appearance that fringe perspectives may, in fact, be mainstream. Specifically, we report the results of a retrospective observational study comparing 15 million messages from the Gab social media platform to an equivalent sample from Twitter. We compared the extent to which each of these platforms is associated with a diversity of perspectives, as measured by the extent to which a small number of influencers drive the content shared. In contradistinction to its emphasis on free speech, we found that a smaller number of users controlled a larger share of the content on Gab compared to Twitter.
Beyond the amplification enabled by online social dynamics, recent events have illustrated that social media platforms, such as Twitter, may be subject to systematic distortions of public opinion from networks of automated online accounts (Subrahmanian, et al., 2016; Varol, et al., 2016; Davis, et al., 2016), including those operated by state-sponsored “trolls” (Broniatowski, et al., 2018; Diehl, et al., 2019; Zelenkauskaite and Balduccini, 2017). We therefore compared the provenance of social media posts from Twitter and Gab, with a specific eye toward identifying content from non-U.S. state-sponsored sources. Specifically, content from Russian state-sponsored sources seems to be more widely-shared on Gab than on Twitter.
Finally, as a proxy for cultural diversity, we aimed to better understand the extent to which linguistic diversity was reflected on each platform. We therefore explored the proportions of users representing different languages on each platform, and whether common content was shared across groups. In the process, we identified a tightly-clustered group of German accounts primarily associated with far-right content in that country, but also sharing state-sponsored content from U.S. and Russian sources.
We used the dataset described in Zhou, et al. (2018). We briefly summarize the dataset here and refer the reader there for additional details. The dataset contains 16,771,455 messages from 146,188 users posted from 10 August 2016 to 23 February 2018. Since Gab uses a sequential message id system, messages were obtained by downloading every message starting from 1 until the data collection ended (on 23 February 2018). Each message contains several metadata fields, including message id, created time, repost status (i.e., retweet), the message text, and some author information.
Many crawled ids did not have a message, perhaps because the messages were deleted. We found a large number of missing messages, with peaks in the number of missing messages around December 2016, August 2017, and September 2017. This may suggest that Gab removed certain accounts or specific messages. Other studies used similar procedures to crawl the platform (Lima, et al., 2018; Zannettou, et al., 2018). Additionally, pushshift.io recently released a Gab corpus (pushshift.io, 2019).
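Because Gab assigns sequential message ids, deleted content shows up as gaps in the crawled id sequence. A minimal sketch of how such gaps can be located (function and variable names are our own, not from the original crawler):

```python
from typing import Iterable, List, Tuple

def find_missing_ranges(seen_ids: Iterable[int], first_id: int, last_id: int) -> List[Tuple[int, int]]:
    """Return (start, end) ranges of message ids that were never returned
    by the crawler, i.e., candidates for deleted messages or accounts."""
    seen = set(seen_ids)
    gaps = []
    start = None
    for i in range(first_id, last_id + 1):
        if i not in seen:
            if start is None:
                start = i          # open a new gap
        elif start is not None:
            gaps.append((start, i - 1))  # close the current gap
            start = None
    if start is not None:
        gaps.append((start, last_id))
    return gaps

# Example: ids 1..10 crawled, but ids 3-4 and 8 came back empty.
print(find_missing_ranges([1, 2, 5, 6, 7, 9, 10], 1, 10))  # [(3, 4), (8, 8)]
```

Spikes in the sizes of such ranges over time are what suggest batch removals of accounts or messages.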
In addition to messages, we also downloaded lists of accounts followed-by, and following, every Gab user in our dataset. We utilize these lists to conduct a social network analysis. We measure message shares based on the repost_count metadata field. We compared these messages to a parallel set of Twitter data gathered from the first day of every month in the same time span.
We compared properties of the Gab dataset to Twitter, perhaps the social media platform closest to Gab in design and function. We sampled Twitter data from the same time span as the Gab data. For efficiency, we sampled tweets from the first day of each month within this time period from the one percent Twitter API, giving us 86,568,694 tweets from 25,837,947 accounts. To obtain up-to-date estimates of retweets for these messages, we redownloaded 6,322,088 tweets from this sample from the Twitter API. We measured retweets using the retweet_status.retweet metadata field. To compare Gab’s user network with Twitter’s, we obtained follower counts of each unique Twitter user in our dataset, using the user.followers metadata field. Since our sample includes only users who tweeted at least once, our Twitter data collection shares this bias with Gab.
Gab displays less conversation, and more elitism, than Twitter
Most social networks possess stark inequalities, with a small number of users wielding a disproportionately large amount of influence. We conducted an analysis to determine if Gab, a platform which claims that it is designed to promote free speech, is subject to the same dynamics.
Who generates Gab content?
Most Gab users do not post messages regularly, and the discrepancy between our user estimates and those reported by Gab suggests that around half may not post at all. Fifty-seven percent of Gab users had between one and four messages during our sample’s time frame. In contrast, only two percent of users with at least one post posted more than 1,000 messages. As with other social networks, a relatively small number of users generate most of the content of Gab, with most users consuming or reacting to this content. However, compared to Twitter, users on Gab seem to use the platform as more of a broadcast medium rather than engaging in conversation, as reflected by fewer reposts, links, and especially @mentions (Table 1).
We examined whether individual messages were equally likely to be reposted across platforms. The distribution of reposts on social networks such as Gab and Twitter is known to approximate a power law distribution (Adamic, et al., 2001). In practice, this means that a very small number of messages get the vast majority of reposts (or retweets), whereas most messages get very little attention. Social networks whose power law exponents have large magnitudes exhibit more strict social hierarchy, with a smaller number of messages garnering a larger proportion of the total attention.
One might think a social network that promotes equality of address should be less hierarchical (although see Graham and Wright, 2014, for an extensive discussion of this point). We therefore fit power law distributions to data from both Twitter and Gab. Specifically, we calculated the slopes of regression lines that were fit to log-log plots of raw counts of the number of messages that were shared n times for 1 < n < 100 (Figure 2 top). We next compared these slopes as a measure of hierarchy in each network. Specifically, a steeper slope indicates that there are large differences in attention given to the least popular and most popular messages. Results demonstrate that Gab messages show disparities between popular and unpopular messages that are more than twice as large as those on Twitter: Gab’s slope is -2.25 (the dashed green line in Figure 2 top), compared to a slope of -1.05 for Twitter (the solid blue line; prior work examining Twitter data found even less extreme slopes between 0.6–0.7 [Lu, et al., 2014]). Furthermore, almost 93 percent of Gab posts receive zero reposts. In contrast, only 52 percent of Twitter messages receive zero retweets.
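The fitting procedure can be sketched as follows: count how many messages were shared n times, and take the least-squares slope of those counts against n on log-log axes. A minimal, self-contained illustration (not the authors’ original code); the synthetic data below are constructed to follow an approximate n^-2 law, so the recovered slope should come out near -2:

```python
import math
from collections import Counter

def loglog_slope(share_counts, n_min=1, n_max=100):
    """Least-squares slope of log(number of messages shared n times)
    versus log(n), restricted to n_min < n < n_max."""
    freq = Counter(share_counts)
    pts = [(math.log(n), math.log(c)) for n, c in freq.items() if n_min < n < n_max]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

# Synthetic data: the number of messages shared n times falls off as n**-2,
# so the fitted slope should be close to -2.
data = [n for n in range(2, 31) for _ in range(int(10000 * n ** -2))]
print(round(loglog_slope(data), 2))
```

A steeper (more negative) slope, as observed for Gab, means attention falls off more sharply from the most popular messages to the rest.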
We next performed a similar analysis for the number of followers for each account. Very few accounts on Gab have a large number of followers, as is typical in social networks. On Twitter, the modal account has zero followers, meaning that there are more accounts that are simply ignored or engage very little compared to other types of accounts. On Gab, the modal account has four followers, a high number by comparison (Figure 2 bottom). However, the number of accounts with more than four followers declines almost three times as rapidly (slopes of -1.01 and -0.35 for Gab and Twitter, respectively, when plotted on a log-log axis). Thus, compared to Twitter, fewer Gab accounts have large numbers of followers. This means that fewer individuals command more attention. Despite its billing as a “Free Speech Social Network,” Gab exhibits a more extreme social hierarchy — and therefore more elitism — when compared to Twitter.
Gab users’ discourse is more homogeneous; focuses on politics, race, religion
What do users talk about?
We next turn to the content of Gab posts: what do users on the platform talk about? To characterize a large corpus of text, we turn to topic models (Blei, 2012), a popular machine learning tool for summarizing common themes in a text corpus. Topic models are probabilistic models of text that assume a generative process of observed words based on unobserved topics. The output of a topic model is a set of lists (probability distributions) over words, where each list corresponds to a topic. The model uses co-occurrence statistics — which words most often occur together in a message — to infer word groupings into topics. We can then examine each word list, and the messages most closely associated with each topic, to characterize common themes on the platform. We favor topic models since they are data-driven, discovering themes in a corpus without any a priori list of topics.
We begin with the popular Latent Dirichlet Allocation model (Blei, et al., 2003) and utilize model changes that allow for creating structured priors over the topics (Benton, et al., 2016; Paul and Dredze, 2015). We trained a model with 50 topics and used hyper-parameter optimization. We use all of the messages and treat one single message as a document. We reviewed the resulting model output and assigned a label to each topic.
As is common in topic model output, we identified several distinct topics that corresponded to a single theme. We grouped together similar topics to produce 33 meaningful topic groups. We computed the number of messages associated with each group to obtain the prevalence of each topic. Furthermore, we labeled each topic group as either related to politics, or not related to politics.
Table 2 shows these topics. Political topics comprised 56 percent of all Gab messages. Clearly, the platform is heavily focused on political content. Additionally, several of the topics that are not necessarily political do take on a political context on Gab, e.g., “Men and women.”
Finally, we identified topics that were prevalent specifically during the two periods when sign-ups surged on the platform. This may help us to better understand factors that drive users to the platform. In November 2016, the top topics were “2016 Election and contemporary political debates”, “MAGA”, “sex scandals”, and “Christmas and New Year.” This suggests that sign-ups were driven by the U.S. presidential election results. During the August and September 2017 surge, top topics included “Right and left”, “terrorism”, “state related issues” (related to the storms in Texas and Florida) and “Ideology, religion, and race.” These correlate with major events happening at that time, including the protests in Charlottesville and the hurricanes.
What is the most shared content?
Another way to understand content on social media platforms is to examine what type of posts get the most attention. As discussed above, a small set of users can drive a lot of the conversation. By comparing the most shared messages on both Gab and Twitter, we can further understand differences in content and conversations between the two.
For each of our two collections — Gab and Twitter — we identified the most shared messages. Since we have a large percentage of all Gab data, these likely represent the 25 most shared messages of all time (up to the point we ceased data collection). For Twitter, our posts are messages that were retweeted in our sample time period, which are not necessarily the most shared in the history of Twitter.
On Twitter, the most retweeted posts are largely from celebrities or other entertainers, such as Harry Styles, Ellen DeGeneres, Zayn Malik, Louis Tomlinson, and the Korean band BTS. Several are also from political figures, including two from Hillary Clinton, one from Barack Obama (from 2017 after his presidency ended), and one from Donald Trump. Examining further down the list to the 100 most retweeted posts, 78 were from celebrities.
In contrast, Gab’s top posts are overwhelmingly about politics and racial issues. Nine of the top 50 shared messages are about ideology, religion, or race, two are about Donald Trump, and eight are specifically about Twitter’s purported censorship of right-wing views and why Gab is a superior platform. There are also posts about gun rights, Muslims, criticisms of European leaders like Theresa May and Angela Merkel, the Anti-Defamation League, and school shootings. This supports our observation that Gab is far more focused on political topics, and specifically from the far right.
Gab users’ shares are more politically homogeneous & contain more state-sponsored content
A major purpose of Twitter is to share information, often in the forms of links to external content, such as news articles. Table 1 demonstrates that Gab has a similar function; 36 percent of Gab messages contained links. To understand the nature of the information being shared, we analyzed the most frequently shared URL domains on Gab compared to Twitter.
We extracted the link appearing in each Gab and Twitter message in our collection, and extracted the domain name, e.g., from www.yahoo.com/news/MAGA-news-story we extracted yahoo.com. We then characterized each of these domains as to whether or not they were a news site, and the nature of the site based on information about the site contained in Wikipedia.
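The domain-extraction step can be approximated with the standard library. Stripping a leading “www.” is a simplification we adopt for illustration; a production system would consult the Public Suffix List to handle multi-part suffixes such as .co.uk:

```python
from urllib.parse import urlparse

def extract_domain(url: str) -> str:
    """Reduce a URL to its registered domain, e.g.
    'https://www.yahoo.com/news/story' -> 'yahoo.com'."""
    host = urlparse(url).netloc.lower()
    # Simple heuristic: drop a leading 'www.' so that www.yahoo.com
    # and yahoo.com count as the same domain.
    if host.startswith("www."):
        host = host[4:]
    return host

print(extract_domain("https://www.yahoo.com/news/MAGA-news-story"))  # yahoo.com
print(extract_domain("http://rt.com/article"))                       # rt.com
```

Counting messages per extracted domain then yields the ranked lists compared in Table 3.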
Of the 200 most commonly shared domains on Gab, 74 percent are news Web sites. Most of these were far-right news websites according to Wikipedia. Web sites associated with conspiracy theories and state-sponsored messaging (e.g., Russian government sites) were also observed. In comparison, the types of links shared on Twitter are more diverse. Table 3 shows part of the top 50 domains shared on Gab and their percentages of all URLs. We highlight those domains that are common on Gab but relatively uncommon on Twitter. These include links to other Gab posts (gab.ai) and many right-wing news Web sites.
Gab is more linguistically homogeneous
Twitter is a diverse platform used in many countries around the world. As a result, it contains tweets from over 70 languages (@tm, 2015). In contrast, the popular perception of Gab is that it is a platform focused on the United States.
In order to assess the accuracy of this perception, we measured the languages used on Gab as reflected in our collected messages. We used an automatic language identification tool (Lui and Baldwin, 2012) which uses a statistical natural language processing algorithm to identify the language associated with a message. The tool supports the identification of 97 different languages. Table 4 shows the top five languages used on Gab, which account for nearly 95 percent of all messages. Note that since language identification is an automated process, it often produces incorrect identifications for a small number of messages, which can include identification of non-standard languages. We then identified the primary language for each user as the language most often used in their messages.
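The per-user aggregation step is a simple majority vote over per-message predictions. A sketch, with hard-coded language labels standing in for the classifier’s output (in practice these would come from a tool such as langid.py):

```python
from collections import Counter
from typing import Dict, List

def primary_language(user_messages: Dict[str, List[str]]) -> Dict[str, str]:
    """Assign each user the language most often predicted for their
    messages (Counter breaks ties by first-seen order)."""
    return {
        user: Counter(langs).most_common(1)[0][0]
        for user, langs in user_messages.items()
    }

# Hypothetical per-message language labels for two users.
labels = {"u1": ["en", "en", "de"], "u2": ["de", "de", "en", "de"]}
print(primary_language(labels))  # {'u1': 'en', 'u2': 'de'}
```

This per-user label is what underlies the counts of English- versus German-dominant accounts reported below.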
Largest non-English group exhibits foreign content, signs of BotNet activity
We found that 89 percent of all Gab messages were written in English, with the remaining 11 percent written in other languages. We observed that German messages constituted roughly half of all non-English messages. Similarly, users with German as their dominant language made up the largest non-English group of users. A manual examination of these messages confirmed a sizable German-speaking user community. We sought to better understand this community.
Compared to the modal English-speaking Gab user, who had four followers, German-speaking Gab users had considerably more followers (approximately 40; Figure 2 shows the distribution of followers for each German-language account). Furthermore, most of these users primarily follow, or are followed by, other German-speaking users (Figure 3). This suggests that German-speaking accounts are structured as a cohesive unit, largely following one another. This may be driven by a small sub-group seeking to create tighter in-group connections or may suggest the presence of a coordinated network of (possibly automated) accounts.
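One simple way to quantify this cohesion is the fraction of a group’s outgoing follow edges that stay inside the group. A sketch on a toy network (account names and the metric itself are illustrative, not the paper’s exact measure):

```python
def in_group_follow_fraction(follows, group):
    """Fraction of a group's outgoing follow edges that point back
    into the same group; values near 1.0 indicate a tight cluster.
    `follows` maps each account to the set of accounts it follows."""
    inside = outside = 0
    for user in group:
        for target in follows.get(user, set()):
            if target in group:
                inside += 1
            else:
                outside += 1
    total = inside + outside
    return inside / total if total else 0.0

# Toy network: three German-language accounts mostly following each other,
# with one edge out to an English-language account e1.
follows = {"g1": {"g2", "g3"}, "g2": {"g1"}, "g3": {"g1", "e1"}}
print(in_group_follow_fraction(follows, {"g1", "g2", "g3"}))  # 0.8
```

Applied to the real follower lists, a value far above what random mixing would predict is what signals a cohesive (possibly coordinated) cluster.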
We repeated our topic model analysis on all German messages to determine which topics were most discussed within this community (Table 5). A German speaker provided us with an analysis of these topics. As with English messages, German messages about politics make up the majority of the data (59.31 percent). However, German-language accounts pay more attention to German and other European politics whereas English messages are primarily about U.S. politics. This suggests that the German messages are either from residents of Germany, or individuals with a strong interest in German politics. As with the English topics, the German political topics were associated with the German far-right.
We repeated our URL domain analysis for domains appearing in German messages. This yielded 357,469 URLs from 8,017 domains. Table 6 shows the top 20 most shared domains in German language messages. Although right-wing German domains are strongly represented, several domains are evident in both English and German sources, including those associated with the American far right (e.g., breitbart.com, ussanews.com) and pro-Russian sources (i.e., rt.com, sputniknews.com, and zerohedge.com).
Discussion and conclusion
Although the vast majority of posts on Gab — over 90 percent — garnered no attention (in the form of reposts), a small number of posts from a small number of users went viral, suggesting that discourse on Gab may be driven by a small number of elite superparticipants (Graham and Wright, 2014) who shape the alt-right agenda. The connections to German language speakers, and significant content originating from Russian state-sponsored sources seem to suggest emergent linkages between online discussions among the alt-right and various foreign entities. We did not find evidence of any other state-sponsored entities in the top 50 domains shared by Gab users.
Far from promoting “the free flow of information online”, the Gab platform seems to exhibit features associated with less information sharing compared to Twitter. Although specific types of content are not explicitly prohibited by Gab’s Terms of Service, its users do not seem to take advantage of this permissive environment to engage in deliberative discussion, as is arguably the intent of the First Amendment of the Bill of Rights (although see Ingber, 1984). Rather, users seem to primarily broadcast more to, but engage less with, one another.
Content shared by Gab users reflects this structure. A large share of the site’s users is there to engage in political discussions and to share politically oriented news, largely from outlets with connections to the alt-right (such as Breitbart) or, at minimum, from conservative-leaning sites (such as Fox News). While Gab is dominated by political topics, previous studies of Twitter have found that although tweets do include political topics, they are a small percentage compared to the wide variety of other topics discussed on the platform (Zhao, et al., 2011). This particular emphasis on sites that cater to the alt-right belies the argument of Gab’s founders that they are simply running a site for unlimited free expression with no political agenda. Although politics, race, and religion are certainly important topics in public discourse, they seem to dominate discussion on Gab to the relative exclusion of all others. Furthermore, users’ perspectives on these topics seem to be broadly consistent with those of the platform’s elites — individuals who routinely express positions associated with the American alt-right movement, white supremacy, anti-Semitism, Islamophobia, and other forms of hate speech. Ironically, state-sponsored content from regimes widely considered to be autocratic is also widely shared on Gab. Although the absence of regulation seems to have led to the creation of a “safe space” for these otherwise fringe opinions, other perspectives — even those on the opposite extremes — do not appear in any meaningful quantities. Thus, the data are consistent with the appearance of a strict social hierarchy that polices content through informal, rather than formal, means.
Absent explicit regulation, one might still expect the existence of elites; indeed, power-law dynamics are widespread across social networks (Adamic, et al., 2001). However, on other platforms, these elites act to promote deliberation and participation by users from a wide range of perspectives (Graham and Wright, 2014). On Gab, the data suggest the opposite — Gab’s structure reflects significantly more elitism, with a smaller number of super-users controlling a larger share of the “wealth” — i.e., attentional resources — of the platform’s constituency, with more homogeneity in the topics discussed. Rather than a free “marketplace of ideas”, Gab’s structure is that of an oligopoly.
Gab’s lack of linguistic diversity reflects its user base and power structure. Alt-right topics are primarily of interest to citizens of the United States; thus, it is not surprising that the vast majority of Gab’s messages are English. However, we were surprised to find a large quantity of German-language content which, upon further analysis, reflects prior findings regarding the heavy usage of social media platforms by far-right German political parties (Morstatter, et al., 2018). This prior work has also found that these parties use significant automation to amplify their messages, giving the false impression of widespread support for their positions. Indeed, the abnormally high clustering among German-language accounts may be indicative of coordinated automated activity — i.e., a “botnet” (Feily, et al., 2009) — designed to perform this amplification. Furthermore, although many German-language posts focus on domestic politics, the most popular domain — breitbart.com — originates in the U.S. and is generally associated with U.S. domestic political issues. Additionally, Russian state-sponsored English language sites are shared heavily by both U.S. and German users, suggesting potential transnational coordination between far-right groups (see also Diehl, et al., 2019; Morstatter, et al., 2018). Both alt-right and German users seem to share messages from Russian sources at rates higher than would be expected on Twitter. The alt-right is clearly spreading across Europe in many ways, including through Steve Bannon’s efforts to propagate his ideas in other nations (such as Italy).
Our analyses have implications for attempts by governments to regulate social media platforms, and by platforms to regulate their own content. Although more research is needed to determine the most effective means to ensure a vibrant, yet safe, exchange of ideas online, our data indicate that the complete absence of regulation cannot be expected to result in a fairer and more inclusive discourse, nor in a free “marketplace of ideas”. Given the recently observed associations between hate speech on such platforms and mass violence events (Laub, 2019), some regulation may be needed to prevent the emergence of imminent lawless action.
Specifically, our results provide insights into the failure modes of Gab as an unregulated platform designed to promote free speech. Platforms such as Twitter and Gab both rely on algorithms to promote content that is likely to be of interest to users. These algorithms are designed to maximize value for both the platforms and their users: platforms benefit from the promotion of viral content because attention drives ad revenue, whereas celebrity users on each platform benefit from these algorithms because their content creates a self-reinforcing cycle — they are popular because of their celebrity, and their celebrity leads their content to be popular. One possible target for regulation may be to encourage redesign of these algorithms in a manner that can balance this cycle. This would not in any way ban content but would rather allow a more diverse group of voices to be heard.
These same algorithms allow automated content to spread quickly because a network of coordinated users could create the appearance of popularity. A second regulatory approach might therefore be to remove automated content from promotion algorithms, e.g., by requiring that users provide some evidence of whether accounts are operated by humans or automatons.
Gab was formed in response to the perceived crackdown on hateful content by Twitter. Thus, our data add to the growing evidence that non-comprehensive action taken by a small number of social media platforms may, in fact, trigger the emergence of “dark pools” of hate speech, exhibiting “global hate highways” between otherwise disconnected groups (Johnson, et al., 2018). Our study thus emphasizes the need for coordination between social media platforms, and not just between governments. If users’ content is banned from one platform, these users may simply migrate to another platform or provide links to the second platform from the first platform. Indeed, one of the top domains shared on Gab was twitter.com, suggesting that these are not at all separate ecosystems. Rather than regulating content, governments may have a role in regulating how platforms interact with one another. For example, one potential approach may focus on incentivizing platforms to reach a voluntary consensus on how to effectively self-regulate, thus encouraging them to exercise a form of corporate social responsibility (Sandoval, 2014).
In conclusion, our work provides insight into the specific ways that totally unregulated platforms evolve, and points the way for future research to explore specific problems that regulation should aim to solve — namely, the “rich get richer” dynamics of social networks; the presence of artificial amplification; the formation of “dark pools” of users whose content is banned from other platforms; and the prevalence of state-sponsored trolls that seek to use democratic values against those who believe in them.
About the authors
Yuchen Zhou is a research assistant in the Center for Language and Speech Processing at Johns Hopkins University in Baltimore, Maryland.
E-mail: yzhou111 [at] jhu [dot] edu
Mark Dredze is the John C. Malone Associate Professor in the Center for Language and Speech Processing in the Department of Computer Science at Johns Hopkins University in Baltimore, Maryland.
E-mail: mdredze [at] cs [dot] jhu [dot] edu
David A. Broniatowski is Associate Professor in the Department of Engineering Management and Systems Engineering at George Washington University in Washington, D.C.
E-mail: broniatowski [at] gwu [dot] edu
William D. Adler is Associate Professor in the Department of Political Science at Northeastern Illinois University in Chicago, Illinois.
E-mail: williamadler [at] gmail [dot] com
Lada A. Adamic, Rajan M. Lukose, Amit R. Puniyani, and Bernardo A. Huberman, 2001. “Search in power-law networks,” Physical Review E, volume 64, number 4, 046135.
doi: https://doi.org/10.1103/PhysRevE.64.046135, accessed 16 August 2019.
Adrian Benton, Michael J. Paul, Braden Hancock, and Mark Dredze, 2016. “Collective supervision of topic models for predicting surveys with social media,” AAAI’16: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pp. 2,892–2,898.
David M. Blei, 2012. “Probabilistic topic models,” Communications of the ACM, volume 55, number 4, pp. 77–84.
doi: https://doi.org/10.1145/2133806.2133826, accessed 16 August 2019.
David M. Blei, Andrew Y. Ng, and Michael I. Jordan, 2003. “Latent dirichlet allocation,” Journal of Machine Learning Research, volume 3, pp. 993–1,022, and at http://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf, accessed 16 August 2019.
David A. Broniatowski, Amelia M. Jamison, SiHua Qi, Lulwah AlKulaib, Tao Chen, Adrian Benton, Sandra C. Quinn, and Mark Dredze, 2018. “Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate,” American Journal of Public Health, volume 108, number 10, pp. 1,378–1,384.
doi: https://doi.org/10.2105/AJPH.2018.304567, accessed 16 August 2019.
Clayton Allen Davis, Onur Varol, Emilio Ferrara, Alessandro Flammini, and Filippo Menczer, 2016. “BotOrNot: A system to evaluate social bots,” WWW ’16 Companion: Proceedings of the 25th International Conference Companion on World Wide Web, pp. 273–274.
doi: https://doi.org/10.1145/2872518.2889302, accessed 16 August 2019.
Jörg Diehl, Roman Lehberger, Ann-Katrin Müller, and Philipp Seibt, 2019. “Facebook frenzy: How the German right wing dominates social media,” Spiegel Online (29 April), at https://www.spiegel.de/international/germany/germany-afd-populists-dominate-on-facebook-a-1264933.html, accessed 7 June 2019.
M. Feily, A. Shahrestani, and S. Ramadass, 2009. “A survey of botnet and botnet detection,” 2009 Third International Conference on Emerging Security Information, Systems and Technologies, pp. 268–273.
doi: https://doi.org/10.1109/SECURWARE.2009.48, accessed 16 August 2019.
Todd Graham and Scott Wright, 2014. “Discursive equality and everyday talk online: The impact of ‘superparticipants’,” Journal of Computer-Mediated Communication, volume 19, number 3, pp. 625–642.
doi: https://doi.org/10.1111/jcc4.12016, accessed 16 August 2019.
Stanley Ingber, 1984. “The marketplace of ideas: A legitimizing myth,” Duke Law Journal, volume 33, number 1, at https://scholarship.law.duke.edu/dlj/vol33/iss1/1/, accessed 16 August 2019.
N.F. Johnson, R. Leahy, N. Johnson Restrepo, N. Velasquez, M. Zheng, and P. Manrique, 2018. “Social media cluster dynamics create resilient global hate highways,” arXiv (8 November), at http://arxiv.org/abs/1811.03590, accessed 15 May 2019.
Zachary Laub, 2019. “Hate speech on social media: Global comparisons,” Council on Foreign Relations (7 June), at https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons, accessed 7 June 2019.
Lucas Lima, Julio C.S. Reis, Philipe Melo, Fabricio Murai, Leandro Araújo, Pantelis Vikatos, and Fabricio Benevenuto, 2018. “Inside the right-leaning echo chambers: Characterizing Gab, an unmoderated social system,” arXiv (10 July), at https://arxiv.org/abs/1807.03688, accessed 16 August 2019.
Yao Lu, Peng Zhang, Yanan Cao, Yue Hu, and Li Guo, 2014. “On the frequency distribution of retweets,” Procedia Computer Science, volume 31, pp. 747–753.
doi: https://doi.org/10.1016/j.procs.2014.05.323, accessed 16 August 2019.
Marco Lui and Timothy Baldwin, 2012. “langid.py: An off-the-shelf language identification tool,” ACL ’12: Proceedings of the ACL 2012 System Demonstrations, pp. 25–30.
Fred Morstatter, Yunqiu Shao, Aram Galstyan, and Shanika Karunasekera, 2018. “From alt-right to alt-rechts: Twitter analysis of the 2017 German federal election,” WWW ’18 Companion: Proceedings of The Web Conference 2018, pp. 621–628.
doi: https://doi.org/10.1145/3184558.3188733, accessed 7 June 2019.
Michael J. Paul and Mark Dredze, 2015. “SPRITE: Generalizing topic models with structured priors,” Transactions of the Association for Computational Linguistics (TACL), volume 3, pp. 43–57.
doi: https://doi.org/10.1162/tacl_a_00121, accessed 7 June 2019.
Andrew Perrin and Monica Anderson, 2019. “Share of U.S. adults using social media, including Facebook, is mostly unchanged since 2018,” Pew Research Center (10 April), at https://www.pewresearch.org/fact-tank/2019/04/10/share-of-u-s-adults-using-social-media-including-facebook-is-mostly-unchanged-since-2018/, accessed 7 June 2019.
pushshift.io, 2019. “Directory contents,” at https://files.pushshift.io/gab/, accessed 7 June 2019.
Kevin Roose, 2018. “On Gab, an extremist-friendly site, Pittsburgh shooting suspect aired his hatred in full,” New York Times (28 October), at https://www.nytimes.com/2018/10/28/us/gab-robert-bowers-pittsburgh-synagogue-shootings.html, accessed 7 June 2019.
Marisol Sandoval, 2014. From corporate to social media: Critical perspectives on corporate social responsibility in media and communication industries. London: Routledge.
V.S. Subrahmanian, Amos Azaria, Skylar Durst, Vadim Kagan, Aram Galstyan, Kristina Lerman, Linhong Zhu, Emilio Ferrara, Alessandro Flammini, and Filippo Menczer, 2016. “The DARPA Twitter bot challenge,” Computer, volume 49, number 6, pp. 38–46.
doi: https://doi.org/10.1109/MC.2016.183, accessed 7 June 2019.
@tm, 2015. “Evaluating language identification performance,” Twitter (16 November), at https://blog.twitter.com/engineering/en_us/a/2015/evaluating-language-identification-performance.html, accessed 7 June 2019.
Twitter, n.d. “Hateful conduct policy,” at https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy, accessed 7 June 2019.
Onur Varol, Emilio Ferrara, Clayton A. Davis, Filippo Menczer, Alessandro Flammini, V.S. Subrahmanian, Amos Azaria, Skylar Durst, Vadim Kagan, and Aram Galstyan, 2016. “Online human-bot interactions: Detection, estimation, and characterization,” Proceedings of the Eleventh International AAAI Conference on Web and Social Media, at https://aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15587/14817, accessed 16 August 2019.
World Health Organization, 2019. “Ten health issues WHO will tackle this year,” at https://www.who.int/emergencies/ten-threats-to-global-health-in-2019, accessed 7 June 2019.
Savvas Zannettou, Barry Bradlyn, Emiliano De Cristofaro, Haewoon Kwak, Michael Sirivianos, Gianluca Stringhini, and Jeremy Blackburn, 2018. “What is Gab? A bastion of free speech or an alt-right echo chamber?” WWW ’18: Proceedings of The Web Conference 2018, pp. 1,007–1,014.
doi: https://doi.org/10.1145/3184558.3191531, accessed 16 August 2019.
Asta Zelenkauskaite and Marcello Balduccini, 2017. “‘Information warfare’ and online news commenting: Analyzing forces of social influence through location-based commenting user typology,” Social Media + Society (17 July).
doi: https://doi.org/10.1177/2056305117718468, accessed 16 August 2019.
Wayne Xin Zhao, Jing Jiang, Jianshu Weng, Jing He, Ee-Peng Lim, Hongfei Yan, and Xiaoming Li, 2011. “Comparing Twitter and traditional media using topic models,” In: Paul Clough, Colum Foley, Cathal Gurrin, Gareth J.F. Jones, Wessel Kraaij, Hyowon Lee, and Vanessa Murdock (editors). Advances in information retrieval. Lecture Notes in Computer Science, volume 6611. Berlin: Springer-Verlag, pp. 338–349.
doi: https://doi.org/10.1007/978-3-642-20161-5_34, accessed 16 August 2019.
Yuchen Zhou, Mark Dredze, David A. Broniatowski, and William Adler, 2018. “Gab: The alt-right social media platform,” at https://pdfs.semanticscholar.org/04f5/48097b166eefddef2815ccc83ca71ce09463.pdf, accessed 16 August 2019.
Mark Zuckerberg, 2019. “Mark Zuckerberg: The Internet needs new rules. Let’s start in these four areas,” Washington Post (30 March), at https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html, accessed 7 June 2019.
Received 30 April 2019; revised 30 June 2019; revised 4 August 2019; accepted 6 August 2019.
“Elites and foreign actors among the alt-right: The Gab social media platform” by Yuchen Zhou, Mark Dredze, David A. Broniatowski, and William D. Adler is licensed under a Creative Commons Attribution 4.0 International License.
Elites and foreign actors among the alt-right: The Gab social media platform
by Yuchen Zhou, Mark Dredze, David A. Broniatowski, and William D. Adler.
First Monday, Volume 24, Number 9 - 2 September 2019