
Social bots distort the 2016 U.S. Presidential election online discussion
by Alessandro Bessi and Emilio Ferrara



Abstract
Social media have been extensively praised for increasing democratic discussion on social issues related to policy and politics. However, what happens when these powerful communication tools are exploited to manipulate online discussion, to change the public perception of political entities, or even to try to affect the outcome of political elections? In this study we investigated how the presence of social media bots, algorithmically driven entities that on the surface appear as legitimate users, affects political discussion around the 2016 U.S. Presidential election. By leveraging state-of-the-art social bot detection algorithms, we uncovered a large fraction of the user population that may not be human, accounting for a significant portion of generated content (about one-fifth of the entire conversation). We inferred political partisanship from hashtag adoption, for both humans and bots, and studied spatio-temporal communication, political support dynamics, and influence mechanisms by discovering the level of network embeddedness of the bots. Our findings suggest that the presence of social media bots can negatively affect democratic political discussion rather than improve it, which in turn can potentially alter public opinion and endanger the integrity of the Presidential election.

Contents

Introduction
Methodology
Data analysis
Conclusions

 


 

Introduction

Various computational social science studies have demonstrated that social media have been extensively used to foster democratic conversation about social and political issues: from the Arab Spring (González-Bailón, et al., 2011; Howard, et al., 2011), to Occupy Wall Street (Conover, et al., 2013a; Conover, et al., 2013b) and many other civil protests (Varol, et al., 2014; González-Bailón, et al., 2013; Bastos, et al., 2014), Twitter and other social media seemed to play an instrumental role in involving the public in policy and political conversations, by collectively framing the narratives related to particular social issues, and by coordinating online and off-line activities. The use of digital media to discuss politics during election time has also been the subject of various studies, covering the last four U.S. Presidential elections (Adamic and Glance, 2005; Diakopoulos and Shamma, 2010; Bekafigo and McBride, 2013; Carlisle and Patton, 2013; DiGrazia, et al., 2013; Wang, et al., 2016), as well as elections in other countries such as Australia (Gibson and McAllister, 2006; Bruns and Burgess, 2011; Burgess and Bruns, 2012) and Norway (Enli and Skogerbø, 2013). Findings that focused on the positive effects of social media, such as increasing voter turnout (Bond, et al., 2012) or exposure to diverse political views (Bakshy, et al., 2015), contributed to the general praise of these platforms as tools to foster democracy and civil political engagement (Shirky, 2011; Loader and Mercea, 2011; Effing, et al., 2011; Tufekci and Wilson, 2012; Tufekci, 2014; Yang, et al., 2016).

However, as early as 2006, Philip Howard raised concerns regarding the possibility of manipulating public opinion and spreading political misinformation through social media (Howard, 2006). These concerns have since been substantiated by several studies (Ratkiewicz, et al., 2011a; Ratkiewicz, et al., 2011b; Metaxas and Mustafaraj, 2012; El-Khalili, 2013; Ferrara, 2015; Woolley and Howard, 2016; Shorey and Howard, 2016). Of particular concern is the fact that social media have been demonstrated to be effective at influencing individuals (Aral and Walker, 2012). One way to perform this type of manipulation is through social bots, algorithmically controlled accounts that emulate the activity of human users but operate at a much higher pace (e.g., automatically producing content or engaging in social interactions), while successfully keeping their artificial identity undisclosed (Hwang, et al., 2012; Messias, et al., 2013; Ferrara, et al., 2016).

Evidence of the adoption of social media bots to attempt to manipulate political communication dates back half a decade: during the 2010 U.S. midterm elections, social bots were employed to support some candidates and smear others, by injecting thousands of tweets pointing to Web sites with fake news (Ratkiewicz, et al., 2011a). The research community reported a similar case around the time of the 2010 Massachusetts special election (Metaxas and Mustafaraj, 2012). Campaigns of this type are sometimes referred to as astroturf or Twitter bombs. Unfortunately, most of the time it has proven impossible to determine who is behind these types of operations (Kollanyi, et al., 2016; Ferrara, et al., 2016). Governments, organizations, and other entities with sufficient resources can obtain the technological capabilities to deploy thousands of social bots and use them to their advantage, either to support or to attack particular political figures or candidates. Indeed, it has become increasingly simple to deploy social bots, so that, in some cases, no coding skills are required to set up accounts that perform simple automated activities: tech blogs often post tutorials and ready-to-go tools for these purposes [1], [2], [3]. Source code for various sophisticated social media bots can be found online as well, ready to be customized and optimized by more technically savvy users (Kollanyi, 2016). We inspected several of these readily available bots; a (non-comprehensive) list of the capabilities they provide follows: search Twitter for phrases/hashtags/keywords and automatically retweet the results; automatically reply to tweets that meet certain criteria; automatically follow any users that tweet something with a specific phrase/hashtag/keyword; automatically follow back any users that have followed the bot; automatically follow any users that follow a specified user; automatically add users tweeting about something to public lists; search Google (and other engines) for articles/news according to specific criteria and post them, or link them in automatic replies to other users; automatically aggregate public sentiment on certain topics of discussion; buffer and post tweets automatically. Most of these bots can run on cloud services or infrastructures like Amazon Web Services (AWS) or Heroku, making it more difficult to block them. Finally, a very recent trend is that of providing Bot-As-A-Service (BaaS): companies like RoboLike (https://robolike.com/) provide “Easy-to-use Instagram/Twitter auto bots” that perform certain automatic activities for a monthly price. Advanced conversational bots powered by more sophisticated artificial intelligence are provided by companies like ChatBots.io, which allow anyone to “Add a bot to services like Twitter, Hubot, Facebook, Skype, Twilio, and more” (https://developer.pandorabots.com/).
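To give a concrete sense of how little effort such automation requires, here is a minimal sketch of a search-and-retweet bot of the kind listed above, written against the tweepy library's 3.x-era interface; the credentials and hashtag are placeholders, and the snippet illustrates the genre rather than reproducing any specific bot we inspected.

```python
import tweepy

# Placeholder credentials: a real bot would use keys from a registered Twitter app
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Search for a hashtag and retweet every result: one of the simplest
# capabilities in the list above
for status in tweepy.Cursor(api.search, q="#example", lang="en").items(10):
    try:
        status.retweet()
    except tweepy.TweepError:
        pass  # e.g., already retweeted, or the author's account is protected
```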

Much research has recently been devoted to reverse-engineering social bot strategies from observed activities, to understand whom they target, how they generate content, when they take action, and what topics they talk about (Yang, et al., 2014; Freitas, et al., 2015; Ferrara, et al., 2016; Subrahmanian, et al., 2016; Davis, et al., 2016). Ultimately, this may lead to the identification of their controllers, namely the bot masters.

In this paper, we describe the investigation that led us to unveil the pervasive presence and activity of social bots in the 2016 U.S. Presidential election conversation on social media. We collected Twitter data for an extensive period prior to the election that includes all three Presidential debates. By continuously polling the Twitter Search API for relevant, election-related content using hashtag- and keyword-based queries, we obtained a large dataset of over 20 million tweets generated between 16 September and 21 October 2016 by about 2.8 million distinct users. Advanced machine learning techniques for discovering social bots, developed by our group in the past (Yang, et al., 2014; Ferrara, et al., 2016; Subrahmanian, et al., 2016; Davis, et al., 2016), allowed us to detect the bots that populate the election-related conversation: our estimate is that over 400,000 accounts are likely bots (i.e., nearly 15 percent of the total population under study) and, most importantly, that they are responsible for roughly 3.8 million tweets (nearly 19 percent of the total conversation). We investigated the temporal dynamics of the social media conversation, to study how it reflects shocks from external events (e.g., debates, news releases, etc.), and how endogenous dynamics (e.g., who supports whom, and how) are affected by the observed pervasiveness of social bots. We analyzed the geographical dimension as well, by leveraging Twitter metadata available for a subset of tweets, verifying that bots and humans exhibit very different geographical provenance. We finally investigated what influence social bots have on the structure of the network and on communication dynamics, assessing their degree of embeddedness by means of k-core decomposition analysis.

 

++++++++++

Methodology

Data collection: We manually crafted a list of hashtags and keywords related to the 2016 U.S. Presidential election. The list was compiled to contain a roughly equal number of hashtags/keywords associated with each major Presidential candidate: we selected 23 terms in total, including five specific to Republican Party nominee Donald Trump (#donaldtrump, #trump2016, #neverhillary, #trumppence16, #trump), four for Democratic Party nominee Hillary Clinton (#hillaryclinton, #imwithher, #nevertrump, #hillary), and several related to the debates. To make sure our query list was comprehensive, we also added a few search terms for two third-party candidates, namely Libertarian Party nominee Gary Johnson (one term) and Green Party nominee Jill Stein (two terms). The full list of search terms is reported in Table 1, along with the total number of tweets containing each term (note that a single tweet may contain more than one term, therefore some overlap exists). No significant number of tweets was generated for the third-party candidates, therefore in the following we focus our analysis only on Trump and Clinton.

 

Table 1: List of search terms we continuously monitored via the Twitter Search API during the period between 16 September and 21 October 2016.

 

By querying the Twitter Search API at regular intervals of 10 seconds, continuously and without interruption across three periods between 16 September and 21 October 2016, we collected a large dataset consisting of 20.7 million tweets posted by nearly 2.8 million distinct users. Table 2 reports some aggregate statistics of the dataset. The data collection infrastructure ran inside an Amazon Web Services (AWS) instance to ensure resilience and scalability. We chose the Twitter Search API (https://dev.twitter.com/rest/public/search) to make sure we obtained all tweets containing the search terms of interest posted during the data collection period, rather than a sample of unfiltered tweets: this precaution avoids the issues, reported in the literature, associated with collecting sampled data via the Twitter Stream API (https://dev.twitter.com/streaming/overview) (Morstatter, et al., 2013).
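The collection loop itself is simple. The sketch below illustrates the polling strategy using the tweepy library's 3.x-era search interface; the credentials, output file, and two-term query are placeholders (our actual query covered all 23 search terms).

```python
import json
import time

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

QUERY = "#trump2016 OR #imwithher"  # placeholder: we OR-ed all 23 terms
since_id = None

with open("tweets.jsonl", "a") as out:
    while True:
        # since_id restricts results to tweets newer than the newest one
        # already collected, so repeated polls do not re-fetch duplicates
        results = api.search(q=QUERY, count=100, since_id=since_id)
        for status in results:
            out.write(json.dumps(status._json) + "\n")
        if results:
            since_id = max(status.id for status in results)
        time.sleep(10)  # poll the Search API every 10 seconds
```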

 

Table 2: Descriptive statistics of our dataset.

 

Bot detection: Determining whether a human or a bot controls a social media account has proven a very challenging task (Ferrara, et al., 2016; Subrahmanian, et al., 2016). Our prior efforts produced an openly accessible solution called BotOrNot (Davis, et al., 2016), consisting of both a public Web site (https://truthy.indiana.edu/botornot/) and a Python API (https://github.com/truthy/botornot-python), which allow for making this determination. BotOrNot is a machine-learning framework that extracts and analyzes a set of over one thousand features, spanning content and network structure, temporal activity, user profile data, and sentiment analysis, to produce a score indicating the likelihood that the inspected account is indeed a social bot. Extensive analysis revealed that the two most important classes of features for detecting bots are, perhaps unsurprisingly, the metadata and usage statistics associated with the user accounts. The following indicators provide the strongest signals for separating bots from humans: (i) whether the public Twitter profile uses the default settings or has been customized (customizing the profile requires some human effort, therefore bots are more likely to exhibit the default profile settings); (ii) the absence of geographical metadata (humans often use smartphones and the Twitter iPhone/Android app, which record the physical location of the mobile device as a digital footprint); and, (iii) activity statistics, such as the total number of tweets and the frequency of posting (bots exhibit incessant activity and excessive amounts of tweets), the proportion of retweets over original tweets (bots retweet content much more frequently than they generate new tweets), the proportion of followers over followees (bots usually have fewer followers and more followees), the account creation date (bots are more likely to have recently created accounts), and the randomness of the username (bots are likely to have randomly generated usernames). We point the reader interested in further technical details to our prior work (Ferrara, et al., 2016; Davis, et al., 2016).
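As an illustration (and not a reproduction of the BotOrNot feature extractor), the sketch below computes a few of the signals just listed from the standard metadata fields of a Twitter user object; the function name and the count arguments are our own.

```python
from datetime import datetime, timezone

def heuristic_features(user, n_tweets, n_retweets):
    """Compute a few account-level signals of the kind described above from
    a Twitter user object (a dict) and simple counts over the account's
    recent timeline. Illustrative only: this is not the BotOrNot classifier."""
    # Twitter's created_at format, e.g. "Wed Aug 27 13:08:45 +0000 2008"
    created = datetime.strptime(user["created_at"], "%a %b %d %H:%M:%S %z %Y")
    age_days = max((datetime.now(timezone.utc) - created).days, 1)
    return {
        "default_profile": user.get("default_profile", False),
        "has_location": bool(user.get("location")),
        "tweets_per_day": user["statuses_count"] / age_days,
        "retweet_ratio": n_retweets / max(n_tweets, 1),
        "follower_followee_ratio": user["followers_count"] / max(user["friends_count"], 1),
        "account_age_days": age_days,
    }
```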

BotOrNot has been trained with thousands of instances of social bots, from simple to sophisticated, achieving an accuracy above 95 percent (Davis, et al., 2016). Typically, BotOrNot yields likelihood scores above 50 percent only for accounts that would look suspicious under scrupulous analysis. We adopted the Python BotOrNot API to systematically inspect the most active users in our dataset. The Python BotOrNot API queries the Twitter API to extract the most recent 300 tweets and all the publicly available account metadata, and feeds these features to an ensemble of machine learning classifiers, which produces a bot score.
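In practice, scoring an account takes only a few lines of code. The following sketch follows the usage pattern documented in the botornot-python repository at the time; the credentials and screen name are placeholders, and the key name of the returned score is an assumption that may differ across versions of the library.

```python
import botornot

# Placeholder Twitter application credentials
twitter_app_auth = {
    "consumer_key": "CONSUMER_KEY",
    "consumer_secret": "CONSUMER_SECRET",
    "access_token": "ACCESS_TOKEN",
    "access_token_secret": "ACCESS_TOKEN_SECRET",
}
bon = botornot.BotOrNot(**twitter_app_auth)

# Fetch the account's recent tweets and metadata via the Twitter API,
# score it, and apply our 0.5 labeling threshold
result = bon.check_account("@example_user")  # screen name is a placeholder
if result["score"] > 0.5:  # key name assumed per the README of the time
    print("likely bot")
```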

To label accounts as bots, we use the fifty-percent threshold, which has proven effective in prior studies (Davis, et al., 2016): an account is considered a bot if its bot score is above 0.5. Figure 1 shows the distribution of bot scores yielded by BotOrNot: most of the probability mass lies in the range between 0.2 and 0.5. This suggests that no significant difference in bot classification would occur if we were to increase the threshold used to label accounts as bots. Interestingly, a mild bimodality is visible, with a clear bump in the bot scores around 0.7, suggesting that a significant number of accounts exhibit very clear bot characteristics (Ferrara, et al., 2016; Davis, et al., 2016).

 

Figure 1: Distribution of the probability density of bot scores assigned to the top 50,000 Twitter accounts in our dataset (ranked by activity) by the Python BotOrNot API (https://github.com/truthy/botornot-python).

 

Since the Python BotOrNot API is subject to the query limitations imposed by the Twitter API (https://dev.twitter.com/rest/public/search), it would be impossible to test all 2.78 million accounts. Therefore, we tested the top 50,000 accounts ranked by activity volume. Although these top 50 thousand users account for only roughly two percent of the entire population, it is worth noting that they are responsible for over 12.6 million tweets, about 60 percent of the total conversation. This choice gives us sufficient statistical power to extrapolate the distribution of bots and humans to the entire population without the need to test accounts that are only marginally involved in the conversation.

Out of the top 50 thousand accounts, BotOrNot assigned a bot score greater than the established 0.5 threshold, and therefore a classification as likely bots, to a total of 7,183 users, responsible for 2,330,252 tweets [4]. A total of 40,163 users (responsible for 10.3 million tweets) were labeled as humans. BotOrNot labeled the remaining 2,654 users as unknown/undecided, either because their scores did not significantly diverge from the classification threshold of 0.5, or because the accounts had been suspended or deleted. Even if all of the 2,654 users were bots that Twitter suspended for violating the terms of service, this would imply that roughly 70 percent of the total bot population (the remaining 7,183 accounts) was still active on the platform at the time of our verification.

By extrapolating to the entire population, we estimate the presence of at least 400 thousand bots, accounting for roughly 15 percent of the total Twitter population active in the U.S. Presidential election discussion, and responsible for about 3.8 million tweets, roughly 19 percent of the total volume. These statistics are summarized in Table 3.

 

Table 3: Bot-specific statistics for the top 50 thousand users (ranked by activity) and extrapolation for the entire user population.
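The extrapolation itself is a simple proportional calculation, reproduced below as a back-of-the-envelope check; the only assumption is that the bot ratios observed among the classified, highly active accounts carry over to the less active remainder of the population.

```python
# Decided accounts among the top 50,000 most active users
bots, humans = 7_183, 40_163
bot_tweets, human_tweets = 2_330_252, 10_300_000

account_share = bots / (bots + humans)                  # ~15.2 percent
tweet_share = bot_tweets / (bot_tweets + human_tweets)  # ~18.4 percent

# Extrapolation to the full dataset (2.78M users, 20.7M tweets)
est_bot_accounts = account_share * 2_780_000            # ~422,000 bot accounts
est_bot_tweets = tweet_share * 20_700_000               # ~3.8 million tweets

print(f"{account_share:.1%} of accounts, {tweet_share:.1%} of tweets")
```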

 

Sentiment analysis: To understand how bots and humans discuss the Presidential candidates, we rely upon sentiment analysis. To attach a sentiment score to the tweets in our dataset, we use SentiStrength (Thelwall, et al., 2010). SentiStrength is a sentiment analysis algorithm specifically designed to annotate social media data. This design choice provides some desirable advantages: first, it is optimized to annotate short, informal texts, like tweets, that contain abbreviations, slang, and other unorthodox language features; second, SentiStrength employs additional linguistic rules for negations, amplifications, booster words, emoticons, spelling corrections, etc. Applications of SentiStrength to social media data found it particularly effective at capturing positive and negative emotions, with 60.6 percent and 72.8 percent accuracy, respectively (Thelwall, 2013). We tested it extensively, and also used it in prior studies to validate the effect of sentiment on the diffusion of information in social media (Ferrara and Yang, 2015).

The algorithm assigns to each tweet t a positive polarity score S+(t) and a negative polarity score S-(t), both ranging between 1 (neutral) and 5 (strongly positive/negative). Starting from the polarity scores, we capture the sentiment of each tweet t with a single measure, the sentiment score S(t), defined as the difference between the positive and negative polarity scores: S(t) = S+(t) - S-(t).

The score defined above ranges between -4 and +4. The former indicates an extremely negative tweet, which occurs when S+(t)=1 and S-(t)=5; vice versa, the latter identifies an extremely positive tweet, labeled with S+(t)=5 and S-(t)=1. In the case S+(t)=S-(t), i.e., when the positive and negative polarity scores of a tweet t are the same, the polarity S(t)=0 of tweet t is considered neutral. (Note that the neutral class represents the majority by construction, since it contains all tweets with an equal number of positive and negative words, as well as all tweets with no sentiment-labeled terms.)
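The mapping from polarity scores to the combined score and to the three sentiment classes is straightforward; here is a minimal sketch, with the boundary cases discussed above included as sanity checks.

```python
def sentiment_score(pos, neg):
    """Combine SentiStrength polarity scores into a single score S(t).
    pos and neg are each in [1, 5]; the result is in [-4, +4]."""
    return pos - neg

def sentiment_class(pos, neg):
    s = sentiment_score(pos, neg)
    if s > 0:
        return "positive"
    if s < 0:
        return "negative"
    return "neutral"  # includes tweets with no sentiment-bearing words (1, 1)

assert sentiment_score(5, 1) == 4    # extremely positive tweet
assert sentiment_score(1, 5) == -4   # extremely negative tweet
assert sentiment_class(3, 3) == "neutral"
```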

 

++++++++++

Data analysis

Our analysis investigates three directions, discussed separately in the following: first, we analyze the spatial and temporal dynamics of information production and consumption by humans and bots during our observation period, trying to highlight differences between organic and artificial political support for the two candidates; second, we investigate how bots differ from humans in their activities, in their interactions (with each other and with humans), and in their support for the two candidates; finally, we unveil the degree of embeddedness of the bots in the social network, as a proxy for their influence and visibility.

Spatio-temporal dynamics: Figure 2 visualizes the timeline of the volume of tweets in our dataset across the three periods between 16 September and 21 October 2016 during which we collected data from Twitter. The figure also annotates the four political debates that occurred during this period.

 

Figure 2: Timeline of the volume of tweets generated during our observation periods (grey area = no data). Presidential debates are annotated and largely anticipate spikes in the online discussion.

 

The first week (16 September to 24 September) serves as a baseline, capturing the political discussion that occurred prior to the debate weeks. The baseline period is followed by a one-day break (25 September) prior to the first debate, during which we performed maintenance on our data collection infrastructure. The second observation period spans 26 September through 10 October 2016, and it captures three debates (the first Presidential debate of 26 September, the Vice Presidential debate of 4 October, and the second Presidential debate of 9 October). Our system infrastructure required additional maintenance, and we chose the period between 10 October and 16 October 2016 for this purpose, given the absence of off-line events during that week. We restarted our data collection for the conclusive period between 16 October and 21 October, in time to capture the third and last Presidential debate of 19 October. We decided to conclude our data collection prior to 22 October 2016, when Twitter, along with several other online platforms, was targeted by a large-scale distributed denial-of-service attack and was down for several hours, making the usage of the platform (and thus the data collection) impossible.

The baseline observation period (16 September to 25 September) shows the circadian activity and weekly cycles typical of social media chatter (Golder and Macy, 2011), without particular bursts or spikes related to shocks from external events. Between 5,000 and 10,000 tweets are generated hourly, every day, by users annotated as humans, while roughly 1,000–2,000 are generated by accounts labeled as bots, constituting about 10 percent of the total tweets.

The second, and longest, observation window (26 September to 10 October) exhibits significantly different communication dynamics compared to the baseline: intense spikes of activity, both human- and bot-generated, characterize three days. We observe systematic spikes of activity as a consequence of the first two debates, on 27 September (after the first Presidential debate) and on 4 and 5 October (after the Vice Presidential debate). In contrast to what was reported by another study that analyzed only the second Presidential debate (Kollanyi, et al., 2016), during these three bursts proportionally more tweets were generated by humans than by bots. Although there is an increase in the volume of bot-generated tweets, which peaks at about 10,000 tweets/hour, humans are still responsible for peaks of 60,000–80,000 tweets/hour during these bursts of discussion. What is concerning, however, is the volume of tweets that appears to be consistently and continuously produced by the bots: this extrapolates to a total of roughly 3.8 million tweets across the three observation windows, in other words nearly 19 percent of the total tweets.

There is an intuitive explanation, supported by the data, for the fact that humans contribute more than bots during bursts, or shocks induced by exogenous events: sophisticated bots are designed to systematically and continuously push their agenda, irrespective of the circumstances. Humans, on the other hand, become engaged in online political discussion more readily as a consequence of political events in the off-line world, such as Presidential debates or news releases (Effing, et al., 2011; Bond, et al., 2012; Bakshy, et al., 2015).

We then considered the geographical dimension of the conversation. Sophisticated bots can create credible accounts by faking profile information and other metadata, including the geographical provenance, using techniques like GPS spoofing (Ferrara, et al., 2016; Subrahmanian, et al., 2016). In Figure 3 we plot U.S. maps reporting the volume of tweets generated in each state, for bot (left) and human (right) accounts, respectively. The two maps tell significantly different stories: very strong support from bots is evident in the Midwest and the South of the United States, in particular in Georgia; the picture for the provenance of human-generated tweets is very different, showing that the drivers of the conversation are the most populated states, such as California, Texas, Florida, Illinois, New York, and Massachusetts. This is strongly aligned with prior findings about the geographic distribution of political discussion in the U.S. (Conover, et al., 2013a).
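Maps such as those in Figure 3 can be assembled by aggregating the place metadata that Twitter attaches to the (small) subset of geotagged tweets. Below is a minimal sketch, assuming the JSON-lines file produced by the collector sketched earlier, and relying on the convention that city-level Twitter places carry a full_name of the form “City, ST”.

```python
import json
from collections import Counter

state_counts = Counter()
with open("tweets.jsonl") as f:  # the file written by the collector above
    for line in f:
        tweet = json.loads(line)
        place = tweet.get("place")
        # City-level Twitter places carry a full_name like "Austin, TX"
        if place and place.get("country_code") == "US" and "," in place.get("full_name", ""):
            state_counts[place["full_name"].rsplit(", ", 1)[-1]] += 1

print(state_counts.most_common(10))
```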

 

Figure 3: Geocoded sources for bots (left) and human-generated (right) tweets.

 

Partisanship and supporting activity: We next inferred the partisanship of the users in our dataset. We used the five Trump-supporting hashtags (#donaldtrump, #trump2016, #neverhillary, #trumppence16, #trump) and the four Clinton-supporting hashtags (#hillaryclinton, #imwithher, #nevertrump, #hillary) to attribute partisanship. In detail, we employed a simple heuristic based on hashtag adoption: for each user, we identified the 10 hashtags appearing most frequently in that user's tweets. If the majority of those hashtags supported one particular candidate, we assigned the user to that political faction (Clinton- or Trump-supporter). This is a very strict and conservative partisanship assignment, likely less prone to misclassification than automatic machine-learning techniques not based on manual validation (e.g., Conover, et al., 2011). Our procedure yielded a small, high-confidence, annotated dataset constituted by 7,112 Clinton supporters (590 bots and 6,522 humans) and 17,202 Trump supporters (1,867 bots and 15,335 humans).
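One plausible implementation of this heuristic is sketched below; the majority rule over the 10 most frequent hashtags follows the description above, while the function and variable names are our own.

```python
from collections import Counter

TRUMP_TAGS = {"#donaldtrump", "#trump2016", "#neverhillary", "#trumppence16", "#trump"}
CLINTON_TAGS = {"#hillaryclinton", "#imwithher", "#nevertrump", "#hillary"}

def infer_partisanship(hashtags, top_n=10):
    """hashtags: every (lowercased) hashtag appearing in one user's tweets.
    Assign a faction only when a majority of the user's top-10 hashtags
    supports a single candidate; otherwise leave the user unassigned."""
    top = [tag for tag, _ in Counter(hashtags).most_common(top_n)]
    trump = sum(tag in TRUMP_TAGS for tag in top)
    clinton = sum(tag in CLINTON_TAGS for tag in top)
    if trump > len(top) / 2:
        return "trump"
    if clinton > len(top) / 2:
        return "clinton"
    return None  # conservative: ambiguous users stay unlabeled

# Example: 3 of this user's 4 distinct top hashtags are Trump-supporting
assert infer_partisanship(
    ["#trump2016"] * 4 + ["#neverhillary"] * 3 + ["#trump"] * 2 + ["#debate"]
) == "trump"
```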

Figure 4 and Figure 5 show the complementary cumulative distribution functions (CCDFs) of the interactions, respectively replies and retweets, initiated by bot and human users. Each plot disaggregates the interactions into three categories: (i) within group (for example, bot-bot or human-human); (ii) across groups (e.g., bot-human or human-bot); and, (iii) total (i.e., bot-all and human-all). Both figures exhibit the broad distributions typical of social media activity. What interestingly emerges from contrasting the two figures is that humans engage in reply interactions significantly more (one order of magnitude difference) with other humans than with bots (see the right panel of Figure 4). Conversely, bots fail to substantially engage humans and end up interacting via replies with other bots significantly more than with humans. Given that bots are by design intended to engage in interactions with humans, this observation goes against what we would have intuitively expected; similar paradoxes have been highlighted in our prior work (Ferrara, et al., 2016). One intuitive explanation of this phenomenon is that insufficiently sophisticated bots cannot produce questions engaging enough to foster meaningful discussions with humans. Figure 5, however, demonstrates that rebroadcasting is a much more effective channel of information spreading: there is no significant difference in the amount of retweets that humans generate by rebroadcasting content produced by other humans or by bots. In fact, humans and bots retweet each other at substantially the same rate. This suggests that bots are very effective at spreading information through the human population, which could have nefarious consequences in cases where humans fail to verify the correctness and accuracy of such information and its sources.
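For reference, CCDFs like those in Figures 4 and 5 can be computed directly from per-user interaction counts; the sketch below uses hypothetical toy counts in place of our data.

```python
import numpy as np

def ccdf(values):
    """Empirical complementary CDF: returns the sorted values x and, for
    each, the fraction of observations greater than or equal to x."""
    x = np.sort(np.asarray(values))
    p = 1.0 - np.arange(len(x)) / len(x)  # P(X >= x_i) for the sorted sample
    return x, p

# Toy per-user counts standing in for, e.g., retweets received from humans
x, p = ccdf([1, 1, 2, 3, 5, 8, 13, 40, 120])
# On a log-log plot, a heavy tail appears as a slowly decaying curve
```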

 

Figure 4: Complementary cumulative distribution function (CCDF) of replies interactions generated by bots (left) and humans (right).

 

 

Figure 5: Complementary cumulative distribution function (CCDF) of retweets interactions generated by bots (left) and humans (right).

 

To further understand how social media users (both bots and humans) talk about the two Presidential candidates, we explore the sentiment that their tweets convey. For this purpose, we rely upon sentiment analysis, and in particular on SentiStrength (as explained earlier in the Methodology section). Figure 6 shows four panels: the top two panels illustrate the sentiment of the tweets produced by the bots, while the bottom two panels show the same information for tweets generated by humans. Furthermore, the two left panels show the support for Hillary Clinton (by bots and humans, respectively), whereas the two right panels show the support for Donald Trump (by bots and humans, respectively). The main histograms in each panel show the volume of tweets about Clinton or Trump, separately, whereas the insets show the difference between the two, to illustrate the disproportionate support for the candidate of one's own faction, as opposed to the other candidate.

 

Figure 6: Distributions of the sentiment of bots (top) and humans (bottom) supporting the two Presidential candidates.

 

What appears evident from contrasting the left and right panels is that, on average, the tweets produced by Trump's supporters are significantly more positive than those of Clinton's supporters, regardless of whether the source is a human or a bot. If we focus on Trump's bot supporters, we note that they generate almost no negative tweets; indeed, they produce the most positive set of tweets in the entire dataset. A very significant fraction of these non-negative bot-generated tweets (about 200,000, or nearly two-thirds of the total) are in support of Donald Trump. This generates a stream of support that is at staggering odds with the overall negative tone that characterized the 2016 Presidential election campaigns. The fact that bots produce systematically more positive content in support of a candidate can bias the perception of the individuals exposed to it, suggesting that there exists organic, grassroots support for a given candidate, while in reality it is artificially generated.

Some interesting insights also emerge from the analysis of Clinton's supporters: on average, human-generated tweets show slightly more positive sentiment toward the candidate than bot-generated ones. Overall, a more natural distribution of tweet sentiment emerges from the two groups of bot and human supporters, with a roughly equal number of positive and negative tweets present in the pro-Clinton discussion.

To further understand these dynamics, we manually analyzed two hashtags, namely #NeverTrump and #NeverHillary, as emblematic examples of campaigns explicitly devoted to targeting the candidate of the opposing political leaning. The hashtag #NeverTrump, used by supporters of the Democratic candidate Hillary Clinton, accrued 105,906 positive tweets and 118,661 negative ones, roughly an equal split; on the other hand, the hashtag #NeverHillary, pushed by Trump's supporters, generated significantly more negative tweets (204,418) than positive ones (171,877). Tables 4 to 7 show various examples of tweets generated by bots, along with the candidate they support (detected with our method), illustrating the ability of our framework to study the phenomena at hand.

 

Table 4: Examples of tweets talking about Trump posted by Trump-supporting bots.
Bots — Trump supporters — Talking about Trump.
@pexykuzuregi: RT @CrowdFundGurus: Check out "Donald Trump Your President" #Trump2016 #TrumpTrain by Rick Poppe — https://t.co/mW0YLUk6aZ
@cj_panirman: RT @realDonaldTrump: Time to #DrainTheSwamp in Washington, D.C. and VOTE #TrumpPence16 on 11/8/2016. Together, we will MAKE AMERICA SAFE ...
@suohuu: RT @LindaSuhler: Gov Mike Pence Rally TUESDAY #Virginia
Williamsburg, VA 7:30 PM ET #TrumpPence16 #MAGA #Jobs #AmericaFirst Reg: https:/...
@Marycar08639249: RT @AlwaysActions: Powerful response to obama's "insult" speech by Donald Trump supporting @USArmy @AdBell45 #VoteTrump2016 #Trump2016 htt ...

 

 

Table 5: Examples of tweets talking about Clinton posted by Trump-supporting bots.
Bots — Trump supporters — Talking about Clinton.
@dreamedofdust: #NeverHillary ! https://t.co/pmuRci7RhL Without A Doubt That FBI Director James Comey Covered Up Hillary Clinton’s Lies, Gave Immunity To ...
@PatDollard: RT @DesertRiver: Oh Goody... #Hillary wants open borders & to immediately bring in 650,000 more muslims, none of which are vetted https://t ...
@pavegecko01: #Hillary can't walk down the stairs by herself https://t.co/8DOZSwaHNm
@WareButch: RT @IDontMissdotcom: Hacked Docs From Clinton Foundation Show Dems Used Tax Dollars for Political Campaigns — https://t.co/1MIA2ZInFB #tcot

 

 

Table 6: Examples of tweets talking about Clinton posted by Clinton-supporting bots.
Bots — Clinton supporters — Talking about Clinton.
@diaz_mldiaz9: RT @peterdaou: We've reached a point in 2016 where rampant gender bias and double standards against #Hillary are totally suppressed as a le ...
@u_edilberto: RT @WeNeedHillary: Polls Are All Over the Place. Keep Calm & Hillary On! https://t.co/XwBFfLjz7x #p2 #ctl #ImWithHer #TNTweeters https://t ...

 

 

Table 7: Examples of tweets talking about Trump posted by Clinton-supporting bots.
Bots — Clinton supporters — Talking about Trump.
@natespuewell: #NeverTrump Those fake, nonsense polls are actually real, good polls, Trump's spokesman insists — Campaign of lies https://t.co/Mvja0PPeaH
@routeofthesun: RT @hermanbutler1: FactChecking The #VPDebate https://t.co/pQDyBpuwCt #Gop #TNTweeters #USLatino #LibCrib #NeverTrump #ImWithHer #StongerTo ...
@CTO1ChipNagel: RT @mmpadellan: Just wanted 2 share a look at the GOP Derangement Syndrome up close. They actually *think* #Trump was *in control* Shhh...d ...

 

A final consideration emerges when contrasting the pro-Clinton and pro-Trump factions: the former focuses much more on its own candidate, with a significant number of tweets referring to Clinton. Conversely, pro-Trump supporters (humans and bots alike) devote a significant number of tweets to their opponent: in fact, the majority of negative tweets generated by both humans and bots address Hillary Clinton. This is strikingly different from the Clinton supporters, whose negative tweets in large majority address the candidate herself, rather than her opponent.

Bot embeddedness: Our final analysis explores the degree of embeddedness of the bots in the social network. To do so, we adopt the k-core decomposition technique, which identifies cores (subgroups) of nodes that all have degree at least k within the subgroup, for a parameter k. For example, a 50-core is a subset of nodes in the network, each connected to at least 50 others in the subset. The intuition is that nodes in cores associated with larger k are more deeply embedded in the network, and therefore sit in more central, or influential, positions. Since we are interested in information diffusion in particular, we created a directed network from the retweets that users exchange with one another: if user u retweets user v, we draw a directed link from u to v. Therefore, users with very large in-degree correspond to those who get retweeted a lot. Starting from this network, we extracted the k-cores for values of k ranging between 10 and 100. Figure 7 (right panel) shows the number of users as a function of the k-core. For each k-core, we then calculated the proportion of users that are human, bot, or unknown. The left panel of Figure 7 shows the results of this analysis: as k grows, the fraction of bots steadily increases, as does that of humans, whereas the proportion of unknown accounts drastically decreases. The growth of the two labeled classes follows the intuition that, as k becomes larger, accounts are more active in the conversation, and therefore BotOrNot has more information to classify them. What is interesting, however, is that the fraction of bots that are increasingly better connected and more deeply embedded in the social network grows fourfold, from roughly three percent to above 12 percent. This insight suggests that bots become more and more central in the rebroadcasting network, and that a significant fraction of accounts in high k-cores are indeed social bots.
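This analysis can be reproduced in a few lines with the networkx library; the retweet pairs and labels below are hypothetical toy data standing in for our dataset.

```python
import networkx as nx

# Hypothetical toy data standing in for our dataset:
# (retweeter, retweeted) pairs and a bot/human/unknown label per user
retweets = [("u1", "v1"), ("u2", "v1"), ("u1", "v2"), ("v1", "v2")]
labels = {"u1": "human", "u2": "bot", "v1": "human", "v2": "unknown"}

# If user u retweets user v, draw a directed edge u -> v, so heavily
# retweeted users accumulate a large in-degree
G = nx.DiGraph()
G.add_edges_from(retweets)

for k in range(10, 101, 10):
    core = nx.k_core(G, k)  # on directed graphs, uses in- plus out-degree
    if len(core) == 0:
        continue
    bot_fraction = sum(labels.get(n) == "bot" for n in core) / len(core)
    print(k, len(core), bot_fraction)
```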

 

Figure 7: Fraction of users (left) and total number of users (right) as a function of the k-core.

 

 

++++++++++

Conclusions

The diffusion of information and the mechanisms of democratic discussion have radically changed since the advent of online social media. Platforms like Twitter have been extensively praised for their contribution to the democratization of discussions about policy, politics, and social issues. However, many studies have also highlighted the perils associated with the abuse of these platforms. The manipulation of information and the spreading of misinformation and unverified information are among those risks.

In this work, we investigated the role and effects of social bots, automated accounts that are mostly used to manipulate online conversations. In particular, we showed that bots are pervasively present and active in the online political discussion about the 2016 U.S. Presidential election. We collected tweets related to the election posted during the period between 16 September and 21 October 2016, using the Twitter Search API and a manually compiled list of keywords and hashtags. This procedure yielded over 20 million tweets generated by nearly 2.8 million distinct users. By adopting state-of-the-art detection techniques developed by our group in the past, we estimated that about 400,000 bots are engaged in the political discussion about the Presidential election, responsible for roughly 3.8 million tweets, about one-fifth of the entire conversation.

The presence of social bots in online political discussion can create three tangible issues: first, influence can be redistributed across suspicious accounts that may be operated with malicious purposes; second, the political conversation can become further polarized; third, the spreading of misinformation and unverified information can be enhanced. Various studies in policy and political science are currently investigating the consequences of such phenomena (Woolley and Howard, 2016; Shorey and Howard, 2016; Maréchal, 2016). We plan to explore in particular the issue of factual information and misinformation spreading in the context of political and social issues.

Furthermore, the observation period of our study is rather short, encompassing just about one month of activity. It would be very interesting to study how the behavior of bots evolves over time to adapt to humans' increasing ability to recognize them. We highlighted how bots are already failing to engage humans via replies; we plan to study the ability of humans to recognize social media bots in future work.

In conclusion, it is important to stress that, although our analysis unveiled the current state of the political debate and the agenda pushed by the bots, it is impossible to determine who operates such bots. State and non-state actors, local and foreign governments, political parties, private organizations, and even single individuals with adequate resources (Kollanyi, 2016) could obtain the operational capabilities and technical tools to deploy armies of social bots and affect the direction of online political conversation. Therefore, future efforts will be required of the machine learning research community to develop more sophisticated detection techniques capable of unmasking the puppet masters. End of article

 

About the authors

Alessandro Bessi is a Visiting Assistant Researcher at the University of Southern California Information Sciences Institute, and a Ph.D. candidate in Economics and Social Sciences at IUSS Institute for Advanced Study in Pavia, Italy. His research interests include statistical modeling and causal inference in online social systems, in particular the effects of misinformation in influencing public opinion and agenda setting.
E-mail: bessi [at] isi [dot] edu

Emilio Ferrara is a Research Assistant Professor at the University of Southern California and a Research Leader at the USC Information Sciences Institute. His research interests include characterizing information diffusion and campaigns in online social networks, and detecting and predicting abuse in such environments. He was named a 2015 IBM Watson Big Data Influencer, is a recipient of the 2016 Complex Systems Society Junior Scientific Award, and received the 2016 DARPA Young Faculty Award.
E-mail: emiliofe [at] usc [dot] edu

 

Contributions

EF and AB conceived the study and analyzed the data. EF wrote and revised the manuscript.

 

Acknowledgements

We are grateful to Simrat Singh Chhabra (USC) for his help with the Twitter data collection and account partisanship assignment. This work has not been supported by any funding agency, private organization, or political party.

 

Notes

1. http://sts10.github.io/blog/2014/12/23/guide-create-markov-twitter-bot/.

2. http://readwrite.com/2014/06/20/random-non-sequitur-twitter-bot-instructions/.

3. http://www.pygaze.org/2016/03/how-to-code-twitter-bot/.

4. It is worth noting that earlier versions of the BotOrNot API used to classify organizational accounts as likely bots. This happened mostly because of the large weight associated with the volume of tweets posted by a user: since several people use organizational accounts at the same time, these accounts usually exceed regular users’ tweet volumes. This issue has been addressed in the latest version of BotOrNot, the one adopted in this study. As for verification, we manually checked the list of the few hundred accounts with the highest bot scores, and we did not identify any recognizable organization, such as news agencies, political party accounts, etc.

 

References

L.A. Adamic and N. Glance, 2005. “The political blogosphere and the 2004 US election: Divided they blog,” LinkKDD ’05: Proceedings of the Third International Workshop on Link Discovery, pp. 36–43.
doi: http://dx.doi.org/10.1145/1134271.1134277, accessed 1 November 2016.

S. Aral and D. Walker, 2012. “Identifying influential and susceptible members of social networks,” Science, volume 337, number 6092 (20 July), pp. 337–341.
doi: http://dx.doi.org/10.1126/science.1215842, accessed 1 November 2016.

E. Bakshy, S. Messing, and L.A. Adamic, 2015. “Exposure to ideologically diverse news and opinion on Facebook,” Science, volume 348, number 6239 (5 June), pp. 1,130–1,132.
doi: http://dx.doi.org/10.1126/science.aaa1160, accessed 1 November 2016.

M. Bastos, R. Recuero, and G. Zago, 2014. “Taking tweets to the streets: A spatial analysis of the Vinegar Protests in Brazil,” First Monday, volume 19, number 3, at http://firstmonday.org/article/view/5227/3843, accessed 1 November 2016.
doi: http://dx.doi.org/10.5210/fm.v19i3.5227, accessed 1 November 2016.

M.A. Bekafigo and A. McBride, 2013. “Who tweets about politics? Political participation of Twitter users during the 2011 gubernatorial elections,” Social Science Computer Review, volume 31, number 5, pp. 625–643.
doi: http://dx.doi.org/10.1177/0894439313490405, accessed 1 November 2016.

R.M. Bond, C.J. Fariss, J.J. Jones, A.D. Kramer, C. Marlow, J.E. Settle, and J.H. Fowler, 2012. “A 61-million-person experiment in social influence and political mobilization,” Nature, volume 489, number 7415 (13 September), pp. 295–298.
doi: http://dx.doi.org/10.1038/nature11421, accessed 1 November 2016.

A. Bruns and J. Burgess, 2011. “#ausvotes: How Twitter covered the 2010 Australian federal election,” Communication, Politics & Culture, volume 44, number 2, pp. 37–56.

J. Burgess and A. Bruns, 2012. “(Not) the Twitter election: The dynamics of the #ausvotes conversation in relation to the Australian media ecology,” Journalism Practice, volume 6, number 3, pp. 384–402.
doi: http://dx.doi.org/10.1080/17512786.2012.663610, accessed 1 November 2016.

J.E. Carlisle and R.C. Patton, 2013. “Is social media changing how we understand political engagement? An analysis of Facebook and the 2008 Presidential election,” Political Research Quarterly, volume 66, number 4, pp. 883–895.
doi: http://dx.doi.org/10.1177/1065912913482758, accessed 1 November 2016.

M.D. Conover, C. Davis, E. Ferrara, K. McKelvey, F. Menczer, and A. Flammini, 2013a. “The geospatial characteristics of a social movement communication network,” PloS ONE, volume 8, number 3 (6 March), e55957.
doi: http://dx.doi.org/10.1371/journal.pone.0055957, accessed 1 November 2016.

M.D. Conover, E. Ferrara, F. Menczer, and A. Flammini, 2013b. “The digital evolution of Occupy Wall Street,” PloS ONE, volume 8, number 5 (29 May), e64679.
doi: http://dx.doi.org/10.1371/journal.pone.0064679, accessed 1 November 2016.

M.D. Conover, B. Gonçalves, J. Ratkiewicz, A. Flammini, and F. Menczer, 2011. “Predicting the political alignment of Twitter users,” 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust (PASSAT) and 2011 IEEE Third International Conference on Social Computing (SocialCom).
doi: http://dx.doi.org/10.1109/PASSAT/SocialCom.2011.34, accessed 1 November 2016.

C.A. Davis, O. Varol, E. Ferrara, A. Flammini, and F. Menczer, 2016. “BotOrNot: A system to evaluate social bots,” Developers Day Workshop at World Wide Web Conference (Montreal); version at https://arxiv.org/abs/1602.00975, accessed 1 November 2016.

N.A. Diakopoulos and D.A. Shamma, 2010. “Characterizing debate performance via aggregated Twitter sentiment,” CHI ’10: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1,195–1,198.
doi: http://dx.doi.org/10.1145/1753326.1753504, accessed 1 November 2016.

J. DiGrazia, K. McKelvey, J. Bollen, and F. Rojas, 2013. “More tweets, more votes: Social media as a quantitative indicator of political behavior,” PloS ONE, volume 8, number 11 (27 November), e79449.
doi: http://dx.doi.org/10.1371/journal.pone.0079449, accessed 1 November 2016.

R. Effing, J. van Hillegersberg, and T. Huibers, 2011. “Social media and political participation: Are Facebook, Twitter and YouTube democratizing our political systems?” In: E. Tambouris, A. Macintosh, and H. de Bruijn (editors). Electronic participation: Third IFIP WG 8.5 international conference, ePart 2011, Delft, The Netherlands, August 29 — September 1, 2011. Proceedings. Berlin: Springer-Verlag, pp. 25–35.
doi: http://dx.doi.org/10.1007/978-3-642-23333-3_3, accessed 1 November 2016.

S. El-Khalili, 2013. “Social media as a government propaganda tool in post-revolutionary Egypt,” First Monday, volume 18, number 3, at http://firstmonday.org/article/view/4620/3423, accessed 1 November 2016.
doi: http://dx.doi.org/10.5210/fm.v18i3.4620, accessed 1 November 2016.

G.S. Enli and E. Skogerbø, 2013. “Personalized campaigns in party-centred politics: Twitter and Facebook as arenas for political communication,” Information, Communication & Society, volume 16, number 5, pp. 757–774.
doi: http://dx.doi.org/10.1080/1369118X.2013.782330, accessed 1 November 2016.

E. Ferrara, O. Varol, C. Davis, F. Menczer, and A. Flammini, 2016. “The rise of social bots,” Communications of the ACM, volume 59, number 7, pp. 96–104.
doi: http://dx.doi.org/10.1145/2818717, accessed 1 November 2016.

E. Ferrara, 2015. “Manipulation and abuse on social media,” ACM SIGWEB Newsletter, Spring issue, article number 4.
doi: http://dx.doi.org/10.1145/2749279.2749283, accessed 1 November 2016.

E. Ferrara and Z. Yang, 2015. “Quantifying the effect of sentiment on information diffusion in social media,” PeerJ Computer Science, volume 1, e26, at https://peerj.com/articles/cs-26/, accessed 1 November 2016.

C. Freitas, F. Benevenuto, S. Ghosh, and A. Veloso, 2015. “Reverse engineering socialbot infiltration strategies in Twitter,” ASONAM ’15: Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015, pp. 25–32.
doi: http://dx.doi.org/10.1145/2808797.2809292, accessed 1 November 2016.

R.K. Gibson and I. McAllister, 2006. “Does cyber–campaigning win votes? Online communication in the 2004 Australian election,” Journal of Elections, Public Opinion and Parties, volume 16, number 3, pp. 243–263.
doi: http://dx.doi.org/10.1080/13689880600950527, accessed 1 November 2016.

S.A. Golder and M.W. Macy, 2011. “Diurnal and seasonal mood vary with work, sleep, and daylength across diverse cultures,” Science, volume 333, number 6051 (30 September), pp. 1,878–1,881.
doi: http://dx.doi.org/10.1126/science.1202775, accessed 1 November 2016.

S. González-Bailón, J. Borge-Holthoefer, and Y. Moreno, 2013. Broadcasters and hidden influentials in online protest diffusion, American Behavioral Scientist, volume 57, number 7, pp. 943–965.
doi: http://dx.doi.org/10.1177/0002764213479371, accessed 1 November 2016.

S. González-Bailón, J. Borge-Holthoefer, A. Rivero, and Y. Moreno, 2011. “The dynamics of protest recruitment through an online network,” Scientific Reports, volume 1, article number 197, at http://www.nature.com/articles/srep00197, accessed 1 November 2016.
doi: http://dx.doi.org/10.1038/srep00197, accessed 1 November 2016.

P.N. Howard, 2006. New media campaigns and the managed citizen. New York: Cambridge University Press.

P.N. Howard, A. Duffy, D. Freelon, M.M. Hussain, W. Mari, and M. Maziad, 2011. “Opening closed regimes: What was the role of social media during the Arab Spring?” Project on Information Technology and Political Islam, Data Memo 2011.1; version at https://www.library.cornell.edu/colldev/mideast/Role%20of%20Social%20Media%20During%20the%20Arab%20Spring.pdf, accessed 1 November 2016.

T. Hwang, I. Pearce, and M. Nanis, 2012. “Socialbots: Voices from the fronts,” Interactions, volume 19, number 2, pp. 38–45.
doi: http://dx.doi.org/10.1145/2090150.2090161, accessed 1 November 2016.

B. Kollanyi, 2016. “Where do bots come from? An analysis of bot codes shared on GitHub,” International Journal of Communication, volume 10, pp. 4,932–4,951, and at http://ijoc.org/index.php/ijoc/article/view/6136, accessed 1 November 2016.

B. Kollanyi, P.N. Howard, and S.C. Woolley, 2016. “Bots and automation over Twitter during the first U.S. Presidential debate,” COMPROP Data Memo 2016.1 (14 October), at http://politicalbots.org/wp-content/uploads/2016/10/Data-Memo-First-Presidential-Debate.pdf, accessed 1 November 2016.

B.D. Loader and D. Mercea, 2011. “Networking democracy? Social media innovations and participatory politics,” Information, Communication & Society, volume 14, number 6, pp. 757–769.
doi: http://dx.doi.org/10.1080/1369118X.2011.592648, accessed 1 November 2016.

N. Maréchal, 2016. “When bots tweet: Toward a normative framework for bots on social networking sites,” International Journal of Communication, volume 10, at http://ijoc.org/index.php/ijoc/article/view/6180, accessed 1 November 2016.

J. Messias, L. Schmidt, R. Oliveira, and F. Benevenuto, 2013. “You followed my bot! Transforming robots into influential users in Twitter,” First Monday, volume 18, number 7, at http://firstmonday.org/article/view/4217/3700, accessed 1 November 2016.
doi: http://dx.doi.org/10.5210/fm.v18i7.4217, accessed 1 November 2016.

P.T. Metaxas and E. Mustafaraj, 2012. “Social media and the elections,” Science, volume 338, number 6106 (26 October), pp. 472–473.
doi: http://dx.doi.org/10.1126/science.1230456, accessed 1 November 2016.

F. Morstatter, J. Pfeffer, H. Liu, and K.M. Carley, 2013. “Is the sample good enough? Comparing data from Twitter’s Streaming API with Twitter’s Firehose,” Proceedings of the Seventh International AAAI Conference on Weblogs and Social Media, at https://www.aaai.org/ocs/index.php/ICWSM/ICWSM13/paper/view/6071/6379, accessed 1 November 2016.

J. Ratkiewicz, M.D. Conover, M. Meiss, B. Gonçalves, A. Flammini, and F. Menczer, 2011a. “Detecting and tracking political abuse in social media,” Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, at https://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/view/2850, accessed 1 November 2016.

J. Ratkiewicz, M. Conover, M. Meiss, B. Gonçalves, S. Patil, A. Flammini, and F. Menczer, 2011b. “Truthy: Mapping the spread of astroturf in microblog streams,” WWW ’11: Proceedings of the 20th International Conference Companion on World Wide Web, pp. 249–252.
doi: http://dx.doi.org/10.1145/1963192.1963301, accessed 1 November 2016.

C. Shirky, 2011. “The political power of social media: Technology, the public sphere, and political change,” Foreign Affairs, volume 90, number 1, pp. 28–41, at https://www.foreignaffairs.com/articles/2010-12-20/political-power-social-media, accessed 1 November 2016.

S. Shorey and P.N. Howard, 2016. “Automation, big data and politics: A research review,” International Journal of Communication, volume 10, at http://ijoc.org/index.php/ijoc/article/view/6233, accessed 1 November 2016.

V.S. Subrahmanian, A. Azaria, S. Durst, V. Kagan, A. Galstyan, K. Lerman, L. Zhu, E. Ferrara, A. Flammini, and F. Menczer, 2016. “The DARPA Twitter bot challenge,” Computer, volume 49, number 6, pp. 38–46.
doi: http://dx.doi.org/10.1109/MC.2016.183, accessed 1 November 2016.

M. Thelwall, 2013. “Heart and soul: Sentiment strength detection in the social Web with SentiStrength,” at http://sentistrength.wlv.ac.uk/documentation/SentiStrengthChapter.pdf, accessed 1 November 2016.

M. Thelwall, K. Buckley, G. Paltoglou, D. Cai, and A. Kappas, 2010. “Sentiment strength detection in short informal text,” Journal of the American Society for Information Science and Technology, volume 61, number 12, pp. 2,544–2,558.
doi: http://dx.doi.org/10.1002/asi.21416, accessed 1 November 2016.

Z. Tufekci, 2014. “Engineering the public: Big data, surveillance and computational politics,” First Monday, volume 19, number 7, at http://firstmonday.org/article/view/4901/4097, accessed 1 November 2016.
doi: http://dx.doi.org/10.5210/fm.v19i7.4901, accessed 1 November 2016.

Z. Tufekci and C. Wilson, 2012. “Social media and the decision to participate in political protest: Observations from Tahrir Square,” Journal of Communication, volume 62, number 2, pp. 363–379.
doi: http://dx.doi.org/10.1111/j.1460-2466.2012.01629.x, accessed 1 November 2016.

O. Varol, E. Ferrara, C.L. Ogan, F. Menczer, and A. Flammini, 2014. “Evolution of online user behavior during a social upheaval,” WebSci ’14: Proceedings of the 2014 ACM Conference on Web Science, pp. 81–90.
doi: http://dx.doi.org/10.1145/2615569.2615699, accessed 1 November 2016.

Y. Wang, Y. Li, and J. Luo, 2016. “Deciphering the 2016 US Presidential campaign in the Twitter sphere: A comparison of the Trumpists and Clintonists,” arXiv (9 March), at https://arxiv.org/abs/1603.03097, accessed 1 November 2016.

S.C. Woolley and P.N. Howard, 2016. “Political communication, computational propaganda, and autonomous agents — Introduction,” International Journal of Communication, volume 10, at http://ijoc.org/index.php/ijoc/article/view/6298, accessed 1 November 2016.

X. Yang, B.–C. Chen, M. Maity, and E. Ferrara, 2016. “Social politics: Agenda setting and political communication on social media,” arXiv (22 July), at https://arxiv.org/abs/1607.06819, accessed 1 November 2016.

Z. Yang, C. Wilson, X. Wang, T. Gao, B.Y. Zhao, and Y. Dai, 2014. “Uncovering social network sybils in the wild,” IMC ’11: Proceedings of the 2011 ACM SIGCOMM Conference on Internet Measurement Conference, pp. 259–268.
doi: http://dx.doi.org/10.1145/2068816.2068841, accessed 1 November 2016.

 


Editorial history

Received 24 October 2016; revised 31 October 2016; accepted 2 November 2016.


Copyright © 2016, Alessandro Bessi and Emilio Ferrara. All Rights Reserved.

Social bots distort the 2016 U.S. Presidential election online discussion
by Alessandro Bessi and Emilio Ferrara.
First Monday, Volume 21, Number 11 - 7 November 2016
https://firstmonday.org/ojs/index.php/fm/article/download/7090/5653
doi: http://dx.doi.org/10.5210/fm.v21i11.7090