Over the last several years political actors worldwide have begun harnessing the digital power of social bots — software programs designed to mimic human social media users on platforms like Facebook, Twitter, and Reddit. Increasingly, politicians, militaries, and government-contracted firms use these automated actors in online attempts to manipulate public opinion and disrupt organizational communication. Politicized social bots — here ‘political bots’ — are used to massively boost politicians’ follower levels on social media sites in attempts to generate false impressions of popularity. They are programmed to actively and automatically flood news streams with spam during political crises, elections, and conflicts in order to interrupt the efforts of activists and political dissidents who publicize and organize online. They are used by regimes to send out sophisticated computational propaganda. This paper conducts a content analysis of available media articles on political bots in order to build an event dataset of global political bot deployment that codes for usage, capability, and history. This information is then analyzed, generating a global outline of this phenomenon. This outline seeks to explain the variety of political bot-oriented strategies and presents details crucial to building understandings of these automated software actors in the humanities and the social and computer sciences.

Contents
Introduction: Bots, social media, and politics
Literature review
Research questions
Methodology
Findings and analysis
Conclusion
Introduction: Bots, social media, and politics
In August 2014, Twitter filed a U.S. Securities and Exchange Commission report revealing that over 23 million active user accounts on the company’s social networking site were actually social bots — a particular type of automated software agent written to gather information, make decisions, and both interact with and imitate real users online (Twitter, Inc., 2014). Security experts believe that bots generate more than 55 percent of all traffic online (Zeifman, 2014). The ubiquity of these programs on social platforms, and throughout the Web at large, is of pressing concern to academic and civil communities interested in understanding how digital automation affects particular aspects of culture, society, and — most central to this study — politics.
Social bots are distinct from more general Web bot software. The average bot is used for information gathering. These ‘spiders’ and ‘scrapers’ dominate many mundane facets of the Internet. They aid in the generation of personalized online news preferences and advertisements. They facilitate the organization of search engines and help maintain Web pages. This variety of bot does not engage in discourse with human users. These bots can, however, be used for political purposes. Governments, corporations, and other actors use monitoring bots in intelligence gathering, social listening, and scanning for copyright infringement (Desouza, 2001; Barford and Yegneswaran, 2007; Stinson and Mitchell, 2007). The key feature of this variety of bot is not where it lives, i.e., on a particular platform, but what it does, i.e., gather and sort information.
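To make the distinction concrete, the following is a minimal sketch, in Python, of the information-gathering variety of bot described above: a ‘spider’ that fetches a page and harvests its links, with no conversational behavior at all. The seed URL and the standard-library-only design are illustrative assumptions, not a description of any system discussed in this paper.

```python
# A minimal 'spider' sketch: fetch one page and harvest its links.
# Illustrative only; real crawlers add politeness (robots.txt, rate limits).
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkHarvester(HTMLParser):
    """Collects href attributes from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def gather(url):
    # Fetch the page and return every link found on it.
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = LinkHarvester()
    parser.feed(html)
    return parser.links

if __name__ == "__main__":
    for link in gather("https://example.com"):  # hypothetical seed URL
        print(link)
```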
Social bots, by contrast, communicate directly with human users on social media platforms and elsewhere: in the comment sections of online news sites, on forums, and so on. Early chatbots, computer programs able to engage in conversation with human users, were a rudimentary incarnation of this technology. Both spambots — bots used to spread marketing information on a variety of online communication platforms — and joke or commentary bots — social media bots used to send out jokes or comment on social issues — are newer, and sometimes better designed, examples of social bots.
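A social bot, in other words, is built to converse. The following is a minimal, ELIZA-style sketch of the early chatbot idea mentioned above; the pattern-and-reply rules are invented for illustration.

```python
import re

# Rudimentary rule-based chatbot in the spirit of early programs such as
# ELIZA. Each rule pairs a regular expression with a canned reply template.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\b(hello|hi)\b", re.I), "Hello. What is on your mind?"),
]

def reply(message):
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback keeps the conversation going

if __name__ == "__main__":
    print(reply("I feel ignored online"))  # -> Why do you feel ignored online?
```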
The ways in which this variety of automated social software is being deployed, and the groups behind its deployment, are changing. Computer science researchers have found that social bots can be used beyond simple human-bot interaction and towards large-scale mining of users’ data and actual manipulation of public opinion on sites like Facebook and Twitter (Boshmaf, et al., 2011; Hwang, et al., 2012).
Until roughly six years ago, technologically adept marketers used social bots to send blatant spam in the form of automatically proliferated social media advertising content (Chu, et al., 2010). A growing collection of recent research reveals, however, that political actors worldwide are beginning to make use of these automated software programs in subtle attempts to manipulate relationships and opinion online (Boshmaf, et al., 2011; Ratkiewicz, et al., 2011a; 2011b; Metaxas and Mustafaraj, 2012; Alexander, 2015; Abokhodair, et al., 2015). Politicians now emulate the popular Twitter tactic of purchasing masses of bot followers to significantly boost their follower numbers (Chu, et al., 2012). Militaries, state-contracted firms, and elected officials use political bots to spread propaganda and flood newsfeeds with political spam (Cook, et al., 2014; Forelle, et al., 2015).
Political bots are among the latest and most distinctive technological advances situated at the intersection of politics and digital strategy. Numerous news outlets worldwide have covered government and military bot deployments, paying special attention to the rapid rise in usage of such software. Journalists, bloggers, and citizen reporters have worked to explain how governments and those vying for power have used the software in specific contexts. According to media reports, political bots have been deployed in several countries: Argentina (Rueda, 2012), Australia (Peel, 2013), Azerbaijan (Pearce, 2013), Bahrain (York, 2011), China (Krebs, 2011), Iran (York, 2011), Italy (Vogt, 2012), Mexico (Orcutt, 2012), Morocco (York, 2011), Russia (Krebs, 2011), South Korea (Sang-Hun, 2013), Saudi Arabia (Freedom House, 2013), Turkey (Poyrazlar, 2014), the United Kingdom (Downes, 2012), the United States (Coldewey, 2012), and Venezuela (Howard, 2014) among them. The New York Times (Urbina, 2013) and the New Yorker (Dubbin, 2013) have published comprehensive articles about the rise of social bot technology, giving mainstream exposure to this important new political tool.
Many computer scientists and policy-makers treat bot-generated traffic as a nuisance to be detected and managed. System administrators at companies like Twitter work simply to shut down accounts that appear to be running via automatic scripts. These approaches are too simplistic, and they overlook the larger, systemic problems presented by political bot software. Political bots suppress free expression and civic innovation through the demobilization of activist groups and the suffocation of democratic free speech. They subtly work to manipulate public opinion by giving false impressions of candidate popularity, regime strength, and international relations. The disruption to public life caused by political bots is amplified by innovations in parallel computation and algorithm construction.
Political bots must, therefore, be better understood for the sake of free speech and the future of digitally mediated civic engagement. The information that exists on political bots is disjointed and often isolated to specific, country- or election-oriented events. This paper helps to comparatively plot out the evolutionary trajectory of this new medium, which is of interest in the fields of computer-mediated communication, political communication, information science, science, technology, and society studies (STS), and computer science.
Many studies in social science map the relationship between contemporary politics and new and evolving technologies by analyzing media reports about the events and tools in question (Earl, et al., 2004; Krippendorff, 2004; Edwards, et al., 2013; Strange, et al., 2013). This paper takes up this method, conducting a content analysis of credible news articles on political bot usage in order to construct a global event dataset that codes for political bot location, proliferation, and strategy. From this information, a working description of both political bot use and state-specific tactics is presented.
Literature review

The conceptual foundation of this project lies at the convergence of recent literature from two subfields of communication: political communication and science, technology, and society studies. Work on tracking and unmasking social bots, mostly from computer and information science researchers, is also of concern for this project.
The ideas at the center of such research are important to work on political bots insofar as they cover arguments about how the networked sphere’s political potential, and the affordances of the technology and algorithms associated with the Internet, have been realized on a scale running from democratically liberating to civically repressive. This said, relatively little academic work — especially empirical, qualitative research focusing on critical socio-political considerations — has been done on social bots within either communication or the social scientific fields writ large.
Scholars of media have been concerned with the study of propaganda at least since the seminal work of Lippmann (1922) and Lasswell (1938). This research has continued in disciplines concerned with politics and communication and has been extended to the spread of propaganda via digital media and the Internet (Kalathil and Boas, 2001, 2003; Hood and Margetts, 2007). The study of political bots picks up this line of research in order to ascertain the ways in which this particular online, and automated, social technology is used in efforts to promote state-sponsored messages worldwide. This research follows a contemporary thread of media studies scholarship focused on the influence of algorithms on news consumption, political awareness, and cultural understanding (Gillespie, 2014; Gillespie, et al., 2014).
Many scholars have explored the idea that digital media have the potential to afford significant changes to political landscapes both local and global (Howard and Hussain, 2013; Joyce, et al., 2013). Contemporary communication scholarship has taken up the idea that the networked sphere supports a new type of mediated communication, the many-to-many model of ‘new’ media. Developments in the study of the Web have, however, led scholars away from the cyber-utopian sentiment that dominated the 1990s and early 2000s and towards an approach that considers how particular uses of technology and media systems can concurrently bolster democratic liberty, promote authoritarian control, and support positions of power and freedom in between (Turner, 2006).
Benkler (2006) argues that the networked sphere presents a previously unavailable space for decentralized individual action, but he does not discuss the dangerous political and economic potentials of digital bot technology. He suggests, however, that “it is important to understand that any consideration of the democratizing effects of the Internet must measure its effects as compared to the commercial, mass-media-based public sphere, not as compared to an idealized utopia that we embraced a decade ago of how the Internet might be.” [1] In other words, the commercial sphere has very real effects upon the Internet and vice versa. Bots are one example of a technology that arose out of commercial enterprise, primarily spam marketing, and has since been harnessed for socio-political manipulation.
Research has demonstrated that the liberating affordances of the Internet are more bounded by traditional power actors, such as states and corporate firms, than early observers suggested (Hindman, 2008). In fact, some argue that widespread political normalization has occurred online — that a small number of powerful elites control online politics just as they do off-line (Wright, 2012). Indeed, states that exercised firm control over Internet development from the beginning have proven successful in controlling discourse on the Web (Kalathil and Boas, 2003). Political bots are the latest form of digital technology to be harnessed by states and other powerful political actors — including militaries, intelligence services, politicians, and government contractors — in attempts to exert control over the public and one another. The fact that bot technology uses sophisticated, automated, and parallel computational power is particularly concerning. Such capabilities afford the ability to massively disrupt online political conversations and to rapidly gather and parse information about citizens.
Specific arguments regarding how political actors use political bots to influence public opinion online are central to understanding the increasingly pervasive role of the political bot. Computer science researchers have noted that social bots are effective in attacking, hijacking, and altering discourse on social networking sites (Wagner, et al., 2012). Mustafaraj and Metaxas (2010) effectively outline an exemplary, U.S.-based case of this sort of propaganda dissemination. Their research analyzes the role of social bots, or ‘sock puppets’, in spreading biased and flawed political information during the 2010 Brown and Coakley Massachusetts Senate race. They found that Astroturf political groups [2] that supported Brown used bots to carry out significant attacks on the Coakley campaign over social media.
Ratkiewicz, et al. (2011a; 2011b) have done notable work revealing how such ‘Astroturf’ movements are propagated through the use of automated software programs. Their work has been instrumental in revealing the ways in which online conversation is interrupted and manipulated by abusive, computationally enabled and driven, behavior. The group’s system, “Truthy”, works to uncover computational propaganda — in the form of social media misinformation — via analysis of online content. Ferrara, et al. (2014) outline the ways new iterations of social bots are used and note that bots can be, and are, used in efforts to deceive and defame human users. They suggest that modern social bots are increasingly sophisticated and that pernicious use of them threatens social spheres both on and off-line.
Scholars of information science have mapped the use of political bots in the ongoing Syrian conflict and argue that this case suggests the technology is becoming more effective at targeting victims and at mimicking believable human behavior (Abokhodair, et al., 2015). Social network analyses have revealed the Russian government’s massive ‘troll army,’ a huge collection of bots used to spread disinformation both within Russia and abroad (Alexander, 2015; Borthwick, 2015). Research by Forelle, et al. (2015) reveals that bots have a small but notable hand in shaping social media conversations on politics in Venezuela. Verkamp and Gupta (2014) examine several incidents of spam diffusion during moments of online activism and determine that political actors used bots in these circumstances in order to drown out dissent.
Research questions

How are social bots deployed by powerful political elites during global political events, especially elections?
What sorts of government actors deploy political bots and why?
What particular political strategies are used in bot-driven manipulation campaigns?
Methodology

This project employs qualitative content analysis of a corpus of English-language news articles on political bot usage. This method — gathering available media reports and subjecting them to qualitative content analysis in order to build understandings of new and evolving political technology — is well established in the social sciences and is particularly useful in generating socially specific understandings of political action, media innovation, and digital control (Herring, 2009; Strange, et al., 2013).
LexisNexis and the three largest search engines — Google, Yahoo, and Bing — were used in combination to gather and build a sample of all available news-oriented articles on political bots. The searches were conducted using purposive combinations of the following words and phrases related to the subject at hand: automation, algorithm, bot, Facebook, fake social media account, government, intelligent agent, persona management software, politics, propaganda, Reddit, social bot, social media, sockpuppet, troll, Tumblr, Twitter, Twitter bomb, Twitter bot.
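As one illustration of how such purposive combinations might be assembled into query strings, consider the sketch below. Pairing every platform term with every tactic term is an assumption made for the example; the paper does not specify the exact combinations used.

```python
from itertools import product

# Hypothetical reconstruction of purposive query building: pair platform
# terms with tactic terms drawn from the keyword list above.
platforms = ["Twitter", "Facebook", "Reddit", "Tumblr", "social media"]
tactics = ["bot", "social bot", "sockpuppet", "Twitter bomb",
           "fake social media account", "propaganda"]

queries = ['"{}" "{}"'.format(p, t) for p, t in product(platforms, tactics)]

for q in queries[:3]:
    print(q)  # e.g., "Twitter" "bot"
```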
The construction of this dataset is modeled on a two-part study conducted by Joyce, et al. (2013) and Edwards, et al. (2013), known as the “Global Digital Activism Dataset Project.” That research initiative produced data for Howard and Hussain (2013). The present project likewise uses purposive sampling and qualitative content analysis to build a coded spreadsheet of specific variables that occur in news articles about bot deployments during elections and political crises.
A sample of 41 news articles made up the final dataset. This sample was drawn from a corpus of 58 articles, from 44 unique sources, that were coded for credibility. Because the use of political bots is an emergent phenomenon, the number of English-language articles available on the subject was small, leading to a smaller sample than is usual for traditional content analysis. That said, the articles available were thoroughly vetted for media bias and were rich in content. This report is similar methodologically to the work of Joyce, et al. (2013), here substituting political bot campaigns for digital activism:
Unlike many content analyses, in which unit of analysis and unit of observation are one in the same, in this study the two are different. The unit of analysis is the digital activism [here, political bot] campaign while the unit of observation is the news report about that campaign. This means that we studied our digital [political bot] campaigns indirectly. Media bias is a legitimate concern that was mitigated by relying on a variety of news outlets and on amateur as well as professional sources, but could not be completely nullified. Using this method means that we can only say that there is or is not evidence of a particular finding in the sources that were reviewed.

The final group of articles was then coded for identifiers, treating media reports as cases in the strain of content analysis research used by Herring (2009) and continued by Edwards, et al. (2013). Because of the nature of political bots as an emergent phenomenon, the sample was treated as theoretical. This means it was purposeful and exploratory in nature.
Cases were purposively, as opposed to randomly, coded by the author of this paper for this trial exercise in order to develop a working understanding of political bots. This is by no means intended to be a definitive or picture-perfect exercise. The goal, rather, is to establish early patterns of political bot usage for a broader research endeavor on this phenomenon. The Computational Propaganda Research Team at the University of Washington will use this working evidence as a seed for a forthcoming, more sophisticated, larger, triple-coded content analysis examining over 100 cases of political bot use. Here, codes include political bot production history, capability, and use; cases are then coded by country and year of occurrence. This report classifies the distinct ways particular governments have deployed bots on social media sites during times of political crisis or election.
Articles were given credibility scores of one, two, or three — one being most credible and three least credible. This ranking system was also drawn from the methods of a similar study, on global digital activism, completed by the Digital Activism Research Project (Joyce, et al., 2013; Edwards, et al., 2013). Articles ranked with a score of one came from major English-language news outlets including ABC, Al Jazeera, Associated Press, BBC News, Forbes, Guardian, Los Angeles Times, NBC, New York Times, ProPublica, Sydney Morning Herald, Times, Wall Street Journal, Washington Post, and Wired. Those ranked at two came from smaller, more commentary-oriented, English-language news sites and prominent blogs, generally those maintained by journalists, academics, or security experts. These articles were found at the Atlantic, Dailydot.com, Gizmodo.com, GlobalVoices.org, InsideCroydon.com, KrebsonSecurity.com, Mashable.com, MIT Technology Review, Motherboard.Vice.com, New Yorker, Slate.com, ThinkProgress.org, and theVerge.com. Those ranked at three came from social media mentions — mostly from Facebook, Reddit, and Twitter — as well as from content farms, clickbait pages, and personal or partisan blogs. Only articles ranked at one or two were incorporated into the final sample of articles on political bots.
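A minimal sketch of the filtering step this ranking implies, keeping only sources scored one or two, might look as follows; the article records are invented placeholders.

```python
# Hypothetical records: (outlet, credibility score); 1 = most credible.
articles = [
    ("BBC News", 1),
    ("KrebsonSecurity.com", 2),
    ("anonymous partisan blog", 3),  # excluded below
]

# Only articles ranked at one or two enter the final sample.
final_sample = [(outlet, score) for outlet, score in articles if score <= 2]
print(final_sample)  # [('BBC News', 1), ('KrebsonSecurity.com', 2)]
```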
In a concerted effort to avoid selection and description bias, and in order to understand the potential for specific instances of media bias, this article adheres to the work of Earl, et al. (2004) and Joyce, et al. (2013) in choosing a corpus of articles coded as both professional and amateur media reports. To strengthen the results garnered from searches, established techniques of online content analyses are employed to account for coding and accessing media examined via the Web (Herring, 2009). Specifically, this research makes use of techniques for coding articles from blogs and uses relevant hyperlinks within these credible sources as referents for further study (Joyce, et al., 2013).
Findings and analysis

In presenting the clearest instances of political bot usage, it was useful to sort cases by country. Table 1, below, uses analytical conclusions drawn from media accounts to list selected instances wherein governments or other powerful political actors are alleged to have used bots. The table notes pertinent political details about each country.
Table 1 identifies unique instances of political bot usage by country. A unique instance is political bot use in a particular country for which two or more credible hard news articles on the country-specific bot usage were available. Column one identifies the country of bot usage, in alphabetical order; column two, the year in which bots were used. Column three identifies the type of government in each country on the Polity IV scale, which runs from fully autocratic (−10) to fully democratic (10); this data comes from the Polity IV state authority study (2012). Column four notes the actor believed to have deployed the political bots. Column five notes the sources from which the information came. This table serves as a general display of the comparable attributes of country-specific bot usage.
There are some instances that bear additional explanation. Chinese bot usage is focused only on cases of Chinese political bots being used in and outside of China by Chinese government actors. The Tibetan instance focuses on Chinese bots being used in the Tibetan conflict. It is also worth noting that Russia and Venezuela are listed as in between democratic and autocratic in column three due to their midrange Polity IV scores.
Table 1: Selected incidents of political bot usage, by country.

Country | Year of bot usage | Polity IV score | Suspected deployer | Source
Argentina | 2012 | 8 | State | Rueda, 2012
Australia | 2013 | 10 | State | Peel, 2013
Azerbaijan | 2012 | −8 | State | Pearce, 2013
Bahrain | 2011 | −8 | State, outsourced to firm | York, 2011
China | 2012 | −8 | State | Krebs, 2011
Iran | 2011 | −6 | State, outsourced to firm | York, 2011
Italy | 2012 | 10 | Politician | Vogt, 2012
Mexico | 2011 | 8 | Political parties | Herrera, 2012
Morocco | 2011 | −6 | State, outsourced to firm | York, 2011
Russia | 2011 | 4 | State | Krebs, 2011
Saudi Arabia | 2013 | −10 | State | Freedom House, 2013
South Korea | 2012 | 8 | State | Sang-Hun, 2013
Syria | 2011 | −8 | State, outsourced to firm | York, 2011
Tibet | 2012 | −8 | State | Krebs, 2011
Turkey | 2014 | 9 | State | Poyrazlar, 2014
United Kingdom | 2012 | 10 | State | Downes, 2012
United States | 2011 | 10 | State, outsourced to firm | Coldewey, 2012
Venezuela | 2012 | 2 | State | Shields, 2013
Authors report in a notably consistent way on how political bots were used from country to country. Governments and other political actors most often deployed political bots during elections or moments of distinct, country-specific political conversation or crisis. It is worth noting that some articles also spoke of instances in which political bots were used for preemptive online security purposes. The Syrian government, for example, has reportedly used bots to generate pro-regime propaganda aimed at both domestic and foreign targets on Twitter during the ongoing revolution (Abokhodair, et al., 2015). The Venezuelan political bots described were focused solely on attempts to manipulate public opinion within the state (Forelle, et al., 2015). Several journalists reported that politicians in Australia, Italy, the U.K., and the U.S. bought fake, bot-driven social media followers in attempts to seem more popular to constituents.
The distinct ways in which political bots have been used vary from country to country and from political instance to political instance. During elections, political bots have been used to demobilize an opposing party’s followers. In this case, the deployer sends out Twitter “bombs”: barrages of tweets from a multitude of bot-driven accounts. These bots co-opt hashtags commonly used by supporters of the opposing party and re-tweet them thousands of times in an attempt to prevent organization amongst detractors. For instance, if a political actor notices that an opponent’s supporters consistently use the tag #freedomofspeech in organizational messages, then that actor might build an army of bots to prolifically re-tweet this specific tag. The effect is that the opponent’s supporters have a very difficult time searching common tags in attempts to organize and communicate with their fellows.
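The mechanics can be made concrete from the analyst’s side. The sketch below assumes a simple list of (account, text) records rather than any real Twitter data, and scores a hashtag as flooded when a single canned message dominates its traffic.

```python
from collections import Counter

# Hypothetical records: (account, tweet text). In a Twitter bomb, many
# bot accounts repeat the co-opted tag alongside junk text.
tweets = [
    ("bot0017", "#freedomofspeech buy followers now"),
    ("bot0018", "#freedomofspeech buy followers now"),
    ("bot0019", "#freedomofspeech buy followers now"),
    ("activist_a", "#freedomofspeech march at noon, city hall"),
]

def flood_score(records, tag):
    """Share of a tag's traffic owed to its single most repeated message."""
    texts = [text for _, text in records if tag in text]
    if not texts:
        return 0.0
    most_repeated = Counter(texts).most_common(1)[0][1]
    return most_repeated / len(texts)

# A score near 1.0 means one canned message dominates the tag, which is
# exactly what makes the tag useless for organizers.
print(flood_score(tweets, "#freedomofspeech"))  # 0.75
```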
Many cases of political bot use occur when governments target perceived cyber-security threats or political-cultural threats from other states. Several articles mention state-sanctioned Russian bot deployment. In these articles, Russian bots were allegedly used to promote regime ideals or combat anti-regime speech against targets abroad. Alexander (2015) and Borthwick (2015) outline examples with particular attention to networks and target detail. Chinese political bots have attacked various other countries and commercial entities throughout Asia and the West (Krebs, 2011). Political actors in Azerbaijan, Iran, and Morocco reportedly used bots in attempts to combat anti-regime speech and promote the ideals of the state (York, 2011; Pearce, 2013).
Governments, politicians, and contractors in their employ also use political bots to attack in-state targets on social media. Descriptions of bot usage in Mexico are particularly representative of this automated strategy. According to numerous sources, the Mexican government has used Twitter bot armies to stifle public dissent and effectively silence opposition through spam tactics. Peñabots, named after Mexican President Enrique Peña Nieto, have also been used to send out pro-government propaganda. In Turkey, journalists report that both President Recep Tayyip Erdogan’s government and actors from the opposition Republican People’s Party have used political bots against one another in efforts both to spread propaganda and to fight criticism. In China, and in the contested regions of Tibet and Taiwan, bots have been used to quash sovereignty movements while promoting state ideals. According to Krebs, “Tibetan sympathizers [...] noticed that several Twitter hashtags related to the conflict — including #tibet and #freetibet — are now so constantly inundated with junk tweets from apparently automated Twitter accounts that the hashtags have ceased to become a useful way to track the conflict.”
Political bots have been used during elections to send out pro-government or pro-candidate microblog messages. A New York Times article (Sang-Hun, 2013) points to South Korean state prosecutors’ allegations that “agents from the National Intelligence Service of South Korea posted more than 1.2 million Twitter messages last year to try to sway public opinion in favor of Park Geun-hye, then a presidential candidate, and her party ahead of elections in 2012.” Park eventually won the presidency, but the intelligence chief in charge of the bot-driven effort was jailed.
Political bots have also been used during elections to pad politicians’ social media follower lists. In this case, politicians buy bot followers — which mimic real human users — in attempts to look more popular or relevant. There are several prominent examples, particularly in Western states. According to Downes (2012), U.K. political candidate Lee Jasper used bots to boost the number of his Twitter followers in order “to give a false impression of the popularity of his campaign.” Coldewey (2012) details a similar bid by former U.S. presidential candidate Mitt Romney in which political bots were used for social media follower padding. According to Coldewey, “[in] over 24 hours starting July 21, the presumptive Republican nominee acquired nearly 117,000 followers — an increase of about 17 percent.” This rapid, massive rise in supporters was immediately noted by bloggers. Opponents attributed the boost to bots deployed by campaign-oriented reputation management or marketing firms. Supporters of the Romney campaign said the bot-driven inflation came from detractors in a bid to discredit the candidate.
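A follower-padding episode of this kind leaves a simple statistical fingerprint: an abrupt day-over-day jump in follower counts. The sketch below flags such jumps; the daily counts are hypothetical, with the final jump sized to mirror the roughly 17 percent single-day rise reported in the Romney case.

```python
# Daily follower counts for a hypothetical account; the last jump mirrors
# the roughly 17 percent one-day rise reported by Coldewey (2012).
counts = [680_000, 681_200, 682_100, 683_000, 800_000]

def flag_spikes(series, threshold=0.05):
    """Yield (day, growth) where day-over-day growth exceeds the threshold."""
    for day in range(1, len(series)):
        growth = (series[day] - series[day - 1]) / series[day - 1]
        if growth > threshold:
            yield day, growth

for day, growth in flag_spikes(counts):
    print(f"day {day}: +{growth:.1%}")  # day 4: +17.1%
```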
The ways political bots have been used in other instances of civil disobedience and security crises are strikingly similar to the ways they have been used during elections. York’s (2011) Guardian article notes that governments targeted by Arab Spring protests used political bots in combinations of the previously mentioned ways. Not only did governments in Syria, Bahrain, Iran, and Morocco use bots to prevent organization by Twitter-bombing the opposition with spam, they also used them to send out masses of pro-government tweets.
Table 2 identifies the situations and ways in which political bots have been used in each country detailed. Column one identifies the country where online articles suggest political bots were used. Column two identifies the situation in which the political bots were used. The situations here are loosely categorized into elections, protest, security, and politician support. Political actors in countries including China, Russia, and Turkey have used political bots for more than one purpose or with more than one strategy. Column three lists whether political bots were used to demobilize or silence political opposition. Column four identifies whether bots were used to send out propaganda: pro-government tweets, etc. Column five lists whether bots were used to pad the number of social media followers.
Table 2: Instances and ways political bots have been used.

Country | Instance bots used | Demobilization | Pro-government messages | Follower number padding
Argentina | Politician support/protest | X | X |
Australia | Election | | | X
Azerbaijan | Protest | X | |
Bahrain | Protest | X | X |
China | Election/protest/security | X | X |
Iran | Protest | X | X |
Italy | Politician support | | | X
Mexico | Election | X | X | X
Morocco | Protest | X | X |
Russia | Election/protest/security | X | X |
South Korea | Election | X | X |
Syria | Protest | X | X |
Tibet | Protest | X | |
Turkey | Political support/protest | X | X | X
United Kingdom | Election | | | X
United States | Election/security | | | X
Venezuela | Election/protest | X | X |
An interrogation of this table suggests that government actors in countries with a longer history of democracy — Australia, Italy, the U.S., and the U.K. — are more likely to use bots exclusively for social media follower padding. Countries that Polity rates as mostly democratic, such as Argentina, Mexico, or South Korea, host actors that also use bots to demobilize opposition and to spread pro-government or pro-candidate messages. Actors in countries ranked as more authoritarian (Russia, China, and Venezuela) also engage in this type of political bot usage. Firmly authoritarian countries (Azerbaijan, Bahrain, and Saudi Arabia) tend not to use political bots for social media follower padding. Actors within, or related to, these governments tend to use bots to send out pro-government messages and to demobilize opposition.
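This gradient can be read directly off a machine-readable version of Tables 1 and 2. The sketch below encodes representative rows, with the usage letters inferred from Table 2 and the surrounding text, and sorts them by Polity score so the pattern is visible.

```python
# Representative rows from Tables 1 and 2: (country, Polity IV score,
# observed uses). D = demobilization of opposition, P = pro-government
# messaging, F = follower padding. Usage letters inferred from Table 2.
rows = [
    ("Australia", 10, "F"), ("United Kingdom", 10, "F"),
    ("United States", 10, "F"), ("Italy", 10, "F"),
    ("Mexico", 8, "DPF"), ("South Korea", 8, "DP"),
    ("Russia", 4, "DP"), ("Venezuela", 2, "DP"),
    ("Iran", -6, "DP"), ("Bahrain", -8, "DP"),
]

# Sorting by Polity score makes the gradient visible: follower padding
# thins out, and demobilization thickens, as scores fall.
for country, score, uses in sorted(rows, key=lambda r: -r[1]):
    print(f"{score:>4}  {country:<15} {uses}")
```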
Conclusion

This research project demonstrates how media articles frame the ways in which bots impact the social systems, and particular countries, in which they are deployed. It details how, according to news accounts, computational propaganda proliferated by political actors using political bots enables control globally. The ways in which particular state-oriented political actors make use of political bots are explored herein.
There are many potential avenues for continued research in this arena. Further study might examine how certain cases of political bot usage in one country have affected implementation and usage in other countries. Another project could lie in building a model to predict bot usage in upcoming international elections. Each year sees numerous moderately or highly contested international elections, several of which take place in authoritarian regimes and emerging democracies. It would be useful to work towards predicting political bot usage in these upcoming elections and to determine what impact such use has on electoral outcomes. The study of political bots is, undoubtedly, a rich and necessary area for continued academic research.
About the author
Samuel C. Woolley is a Ph.D. student in the Department of Communication at the University of Washington (UW). His research is primarily concerned with digital media, politics, and culture. He is the project manager of the National Science Foundation-supported ‘Political Bots Research Project’ at UW and project manager of the European Research Council-supported ‘Computational Propaganda Research Project’ at the Oxford Internet Institute. He is a researcher on the Digital Activism Research Project and the research and undergraduate learning community coordinator at the Center for Communication and Civic Engagement.
E-mail: samwooll [at] uw [dot] edu
Notes
1. Benkler, 2006, p. 10.
2. “Astroturfing is the practice of masking the sponsors of a message or organization (e.g., political, advertising, religious or public relations) to make it appear as though it originates from and is supported by grassroots participant(s).” From “Astroturfing,” at https://en.wikipedia.org/wiki/Astroturfing, accessed 8 March 2016.
References
N. Abokhodair, D. Yoo, and D.W. McDonald, 2015. “Dissecting a social botnet: Growth, content and influence in Twitter,” CSCW ’15: Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 839–851.
doi: http://dx.doi.org/10.1145/2675133.2675208, accessed 8 March 2016.
L. Alexander, 2015. “Social network analysis reveals Kremlin’s role in Twitter bot campaign,” Global Voices (2 April), at https://globalvoicesonline.org/2015/04/02/analyzing-kremlin-twitter-bots/, accessed 4 June 2015.
Y. Benkler, 2006. The wealth of networks: How social production transforms markets and freedom. New Haven, Conn.: Yale University Press.
J. Borthwick, 2015. “Media hacking,” Medium (7 March), at https://medium.com/in-beta/media-hacking-3b1e350d619c#.i1abxxm7d, accessed 28 August 2015.
Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu, 2011. “The socialbot network: When bots socialize for fame and money,” ACSAC ’11: Proceedings of the 27th Annual Computer Security Applications Conference, pp. 93–102.
doi: http://dx.doi.org/10.1145/2076732.2076746, accessed 8 March 2016.
Z. Chu, S. Gianvecchio, H. Wang, and S. Jajodia, 2012. “Detecting automation of Twitter accounts: Are you a human, bot, or cyborg?” IEEE Transactions on Dependable and Secure Computing, volume 9, number 6, pp. 811–824.
doi: http://dx.doi.org/10.1109/TDSC.2012.75, accessed 8 March 2016.
Z. Chu, S. Gianvecchio, H. Wang, and S. Jajodia, 2010. “Who is tweeting on Twitter: Human, bot, or cyborg?” ACSAC ’10: Proceedings of the 26th Annual Computer Security Applications Conference, pp. 21–30.
doi: http://dx.doi.org/10.1145/1920261.1920265, accessed 8 March 2016.
D. Coldewey, 2012. “Romney Twitter account gets upsurge in fake followers, but from where?” NBC News (8 August), at http://www.nbcnews.com, accessed 8 March 2016.
D. Cook, B. Waugh, M. Abdipanah, O. Hashemi, and S.A. Rahman, 2014. “Twitter deception and influence: Issues of identity, slacktivism, and puppetry,” Journal of Information Warfare, volume 13, number 1, at https://www.jinfowar.com/twitter-deception-influence-issues-identity-slacktivism-puppetry/, accessed 8 March 2016.
K.C. Desouza, 2001. “Intelligent agents for competitive intelligence: Survey of applications,” Competitive Intelligence Review, volume 12, number 4, pp. 57–63.
doi: http://dx.doi.org/10.1002/cir.1032, accessed 8 March 2016.
S. Downes, 2012. “Jasper admits to using Twitter bots to drive election bid,” Inside Croydon (26 November), at http://insidecroydon.com/2012/11/26/jasper-admits-to-using-twitter-bots-to-drive-election-bid/, accessed 27 October 2014.
R. Dubbin, 2013. “The rise of the Twitter bots,” New Yorker (14 November), at http://www.newyorker.com/tech/elements/the-rise-of-twitter-bots, accessed 20 May 2015.
J. Earl, A. Martin, J.D. McCarthy, and S.A. Soule, 2004. “The use of newspaper data in the study of collective action,” Annual Review of Sociology, volume 30, pp. 65–80.
doi: http://dx.doi.org/10.1146/annurev.soc.30.012703.110603, accessed 8 March 2016.
F. Edwards, P.N. Howard, and M. Joyce, 2013. “Digital activism and non–violent conflict,” Digital Activism Research Project, University of Washington, at http://digital-activism.org/2013/11/report-on-digital-activism-and-non-violent-conflict/, accessed 8 March 2016.
E. Ferrara, O. Varol, C. Davis, F. Menczer, and A. Flammini, 2014. “The rise of social bots,” arXiv (19 July), at http://arxiv.org/abs/1407.5225, accessed 8 March 2016.
M. Forelle, P. Howard, A. Monroy-Hernández, and S. Savage, 2015. “Political bots and the manipulation of public opinion in Venezuela,” arXiv (25 July), at http://arxiv.org/abs/1507.07109, accessed 8 March 2016.
Freedom House, 2013. “Freedom on the Net 2013: Saudi Arabia,” at http://www.freedomhouse.org/report/freedom-net/2013/saudi-arabia, accessed 27 October 2014.
T. Gillespie, 2014. “The relevance of algorithms,” In: T. Gillespie, P.J. Boczkowski, and K.A. Foot (editors). Media technologies: Essays on communication, materiality, and society. Cambridge, Mass.: MIT Press, pp. 167–194.
T. Gillespie, P.J. Boczkowski, and K.A. Foot (editors), 2014. Media technologies: Essays on communication, materiality, and society. Cambridge, Mass.: MIT Press.
S.C. Herring, 2009. “Web content analysis: Expanding the paradigm,” In: J. Hunsinger, L. Klastrup, and M. Allen (editors). International handbook of Internet research. Berlin: Springer, pp. 233–249.
doi: http://dx.doi.org/10.1007/978-1-4020-9789-8_14, accessed 8 March 2016.
M. Hindman, 2009. The myth of digital democracy. Princeton, N.J.: Princeton University Press.
C.C. Hood and H.Z. Margetts, 2007. The tools of government in the digital age. Basingstoke: Palgrave Macmillan.
P.N. Howard and M.M. Hussain, 2013. Democracy’s fourth wave? Digital media and the Arab Spring. New York: Oxford University Press.
T. Hwang, I. Pearce, and M. Nanis, 2012. “Socialbots: Voices from the fronts,” Interactions, volume 19, number 2, pp. 38–45.
doi: http://dx.doi.org/10.1145/2090150.2090161, accessed 8 March 2016.
M. Joyce, A. Rosas, and P.N. Howard, 2013. “Global digital activism data set, 2013,” Ann Arbor, Mich.: Inter-university Consortium for Political and Social Research, at http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/34625/version/2, accessed 4 August 2015.
doi: http://doi.org/10.3886/ICPSR34625.v2, accessed 8 March 2016.
S. Kalathil and T. Boas, 2003. Open networks, closed regimes: The impact of the Internet on authoritarian rule. Washington, D.C.: Carnegie Endowment for International Peace.
S. Kalathil and T.C. Boas, 2001. “The Internet and state control in authoritarian regimes: China, Cuba and the counterrevolution,” First Monday, volume 6, number 8, at http://firstmonday.org/article/view/876/785, accessed 8 March 2016.
B. Krebs, 2011. “Twitter bots drown out anti-Kremlin tweets,” Krebs on Security (11 December), at http://krebsonsecurity.com/2011/12/twitter-bots-drown-out-anti-kremlin-tweets/, accessed 27 October 2014.
K. Krippendorff, 2004. Content analysis: An introduction to its methodology. Second edition. Thousand Oaks, Calif.: Sage.
H.D. Lasswell, 1938. Propaganda technique in the World War. New York: P. Smith.
W. Lippmann, 1922. Public opinion. New York: Harcourt, Brace.
P.T. Metaxas and E. Mustafaraj, 2012. “Social media and the elections,” Science, volume 338, number 6106 (26 October), pp. 472–473.
doi: http://doi.org/10.1126/science.1230456, accessed 8 March 2016.
E. Mustafaraj and P.T. Metaxas, 2010. “From obscurity to prominence in minutes: Political speech and real-time search,” Proceedings of the WebSci10, at http://journal.webscience.org/317/, accessed 8 March 2016.
M. Orcutt, 2012. “Twitter mischief plagues Mexico’s election,” MIT Technology Review (21 June), at http://www.technologyreview.com/news/428286/twitter-mischief-plagues-mexicos-election/, accessed 5 May 2015.
K. Pearce, 2013. “Adventures in research,” at http://www.katypearce.net, accessed 27 October 2014.
T. Peel, 2013. “The Coalition’s Twitter fraud and deception,” Independent Australia (26 August), at http://www.independentaustralia.net/politics/politics-display/the-coalitionstwitterfraud-and-deception,5660, accessed 27 October 2014.
E. Poyrazlar, 2014. “Turkey’s leader bans his own Twitter bot army,” Vocativ (26 March), at http://www.vocativ.com/world/turkey-world/turkeys-leader-nearly-banned-twitter-bot-army/, accessed 4 June 2015.
J. Ratkiewicz, M.D. Conover, M. Meiss, B. Goncalves, A. Flammini, and F. Menczer, 2011a. “Detecting and tracking political abuse in social media,” Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, at http://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/view/2850, accessed 8 March 2016.
J. Ratkiewicz, M. Conover, M. Meiss, B. Goncalves, S. Patil, A. Flammini, and F. Menczer, 2011b. “Truthy: Mapping the spread of astroturf in microblog streams,” WWW ’11: Proceedings of the 20th International Conference Companion on World Wide Web, pp. 249–252.
doi: http://doi.org/10.1145/1963192.1963301, accessed 8 March 2016.
M. Rueda, 2012. “2012’s biggest social media blunders in LatAm politics,” ABC News (26 December), at http://abcnews.go.com/ABC_Univision/ABC_Univision/2012s-biggest-social-media-blunders-latin-american-politics/story?id=18063022, accessed 4 June 2015.
C. Sang-Hun, 2013. “Prosecutors detail attempt to sway South Korean election,” New York Times (21 November), at http://www.nytimes.com/2013/11/22/world/asia/prosecutors-detail-bid-to-sway-south-korean-election.html, accessed 27 October 2014.
A. Strange, B.C. Park, M.J. Tierney, A. Fuchs, A. Dreher, and V. Ramachandran, 2013. “China’s development finance to Africa: A media-based approach to data collection,” Center for Global Development Working Paper, number 323, at http://www.cgdev.org/publication/chinas-development-finance-africa-media-based-approach-data-collection, accessed 8 March 2016.
F. Turner, 2006. From counterculture to cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. Chicago: University of Chicago Press.
Twitter, Inc., 2014. “Form 10-Q: Quarterly report pursuant to Section 13 or 15(d) of the Securities Exchange Act of 1934,” at http://www.sec.gov/Archives/edgar/data/1418091/000156459014003474/twtr-10q_20140630.htm, accessed 13 August 2015.
I. Urbina, 2013. “I flirt, I tweet. Follow me at #Socialbot,” New York Times (10 August), at http://www.nytimes.com/2013/08/11/sunday-review/i-flirt-and-tweet-follow-me-at-socialbot.html, accessed 20 May 2015.
J.-P. Verkamp and M. Gupta, 2014. “Five incidents, one theme: Twitter spam as a weapon to drown voices of protest,” Third USENIX Workshop on Free and Open Communications on the Internet, at https://www.usenix.org/conference/foci13/workshop-program/presentation/verkamp, accessed 4 June 2015.
A. Vogt, 2012. “Hot or bot? Italian professor casts doubt on politician’s Twitter popularity,” Guardian (22 July), at http://www.theguardian.com/world/2012/jul/22/bot-italian-politician-twitter-grillo, accessed 4 June 2014.
C. Wagner, S. Mitter, C. Körner, and M. Strohmaier, 2012. “When social bots attack: Modeling susceptibility of users in online social networks,” Proceedings of the WWW ’12 Workshop on ‘Making sense of microposts,’ pp. 41–48, and at http://ceur-ws.org/Vol-838/#MSM2012, accessed 8 March 2016.
S. Wright, 2012. “Politics as usual? Revolution, normalization and a new agenda for online deliberation,” New Media & Society, volume 14, number 2, pp. 244–261.
doi: http://doi.org/10.1177/1461444811410679, accessed 8 March 2016.
J.C. York, 2011. “Syria’s Twitter spambots,” Guardian (21 April), at http://www.theguardian.com/commentisfree/2011/apr/21/syria-twitter-spambots-pro-revolution, accessed 13 July 2015.
I. Zeifman, 2014. “2014 bot traffic report: Just the droids you were looking for,” Incapsula (18 December), at https://www.incapsula.com/blog/bot-traffic-report-2014.html, accessed 13 August 2015.
Editorial history
Received 2 September 2015; revised 22 February 2016; revised 16 March 2016; accepted 18 March 2016.
“Automating power: Social bot interference in global politics” by Samuel Woolley is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Automating power: Social bot interference in global politics
by Samuel C. Woolley.
First Monday, Volume 21, Number 4 - 4 April 2016
https://firstmonday.org/ojs/index.php/fm/article/download/6161/5300
doi: http://dx.doi.org/10.5210/fm.v21i4.6161