This paper examines social media ads purchased by the Russian Internet Research Agency (IRA). Using a public dataset, we employ a mixed method to analyze the thematic and strategic patterns of these ads. Through Facebook and Instagram’s promotional features, the IRA managed to microtarget audiences, mostly located in the United States, fitting its messages to the audiences’ political, racial, gender, and in some cases religious backgrounds. The findings reveal the divisive nature of the topics dominant in the dataset, including race, immigration, and police brutality. By expanding on the theoretical conceptualization of astroturfing, a strategy that carefully conceals the identity and intention of the actors behind social media activities, we argue that the IRA added political astroturfing as a powerful, low-cost tool to the broader repertoire of Russian geopolitical disinformation campaigns. The IRA exploited the business model of Facebook and Instagram in an attempt to further divide its targeted audiences, highlighting mostly negative issues with the likely goal of fuelling political rage.
This paper analyzes the social media ads from the viewpoint of ‘political astroturfing,’ the practice of masking the sponsors of a political message or event to make it appear to originate from, or be supported by, grassroots participants. A large number of news media articles, a vast amount of government-released public data and reports, and only a handful of academic studies are available to make sense of the trolling and disinformation activities of the Russian Internet Research Agency (IRA). While these sources are useful for informing the public about the IRA’s online activities, there is a need for a conceptual framework that can link the deceptive purpose behind the IRA’s exploitation of Western digital media platforms with the concealed, ground-based activities that enabled it to take advantage of two prominent social media platforms’ Pages, Events, and Messenger features to reach specific segments of the population by race, sex, religion, location, and political orientation. Indeed, previous reports on the IRA’s activities discussed general trends on Twitter and other social media outlets (see, for example, Badawy, et al., 2019). Our study, first, focuses solely on the analysis of all the publicly available Facebook and Instagram ads purchased by the IRA in order to identify the patterns and major issues in which the organization was financially invested. Second, the study uses a unique mixed method that involves topic modelling and interpretative measures. Further, it expands on the theory of astroturfing by elaborating and applying the concept of political astroturfing to the IRA’s activities.
In early 2014, the (Russian) IRA, a St. Petersburg-based organization of trained and paid pro-government Internet trolls, often referred to as the “troll factory”, turned most of its operation towards the 2016 U.S. presidential election as part of state-sponsored disinformation warfare (Zannettou, et al., 2019). Its strategies include planting and spreading deceptive information along ideological fault lines, e.g., racial, ethnic, class, and religious divisions; wrapping it up with some elements of truth; concealing the source of the lies; and denying any involvement when exposed. The IRA’s activities signify an increasing scale of operation and use of identity politics in promoting messages designed to influence political opinion from abroad. In this complex context, we depart from the deterministic debate over whether the Russians actually affected U.S. voters emotionally or influenced the voting result to a significant extent, and focus our attention on the patterns we identified in social media political advertising.
In 2013, the IRA employed about 600 people and had an estimated annual budget of US$10 million. Employees were tasked with posting comments on news articles 50 times a day, maintaining several Facebook accounts, and publishing at least three posts daily. Each employee was assigned specific targets and goals for the number of followers they needed to attract. Oxford University’s Project on Computational Propaganda revealed that the IRA operators, as well as right-wing media outlets in the U.S., used Twitter bots or automated accounts to systematically intervene in online debates and strategically microtarget groups and individuals with disinformation, aiming at swaying voters, sowing confusion, defaming the opposition, and spreading fake news (Woolley and Guilbeault, 2017). Disguised as concerned American citizens to enhance their political astroturfing activities, the IRA created 129 Facebook events in the United States between 2015 and 2017, sometimes bringing opposing groups together on the same day, as once happened in Texas due to agitative messages posted on the Facebook pages of “Heart of Texas” and “United Muslims of America”. Other IRA Facebook pages, such as “BlackMattersUS”, “Blacktivist”, and “Being Patriotic”, supported rallies and protests in 2016. Unlike the above strategy, the “United Muslims of America” Facebook group impersonated an actual organization, the United Muslims of America (UMA), a California-based non-profit that promotes interfaith dialogue and political participation (Guadagno and Guttieri, 2019; Etudo, et al., 2019).
Other studies, for instance Ribeiro, et al. (2019), focused on the effectiveness of the IRA’s targeting of socially divisive ads. Contributing further to this line of study, our work focuses on these deceptive political marketing strategies as political astroturfing and delineates deeper insights into the advertised themes and targeted groups. While astroturfing is a well-known tool among stealth marketers (i.e., in the advertising and public relations industries), we focus solely on its political use, a relatively underresearched area in communication, politics, and international relations. In doing so, the paper foregrounds a multimodal interpretation of political astroturfing that cuts across several disciplines. By separating political astroturfing from lobbying networks and legitimate movements, the paper highlights the stealth nature of the IRA’s microtargeting through social media. While the IRA’s disinformation activities spread well beyond Facebook to other sites like Twitter, Reddit, Instagram, 4chan, and Imgur (Zannettou, et al., 2019), this study pays particular attention to the Facebook and Instagram ads purchased by the IRA, as it allows us to critique the overt vulnerability of Facebook’s algorithmic governance of its ads on the one hand, and to analyze the IRA’s covert interest in identity-based microtargeting on the other. It also allows us to reach a broader conclusion: social media political advertising has become an integral tool of Russia’s wider geopolitical disinformation strategies, with major implications for political engagement in liberal democracies.
With the unfolding events surrounding the Cambridge Analytica scandal, the subsequent Facebook hearing at the U.S. Senate, and Robert Mueller’s investigation into Russian interference in the U.S. election, a growing number of reports scrutinize the IRA’s disinformation tactics, along with the IRA trolls’ deployment of advertisements, organic content, and microtargeted demographics across different platforms (see Howard, et al., 2018; New Knowledge, 2018; Timberg and Room, 2018). For instance, the report by Howard and his colleagues (2018) reveals that, compared with the limited number of ads, organic posting was the key activity that reached most users, originally starting with Twitter before leveraging Facebook and Instagram. While smaller in volume, we posit that the ads purchased by the IRA, albeit at a very low cost, were vital for promoting divisive issues as well as for multiplying and optimizing the multiplatform circulation of fake news and the accompanying organic content. In this paper, our task is to conceptualize these coordinated measures by the IRA through the lens of political astroturfing.
Over the past three decades, with the decline of traditional advertising and the astronomical rise of digital marketing, astroturfing has emerged as a global phenomenon. Originally coined by U.S. senator Lloyd Bentsen in a public statement in 1985 (Kim, 2013), astroturf is a derivation of AstroTurf, a brand of artificial carpet manufactured, designed, and installed to simulate natural grass. In practice, astroturfing refers to “a political, public relations, or advertising campaign that deceptively and artificially creates an impression of widespread grassroots support for a product, policy, or concept, when in actuality limited support exists”. At its core, astroturfing implies a narrative of something that is not what it seems. The key goal is to influence public policy and manipulate public opinion by fabricating a bottom-up campaign, thereby stimulating artificial demand for performing a real action. Astroturfing can be conceived, funded, and executed by corporations, trade associations, political interests, or public relations firms. For instance, Comcast, Monsanto, Microsoft, Walmart, Sony, and Belkin have all been caught astroturfing consumers online, generating fake positive reviews, hiding paid endorsements, and building fake consumer support bases to increase their profits and cultivate favourable public opinion (Goldschein, 2011; Leiser, 2016). An early example of Internet-based astroturfing was observed during the 2004 U.S. Senate elections, when political candidates used online campaign mechanisms to promote a form of “plagiarized participation”, which was small but significant (Klotz, 2007).
Of the two main types, commercial astroturfing is centered on the accumulation of profit, whereas political astroturfing is concerned with attaining political objectives by creating the false impression of grassroots consensus and spreading disinformation to undermine competitors’ campaigns. Modern astroturfing campaigns in the U.S. often involve a smart combination of analog and digital measures, e.g., analyzing voter and consumer lifestyle databases and organizing public opinion polling, focus groups, media events, and press conferences. They can also involve paying people “to fill rooms at town hall meetings”, funding “research studies supporting the effort”, and sending “astroturf letters” to newspaper editors. The digital strategies can match the analog measures, e.g., filling up online polls with ‘sock puppets’, sponsoring fake news and video campaigns, or ‘deep faking’. While political astroturfing works well as a means of spreading disinformation, it goes well beyond spreading fake news. It can include deliberate strategies of staging events, which requires recruiting or cultivating an unsuspecting support base, as well as staging real conflicts. A common ground between commercial and political astroturfing is that, unlike authentic grassroots campaigns, astroturfers use “front groups” to conceal the sponsor’s true interests and identities. The presence of front groups makes fabricated grassroots campaigns appear as independent movements in the public eye (Kim, 2013).
Despite some similarities with corporate or commercial astroturfing, which involves a variety of ‘stealth’ or ‘guerilla’ marketing techniques, political astroturfing can be subtler and more complex, making it difficult to quantify its outcomes. Political astroturfing might acquire the nature of unethical ‘lobbying’, in contrast to transparent, accountable, and socially responsible strategic communication practices (Lock, et al., 2016). Because of its deceptive and unethical nature, the practice of astroturfing is prohibited by the Public Relations Society of America (PRSA) code of ethics (Kim, 2013). Notwithstanding the prohibition, modern political astroturfing has become an increasingly profitable and sophisticated public relations tool and a genuine threat to deliberative democracy in the West, precisely because it can strategically deceive stakeholders by faking support from the bottom up. In this respect, Lock, et al. (2016) argue that political astroturfing fundamentally undermines democratic participation and transparency while seeming to do the opposite, stating that “rather than simulating grassroots, it [astroturfing] actually metaphorically ‘cuts’ the roots by upholding the positive image of authentic roots”.
While the above cases are helpful for understanding how astroturfing emerged and advanced within the U.S. market, they prove inadequate for addressing the international or cross-national phenomenon of astroturfing, which reflects the increasing geopolitical desire for influence between nation-states rather than within a nation-state. Zelenkauskaite and Niezgoda (2017) focus on the ideological dimension of astroturfing, in which the “Kremlin trolls” are often called out by users as paid astroturfers exerting pro-Russian influence in online political debates in European countries such as Lithuania. However, the influence of such trolls on public opinion can be questionable (Rožukalne and Sedlenieks, 2017). There can also be uncertainty about who the trolls really are and whether they are paid or unpaid. Given this ambivalent nature of trolling, the documentation of the IRA-paid Facebook posts (U.S. House of Representatives, 2018a, 2018b) presents a unique opportunity to analyze how astroturfing is executed on the international stage by a well-identified organization. The IRA, as an unofficial or semi-official proxy of the Russian government, took geopolitical advantage of the political divisions embedded within liberal democracies, as well as of the vulnerabilities of political campaigns that rely extensively on microtargeted advertising. In this case, the Russian trolls demonstrated the power of executing a large-scale, precision-guided, non-military operation on U.S. soil at a relatively low cost, with the privilege of deniability and possible immunity from international scrutiny. As expected, the IRA’s existence is “roundly denied” or disavowed by the Russian government. The IRA’s ability to conceal its identity and purpose from social media users was therefore crucial to its success in appearing genuine.
Finally, a significant number of IRA activities fit well with the practices of astroturfing: a) the social media pages and groups were falsely claimed to be run by U.S. activists; b) the IRA trolls used the stolen identities of real U.S. persons; c) the IRA harvested its Page audiences to develop human assets, some of whom were unaware of the role they played; d) the trolls procured and used computer infrastructure, based partly in the United States, to hide the Russian origin of their activities; and e) the IRA operators are also alleged to have committed bank wire fraud and identity theft, which were necessary to carry out their online operations, especially to pay for thousands of ads on social media. They even set up PayPal accounts to receive donations (U.S. Department of the Treasury, 2018; New Knowledge, 2018; Howard, et al., 2018).
Among this range of applications of astroturfing strategies, we narrow our focus to a specific set of IRA activities: political advertising on Facebook and Instagram, without which it would otherwise have been very difficult for the IRA to reach its target audiences. Notably, the Facebook advertising mechanism enabled these astroturfing strategists to bypass the uncertainty of user engagement and directly reach target groups based on their emotions, sentiments, and political attitudes, contributing to the users’ “emotional and affective feedback loop” (Boler and Davis, 2018); in return, the ads facilitated “digital wildfires and online forms of moral panic”. The targeted ads also allowed the IRA, just like any sponsor, to capitalize on Facebook’s “culture of likes” (Bucher and Helmond, 2018) while soliciting positive or negative emotional reactions, i.e., Like, Love, Haha, Wow, Sad, and Angry. In the next section, therefore, we attempt to answer the following research questions:
RQ1: What are the microtargeted elements in the IRA’s social media ads, and what is their strategic significance?
RQ2: What are the main messages and topics of the IRA’s ads?
The raw data used in this study comprise all 3,517 IRA files publicly shared by the U.S. House of Representatives, Permanent Select Committee on Intelligence (2018a, 2018b), containing details of all Facebook and Instagram ads paid for by the Russian Internet Research Agency. First, pdftotext, a command-line tool written in C++, was used to convert the PDFs into plain text files. A customized Perl script was then employed to parse the text files, identify the structured and unstructured metadata embedded in them, and extract all available information. The data was further cleaned by organizing categories into columns, and pivot tables were later generated with Microsoft Excel. By doing so, we were able to find patterns in the demographic groups targeted by the IRA, including age, sex, and location, as well as the timing of the ads. Second, NVivo 12 qualitative software was used to generate descriptive tables of the most frequent words found in the ads’ text and users’ interest categories (Driedger and Garvin, 2016; Thelwall, 2017; Wiedemann, 2016). In addition, some parts of the data were processed using Tableau Public, a visualization tool, to better present the study’s findings. Then, QDA Miner 5 with WordStat 8 was used to identify the main topics in the dataset. Topic modelling is a text mining technique that identifies topics in an unstructured corpus (Jacobi, et al., 2016) using algorithms built on machine learning. The QDA Miner software uses factor analysis, in which latent semantic concepts are examined via their eigenvalues, a method that has been used in topic retrieval since the 1960s (Borko and Bernick, 1963; Harway and Iker, 1964).
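The PDF-to-text and metadata-extraction steps described above can be approximated in a few lines of Python (standing in for the original Perl script). This is a minimal sketch: the field labels below (“Ad Text”, “Age”, and so on) are illustrative assumptions about the converted files’ layout, not the exact strings found in the released documents.

```python
import re

# Illustrative field labels; the actual labels in the released files may differ.
FIELDS = ["Ad Text", "Age", "Gender", "Location", "Ad Spend", "Creation Date"]

def parse_ad(text: str) -> dict:
    """Pull labelled 'Field: value' pairs out of one converted ad file."""
    record = {}
    labels = "|".join(re.escape(f) for f in FIELDS)
    for field in FIELDS:
        # Capture everything after "Field:" up to the next known label or EOF.
        pattern = rf"{re.escape(field)}:\s*(.*?)(?=\n(?:{labels}):|\Z)"
        m = re.search(pattern, text, flags=re.DOTALL)
        if m:
            # Collapse internal whitespace left over from PDF conversion.
            record[field] = " ".join(m.group(1).split())
    return record

sample = """Ad Text: Every man should stand for our borders! Join!
Age: 18 - 65+
Location: United States
Ad Spend: 92,711.37 RUB"""
record = parse_ad(sample)  # fields absent from the file are simply omitted
```

A record parsed this way maps directly onto the spreadsheet columns used for the pivot-table stage.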
Factor analysis (FA) accounts for every word in the corpus and indicates “the strength of the relationship of each word to each topic”, while the data “dimensionality reduction in FA is based on the idea that each word is the representation of a linear combination of underlying hidden variables (i.e., topics)” (Péladeau and Davoodi, 2018). This enabled us to identify the main topics of the ads that the IRA used for microtargeting. With QDA Miner, we also employed proximity plots, which statistically measure the strength of the connection between one word and another. For this we used the association coefficient, which “measures the co-occurrence of items taking into account the possibility that two items will sometimes co-occur by chance”. This association strength is also called the proximity index or probabilistic affinity index (van Eck, et al., 2010); the coefficient expresses the association between two words or terms as a range between 1 for a perfect correlation and 0 for no correlation. “The association strength of two concepts can be interpreted as the ratio between on the one hand the co-occurrence frequency of the concepts and on the other hand the expected co-occurrence frequency of the concepts obtained under the assumption that occurrences of the concepts are statistically independent”. Finally, a deeper assessment of the ads’ texts and images follows in order to provide more insight into the different aspects of the IRA’s astroturfing activities.
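The two quantities described above can be computed directly from occurrence counts. As a rough sketch (not QDA Miner’s internal implementation): a bounded 0–1 association coefficient, here in the Jaccard form, which is one common choice since the text does not specify the exact formula QDA Miner applies, and the van Eck, et al. affinity index as the ratio of observed to expected co-occurrence.

```python
def association_coefficient(cooccur: int, occur_i: int, occur_j: int) -> float:
    """Bounded 0-1 coefficient (Jaccard form, an assumption): co-occurrences
    divided by the number of documents containing at least one of the terms."""
    union = occur_i + occur_j - cooccur
    return cooccur / union if union else 0.0

def affinity_index(cooccur: int, occur_i: int, occur_j: int, n_docs: int) -> float:
    """Probabilistic affinity index: observed co-occurrence divided by the
    co-occurrence expected if the two terms appeared independently
    across n_docs documents."""
    expected = occur_i * occur_j / n_docs
    return cooccur / expected if expected else 0.0

# Toy example: two terms appearing in 10 and 20 of 100 ads, together in 4.
coeff = association_coefficient(4, 10, 20)  # 4 / (10 + 20 - 4)
ratio = affinity_index(4, 10, 20, 100)      # 4 / (10 * 20 / 100) = 2.0
```

An affinity ratio of 2.0 means the two terms co-occur twice as often as statistical independence would predict, which matches the quoted interpretation above.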
Results and analysis
As stated above, a total of 3,517 ads were examined in this study, of which 136 were Instagram ads and the rest were posted on Facebook. Figure 1 shows the timeline distribution of these ads, spanning from 9 June 2015 to 2 February 2017. The highest number of paid ads was posted in May 2016, a few months before the 2016 U.S. elections. However, we could not find a clear pattern in the ads’ timeline distribution, as many ads were posted after the end of the U.S. election. Even when qualitatively examining the IRA’s ads purchased after the announcement of Donald Trump as the election winner in early November 2016, there were no obvious patterns: the ads continued targeting the same diverse groups in much the same way as before the election. According to Facebook’s own statistics, 44 percent of total ad impressions (the number of times ads were displayed) occurred before the U.S. election on 8 November 2016, whereas 56 percent occurred after the election (Schrage, 2017).
Figure 1: Timeline of IRA’s ads distribution.
To answer the first research question, we found that the key targets of the IRA’s astroturfing activities on social media can be categorized by age, sex, and location (Table 1). First, we studied the targeted users’ age distribution and found 2,824 ads containing age specifications, constituting 80.2 percent of the total. The majority of ads targeted users aged 18–65+ (83.7 percent), followed by 16–65+ (12.6 percent) and 18–50 (2 percent). As Table 1 shows, the targeted users are mostly adults of legal voting age, which matters to those who intend to influence public opinion and the political process.
Table 1: Strategic microtargeting of IRA’s social media ads.

Category | Targeted groups | Proportion | Strategic significance
Age | 18–65+ | 83.7% | Legal age of voting
Age | 16–65+ | 12.6% |
Age | 18–50 | 2% |
Sex | Male | 73.1% | Ads targeting males were often political and more serious in nature
Sex | Female | 26.8% | Ads targeting females focused on young girls with an interest in music
Location | Baltimore, Maryland; Ferguson and St. Louis, Missouri | 145 ads | Ads targeting Black-majority locations with higher rates of race-related protests focused on police brutality against Black Americans
Location | New York | 134 ads |
Location | Atlanta, Georgia; New Orleans | 105 ads |
Location | Texas | 74 ads | Primarily focused on White-majority users, presenting anti-Muslim and anti-Mexican-immigrant messages
Location | Outside U.S. | 64 ads | Ads targeting European countries mostly focused on immigration, presenting immigrants in a highly negative manner
In terms of sex, only 149 ads contained this microtargeting specification, with males constituting the majority (73.1 percent) versus females (26.8 percent). Ads targeting males were often political and more serious in nature, dealing with issues like gun ownership, veterans, and immigration; this was not the case with ads targeting females, most of which (n=28) focused specifically on young girls (aged 14–17 and 15–25) with an interest in music. For example, such music ads encouraged Facebook users to download FaceMusic, an unknown Chrome extension, stating: “A tiny browser extension will help you to browse, listen and share any kind of music! Free online music library! musicfb.info ... .” The extension was later removed by Google for its suspicious activity, as it started posting messages on Facebook timelines and spamming friends, and it requested permission to “read and change all your data on the websites you visit, display notifications, and modify data you copy and paste” (Lapowsky, 2018). While our primary focus is Facebook ads, this finding inadvertently reveals that the Chrome extension was another weaponized online tool and part of the IRA’s astroturfing activities, whose goal was to stealthily collect more information on users and target them with customized political ads. Although on the surface attracting young girls with music ads might seem apolitical and entertaining, the IRA might have done so to cultivate users as “front groups” while concealing the IRA’s real interests and identities.
The other ads targeting females (aged 14–40) included funny memes and pictures posted on the Memopolis Facebook page (n=8), and a few others posted on the Black4Black Facebook page (n=4) that targeted females aged 18–65+ and promoted support for Black American businesses (Figure 2). As can be seen above, the sex distribution is important because it provides an indication of the way the IRA’s trolls viewed their targets, for there is a clear misogynistic perception at play: males were sent ads dealing with more serious issues, unlike white females, with the exception of a very few ads related to Black American women (Figure 2). Depending on the nature of the Facebook page used, a microtargeting strategy was followed even when the issue of gender was addressed, in order to appeal to different target groups. In brief, ads targeting men were more serious and largely different from those targeting women, yet all young people were encouraged to download a spying tool that monitored their social media activities, which formed an important part of the IRA’s political astroturfing activities.
Figure 2: IRA ads targeting females from two Facebook pages.
In relation to the microtargeted locations, the overwhelming majority of the IRA’s ads (n=3,505) contained specific locations, while 12 had the following designation: “Lookalike (US, 1%) — newtestaudit”. Only one ad contained the designation “Recently In: United States: Palo Alto (+10 mi) California”, targeting newcomers, immigrants, and/or refugees in the U.S. As Table 1 displays, ads mostly targeted users in the United States (n=3,508), with the following locations most frequently cited: Baltimore, Maryland, and Ferguson and St. Louis, Missouri (n=145); New York (n=134); Atlanta, Georgia, and New Orleans (n=105); and Texas (n=74). From the specific U.S. locations, a clear emphasis on targeting Black Americans can be discerned, since the majority of residents of Baltimore, Maryland as well as Ferguson and St. Louis, Missouri are Black (U.S. Census Bureau, 2011). These areas witnessed popular unrest and protests, such as in Ferguson in August 2014 and Baltimore in April 2015. By disguising the true identity of the ads’ sponsors, the IRA’s political astroturfing managed to give the false impression that grassroots organizations were advocating for Black American rights and pride in Black history in the United States. This was done to win more support and followers on social media who could potentially be targeted or mobilized at later stages.
It is important to note that the IRA’s systematic application of ethnically targeted disinformation was not exclusive to U.S. residents. Several studies have identified it as a common geostrategic measure, along with other coordinated efforts by Russian troll factories, hackers, and bots, in other countries such as Ukraine, Lithuania, Latvia, Germany, the Netherlands, the U.K., and Norway (see Bennett and Livingston, 2018; Chesney and Citron, 2019; Mejias and Vokuev, 2017; Oxford Internet Institute, 2017; Rožukalne and Sedlenieks, 2017; Zelenkauskaite and Niezgoda, 2017). Chesney and Citron, for example, report that, similar to the operation executed in the United States in 2016, the Russians mounted a covert-action campaign blending cyber-espionage and information manipulation in an effort to prevent the election of Emmanuel Macron as president of France in 2017. Also, with the rise of the radical-right movement in Sweden, Russian fake news teams offered cash to some Stockholm residents to stage a riot against Muslim immigrants, which was abruptly foiled when a news crew arrived at the scene. Consistent with these studies and reports, our analysis further reveals that 64 IRA ads targeted other countries, led by the U.K. (n=46), followed by Germany (n=12), France (n=12), Canada (n=5), Russia (n=5), the Netherlands (n=4), China (n=3), and Turkmenistan (n=1). All of the ads targeting Western European countries dealt with anti-immigration issues and sentiments, tackling them in a very antagonistic manner (see Figure 3). In contrast, all the ads targeting Canada called for welcoming refugees (Figure 4), aligning the ads with public opinion towards immigration, especially following the death of the Syrian refugee child Alan Kurdi (Tyyskä, et al., 2017).
In brief, the IRA’s ads were designed to address public sentiments in different locations and countries by focusing on a few divisive identity issues and heated topics, like immigration and race, because of their sensitive nature and their potential for stoking anger and a sense of perceived injustice among different publics. Since no disclosure was made, the IRA employed political astroturfing as a weaponized tool to deepen such social division and discord.
Figure 3: A racist IRA ad targeting Facebook users in the U.S. and Netherlands in relation to illegal immigration. The ad was posted in the U.S. and Netherlands on 5 May 2016, targeting people aged between 18–65 years old.
Figure 4: Two IRA ads targeting Facebook users in Germany (top) and Canada (bottom) regarding refugees.
To answer the second research question, we ranked the IRA’s ads by cost, which we consider an important indicator of the importance assigned to certain issues by the Russian trolls. Indeed, this measure provides better insight into the attention given by the ads’ producers themselves, unlike examining the ads that received the most clicks and impressions, both of which are audience-generated metrics. Table 2 provides details on the top 10 most expensive ads, and we can see once more that the first is related to promoting anti-immigration stances, stating “Every man should stand for our borders! Join!” This ad was posted five times, with one main payment followed by two smaller payments, totalling 110,825.67 rubles (about US$1,660). In general, six of the top 10 ads targeted conservative, white, and/or Republican Americans who support Trump and/or the Confederacy and the Second Amendment, while the other four targeted Black Americans or those interested in the social and political lives of Black Americans. In other words, there is a clear emphasis on dividing users based on their ideological beliefs and racial identities.
Table 2: Ranking of IRA ads based on their costs in Russian rubles.
1. Text: “Every man should stand for our borders! Join!” Page: Secured Borders. Targets: Excluded Connections: Exclude people who like Secured Borders. Location: United States. Cost: 92,711.37
2. Text: “Do you support Trump? Wanna #MakeAmericaGreatAgain? Wanna see your kid on our account? Take pictures, make videos, send them to us via DM or tag #KIDS4TRUMP and we’ll make a patriotic team of young Trump supporters here! I’m sure it’ll be great! It would be nice if you tell us why your kid supports Donald Trump and what are his or her thoughts about #Elections2016. Let’s have some fun! Don’t stay aside!” Page: Tea Party News (Teapartypat; Instagram: Tea_party_news). Targets: The Tea Party, Donald Trump, donald j trump, Donald Trump for President, Ivanka Trump Fine Jewelry or Donald Trump Jr. Location: United States. Cost: 9,970.58
3. Text: “Join us, if you proud to be who you are and believe that your roots give you the power!” Page: Black Excellence (iloveblackexcellence). Targets: My Black is Beautiful or Black is beautiful; And Must Also Match: Interests: Black Girls Rock! Location: United States. Cost: 9,926.34
4. Text: “When you live in Texas you know that you are the chosen one!” Page: Time to Secede. Targets: Independence or Patriotism; Excluded Connections: Exclude people who like Heart of Texas. Location: United States: Texas. Cost: 9,512.72
5. Text: “Heritage, not hate. The South will rise again!” Page: South United (South United). Targets: Confederate Flag, Confederate States Army or Southern Pride; Excluded Connections: Exclude people who like South United. Location: United States. Cost: 9,370.63
6. Text: “Protect the 2nd. Without it, you won't have any others! Join!” Page: Defend The 2nd (Savethe2a [Amendment]). Targets: The Second Amendment, Anything About Guns, Guns & Patriots, Second Amendment Sisters, Gun Rights, American Guns, Guns & Ammo, Second Amendment Supporters or Guns.com; And Must Also Match: Interests: Right to keep and bear arms; Excluded Connections: Exclude people who like Defend the 2nd Amendment. Location: United States. Cost: 9,351.38
7. Text: “Don’t shoot we want to grow up!” Page: Don’t Shoot. Targets: Cop Block; Excluded Connections: Exclude people who like Don’t Shoot. Location: United States: Atlanta (+25 mi) Georgia; New Orleans (+25 mi). Cost: 9,259.36
8. Text: “A rally in loving memory of Aiyana Jones May 16 will mark the sixth anniversary of the br” [truncated in dataset]. Page: Black Matters (event: The 6th Anniversary Of The Aiyana Jones’ Death). Targets: Behaviors: African American (US). Location: United States: Detroit (+70 km) Michigan. Cost: 9,192.47
9. Text: “Join us to study your real history and get the power from your roots. Stay woke and natural!” Page: Nefertiti’s Community (Nefertitis Community). Targets: Black history or Black (Color); And Must Also Match: Interests: Pan-Africanism; Excluded Connections: Exclude people who like Nefertiti’s Community. Location: United States. Cost: 9,162.75
10. Text: “The community of 2nd Amendment supporters, guns lovers & patriots” Page: Defend The 2nd (Savethe2a [Amendment]). Targets: Right to keep and bear arms, 2nd Amendment, National Rifle Association, Gun Owners of America, National Association for Gun Rights or From my cold, dead hands; And Must Also Match: Interests: Anything About Guns, Open carry in the United States, Guns & Patriots, Gun Rights or Guns.com; Excluded Connections: Exclude people who like Defend the 2nd Amendment. Location: United States. Cost: 9,150.72
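Computationally, the ranking in Table 2 is a simple sort over the ads’ cost metadata. The sketch below is purely illustrative: the field names (`text`, `cost_rub`) and the ruble-to-dollar rate are our assumptions, not the released dataset’s schema.

```python
# Illustrative sketch: ranking IRA ads by ruble spend, as in Table 2.
# Field names and the exchange rate are assumptions for this example.

RUB_PER_USD = 66.8  # rough 2016-2018 average; an assumption, not from the dataset

ads = [
    {"text": "Every man should stand for our borders! Join!", "cost_rub": 92711.37},
    {"text": "Do you support Trump? ...", "cost_rub": 9970.58},
    {"text": "Join us, if you proud to be who you are ...", "cost_rub": 9926.34},
]

def rank_by_cost(ads):
    """Return ads sorted by spend (highest first) with an approximate USD figure."""
    ranked = sorted(ads, key=lambda ad: ad["cost_rub"], reverse=True)
    return [dict(ad, cost_usd=round(ad["cost_rub"] / RUB_PER_USD, 2)) for ad in ranked]

top_ads = rank_by_cost(ads)
```

Producer-side spend, unlike clicks or impressions, is set entirely by the advertiser, which is why the paper treats it as a proxy for the importance the IRA assigned to an issue.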
For a deeper examination of the major issues tackled in the IRA’s ads, we used QDA Miner software, which generates the main topics found in a text corpus. We found this method efficient, especially when combining its results with the most recurrent words in the corpus extracted with NVivo. Figure 5 displays the top five topics of the IRA’s social media ads based on their eigenvalues. The top topic, “Follow on Twitter” (eigenvalue = 7.12), clearly reveals the cross-platform nature of the IRA’s astroturfing strategy, since users are encouraged to connect to other venues like Instagram and Twitter to increase audience outreach. The second most prominent topic is “FaceMusic” (eigenvalue = 5.77), which refers to the IRA spying tool mentioned above, while “Police Brutality” (eigenvalue = 4.40) ranks third, referencing incidents, videos, and stories related to the killing of Black people by police forces in the U.S. Words associated with this topic include: “death”, “kill”, “shot”, “shooting”, and “BM” [Black Lives Matter]. The proximity plot of the topic “Police Brutality” shows that it is most strongly associated with the term “Staten Island” (coefficient = 0.103), in reference to the death of Eric Garner, who was killed by the police there. The second associated term is “National Day of Protest” (coefficient = 0.093), which recurs due to the IRA’s interest in mobilizing the African American community with appealing messages (Figure 6).
Figure 5: Top five topics of IRA’s social media ads based on their eigenvalues.
Figure 6: Proximity plot of “police brutality”.
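The proximity coefficients reported for Figure 6 can be approximated by a simple co-occurrence measure: the share of topic-bearing ads in which a candidate term also appears. The toy corpus and the coefficient below are our assumptions for illustration only; WordStat’s actual proximity metric may be computed differently.

```python
# Illustrative co-occurrence proximity, loosely mimicking the proximity plot
# in Figure 6. The four toy "ads" and the metric itself are assumptions.

topic_keywords = ("death", "kill", "shot", "shooting")

corpus = [
    "police shot and killed a man national day of protest",
    "video shows a police shooting in staten island",
    "join our community free self defense classes",
    "death of eric garner in staten island",
]

def proximity(term, corpus, keywords):
    """Share of topic-bearing documents that also contain `term`."""
    topical = [doc for doc in corpus if any(k in doc for k in keywords)]
    if not topical:
        return 0.0
    return sum(term in doc for doc in topical) / len(topical)

staten = proximity("staten island", corpus, topic_keywords)            # 2 of 3 topical ads
protest = proximity("national day of protest", corpus, topic_keywords)  # 1 of 3 topical ads
```

The design choice mirrors the intuition of a proximity plot: terms that consistently co-occur with a topic’s keywords score higher, regardless of their overall corpus frequency.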
The fourth topic, “Feel Safe” (eigenvalue = 4.05), is used in association with a Facebook ad on the “Fit Black” page that promotes free self-defense and martial arts classes for Black American women, stating: “Join the event, bring your friends, feel safe with us! It’s free!” The fifth topic, “Black People” (eigenvalue = 2.91), is mostly used to defend the causes of Black Americans, which is evident in the other words associated with this topic, such as “rights”, “racism”, “hate”, and “justice” (Figure 5). In this case, the proximity plot shows that this topic is most strongly associated with the jointly used combination of hashtags “#Africanandproud #Blackandproud #Blackpride #Blackpower #Blacklivesmatter”, followed by another series of hashtags, “#Africanunity #Blacklove #Justiceorelse #Problack #Regrann”. In brief, there is a clear emphasis on Black American issues, as the IRA’s political astroturfing activities stressed the social and political grievances of this demographic. Consequently, the negative issues discussed often exaggerate the impact of certain perceived threats, dangers, and risks; the main reason for highlighting this negativity is to agitate different parties and make audiences feel upset, angry, and/or unfairly treated by other social groups or communities.
Finally, ranking the top 100 most frequent words used in these ads’ texts with NVivo offers more insight into the IRA’s intended messages. The larger the circle, the more frequent the word is in the dataset. The largest circle in Figure 7 shows that the word “Black” is the most frequent (n=1,711), followed by “Police” (n=882), in reference to cases of police brutality against Black Americans. Other frequent words include “Stop” (n=413), regarding the refugee influx in Europe and the United States; “Matters” (n=324), mostly from “Black Matters US” and “Black Lives Matter”; and “White” (n=272), in relation to White Americans. Again, the issues are very divisive, but the most recurrent topic is Black Americans and their concerns regarding police brutality, mistreatment, and human rights. This is followed by the topic of White Americans, who are associated with gun ownership, anti-immigration, the Second Amendment, and support for Donald Trump. These results are corroborated by an examination of the most recurrent words in the users’ “Interests” category, where the emphasis is, again, most prominently on African [Americans] (n=2,784), Black [people] (n=2,243), and the [Black Lives Matter] movement (n=1,407).
Figure 7: The top 100 words in the text of Facebook ads.
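The frequency counts behind Figure 7 amount to tokenizing the ad texts and ranking word counts. A minimal pure-Python sketch follows; the tokenizer and the tiny stop-word list are assumptions, and NVivo’s preprocessing differs in its details.

```python
# Minimal word-frequency ranking of ad texts, approximating the kind of
# count behind Figure 7. Tokenization and stop words are assumptions.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "we", "our", "is"}

def top_words(texts, n=100):
    """Rank non-stop-word tokens across all texts, most frequent first."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token not in STOP_WORDS:
                counts[token] += 1
    return counts.most_common(n)

sample_ads = [
    "Black lives matter stop police brutality",
    "Police shot another black man",
]
ranking = top_words(sample_ads, 5)
```

On the full corpus, this kind of count is what yields “Black” (n=1,711) and “Police” (n=882) as the dominant terms reported above.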
Examining the top posts that received the most clicks and impressions corroborates the above claim, because the majority of these top posts, which were well received by their audiences, are related to joining a group, calling for action, and giving assurances on the virtues of being part of minority groups and communities. For example, seven of the 10 social media posts that received the highest number of clicks (total n=6,912) and five of the 10 that received the highest number of impressions (total n=4,900) were related to pro-Black American messages targeting people who believe in “Black Power, Black Panther Party or Black panther”. Though not as prominently as in the case of Black Americans and White conservatives, the IRA also targeted Muslim Americans, Latinos (the “Brown Power” page), Native Americans, and the LGBTQ community with pro-messages aligning with their ways of thinking and backgrounds.
These findings profoundly resonate with our earlier discussion of political astroturfing. They show that the IRA deliberately attempted to exploit divisive issues to cultivate followers for its pages. However, this was unlike traditional political astroturfing, which usually aims at manufacturing a homogeneous support base. Instead, the IRA trolls divided the task across multi-ethnic segments to further isolate different groups in their filter bubbles and attract them with favorable messages, using the old strategy of divide and conquer. The entire process, had the IRA’s identity not been revealed by the FBI probe, could have been further replicated and recycled on future occasions, such as the 2018 midterm elections. This partly explains why the IRA kept targeting messages even after the 2016 election was over, possibly to make the source pages (such as Blacktivists, Heart of Texas, United Muslims of America, Being Patriotic, Secured Borders, and LGBT United) seem legitimate and valid beyond one election cycle, acting like genuine grassroots movements that are long-term and persistent. By tapping into a mix-and-match method of political astroturfing, the IRA attempted to maximize interactions within different social media audiences by exploiting emotional issues dear to them.
To sum up, the theoretical discussion and analytical findings make clear that astroturfing as a political communication operation requires planning, preparation, and, most importantly, a platform on which to execute and coordinate the entire operation, including its online strategies. Facebook as a social media platform served as a mechanism to conceal the perpetrators’ true identity (the IRA), ensconce the evidence of wrongdoing in plain sight (i.e., paying for ads with stolen identities), and, most crucially, promote the message (the political ads) with surgical precision (microtargeting with specific identitarian emotional appeals), intentionally provoking voters with identity and racial issues. From the findings of our analysis, we discern that the broader goals of the Russian troll factory were to further sow social division by agitating social media users and isolating them through the exploitation of sensitive and controversial topics with political or religious implications. Though the majority of people targeted were based in the United States, a few ads containing pro- and anti-immigration sentiments also targeted people living in Canada, the Netherlands, the U.K., Germany, and France.
Discussion and conclusion
Drawing on an empirical analysis of Facebook and Instagram ads, this study shows how the Russian Internet Research Agency (IRA) managed not only to use the most popular social media in the U.S. to intensify existing racial and political tensions from within and beyond, but also to employ a highly deceptive political advertising technique that we highlight as political astroturfing. The IRA’s activities were not a mere collection of political astroturfing practices but a set of online events planned and executed carefully, with high precision in microtargeting specific segments of the population by race, sex, religion, location, and political orientation. The consequences of the IRA’s microtargeting go beyond influencing election decisions or enhancing racial divisions. Many localized, open, and secretive political campaigns, such as white supremacist and alt-right groups, also contribute to racial division and rage using online platforms (Marwick and Lewis, 2017). But the IRA’s significance is more germane geopolitically, as it contributes to Russia’s foreign political and diplomatic power and sets a precedent for future events. Another consequence is that paid trolling creates a problem for genuine protests and democratic practices, while the social media companies continue profiting from user engagement, interactions, and sponsored content. The campaigns by the IRA (as well as by companies like Cambridge Analytica) show that Facebook’s core business model (advertising) remains vulnerable to political manipulation by domestic and foreign actors who can afford it. In this sense, the social media giant Facebook must be held accountable for its failure to safeguard its users from content promoted by inauthentic actors.
The study’s findings thus also contribute to the ongoing research on political communication and disinformation through the lens of political astroturfing. The paper analyzes the production and distribution aspects of the messages; however, it does not present empirical evidence on the off-line impact of these advertised messages. This remains a gap in the study of the effectiveness of political astroturfing as a political communication strategy. Indeed, the IRA turned the social media platforms against their owners, contributing to a wider cyber anxiety. The study further reveals the geopolitical dangers of political microtargeting, data-driven electoral campaigns, and voter surveillance (Bennett and Lyon, 2019; Borgesius, et al., 2018). From a cross-national point of view, the IRA’s astroturfing activities, as evident in our analysis, can also be seen as an auxiliary cyberwar threat to liberal democracies. Hence, understanding how social media platforms enable foreign and state-sponsored political astroturfing can help global and national regulatory bodies enact the legal and policy measures necessary to prevent it. Exposing and preventing political astroturfing is also important so that legitimate social and political movements can survive and remain trustworthy to the general public.
We hope our mixed method, theoretical discussions, and arguments contribute to the ongoing research in the field of disinformation, since political astroturfing remains a significant issue with real and direct implications for democracy, not only in the U.S. but in many other countries. Further study in this vein would go beyond the Russian case and explore how non-state actors may also use social media to attack state actors; for instance, a growing number of anti-Chinese-state voices seek to use bots against the Chinese government (Bolsover and Howard, 2019). This study is limited in other ways: it uses a publicly available dataset offered by Facebook, and it is not clear whether the dataset examined here includes all of the IRA’s paid ads. Moreover, other social media platforms were not examined, such as Twitter, which released over nine million tweets sent by Russian trolls. Hence, it is important to conduct further studies that can address the gaps highlighted earlier.
About the authors
Ahmed Al-Rawi is Assistant Professor of News, Social Media, and Public Communication at the School of Communication at Simon Fraser University.
E-mail: aalrawi [at] sfu [dot] ca
Anis Rahman is Lecturer at Simon Fraser University’s School of Communication.
E-mail: abur [at] sfu [dot] ca
We would like to thank Mr. Michael Joyce, a Web and Data Services Developer at Simon Fraser University Library, for his kind assistance in converting the PDF files into an Excel sheet and organizing the metadata.
1. Mejias and Vokuev (2017, p. 1,034) define an Internet troll as a person who posts incendiary comments and expresses disagreement through insults. Troll culture, however, is not exclusive to Russia; it is popular in many other places in North America and elsewhere (Condis, 2018).
2. For a detailed discussion in this line of argument, see Jamieson (2018).
3. Bennett and Livingston, 2018, p. 132.
4. Harvey, 2014a, p. 87.
5. Lock, et al., 2016, p. 88.
6. Newsome, 2014; Harvey, 2014b, pp. 359–360. The term “political astroturf” was specifically used by researchers at the Indiana University School of Informatics and Computing to identify the diffusion of political spam. In political astroturfing, notes Kerric Harvey, “Under the guise of grassroots organizing online, a single entity generates a campaign via Twitter and other social media that can be used to spread disinformation to undermine competitors’ campaigns. Such information flows can go well beyond the original online venues” (Harvey, 2014b, pp. 359–360).
7. Harvey, 2014a, p. 88.
8. Klotz, 2007, p. 5.
9. Sockpuppets are dummy or fake accounts, and the practice of using misleading online identities is known as sockpuppeting (see Harvey, 2014a, pp. 87–88).
10. The concept “deep-fake” refers to highly manipulated audio and video of real people saying and doing things they never said or did (see Chesney and Citron, 2019, p. 1,773).
11. In the context of political astroturfing we define disinformation as the intentional dissemination of false information to serve a certain agenda or achieve political goals (Bennett and Livingston, 2018, p. 124).
12. Blakeman, 2014, p. 40.
13. Lock, et al., 2016, p. 97.
14. Mejias and Vokuev, 2017, p. 1,034.
15. Leiser, 2016, p. 3.
16. Social media analysts, notably, Bucher and Helmond (2018) offer a helpful discussion on the enabling features of social media platforms as ‘affordances’, in which the clicking-buttons allow users to express and interact with a diverse range of “meanings, feelings, imaginings, and expectations” (p. 2) revealing an extensive realm of “understanding and analysing social media interfaces and the relations between technology and its users” (p. 3). Platform affordance can be social, technological, perceived, or even imagined. A separate study can be undertaken analysing the affordance aspect of Facebook vis-à-vis the IRA’s social media messages in general, as we limit our study to advertised messages only.
17. Brown, et al., 2016, p. 136.
18. Provalis Research, 2015, p. 65.
19. van Eck and Waltman, 2007, p. 630.
20. Chesney and Citron, 2019, p. 1,774.
21. Bennett and Livingston, 2018, p. 123.
22. As for the remaining targeted states, there does not seem to be a clear pattern, except that the ads are all written in English, which suggests that the target groups are expatriates living in these countries. Subsequently, the three ads targeting China were related to pro-issues on Native Americans, ex-prison inmates, and Black Americans, while the four ads targeting Russia contained three pro-Black American messages and one pro-White conservative issue. Finally, the single ad targeting Turkmenistan contained an anti-Hillary Clinton message. In other words, there is no clear strategy or emphasis on a unified group here, except for attracting diverse audiences by offering them messages that correspond with their own racial, political, and/or social backgrounds.
23. It is imperative to note that when it comes to issues related to immigration, the alt-right networks in the U.S., most notably the Breitbart and its allied conservative media ecosystem, appear to be far more influential in inflaming rage and anger against immigrants, refugees, and asylum seekers (see for example Faris, et al., 2017). A distinguishing factor is that, discreetly, the IRA trolls played on both sides of the anti- and pro- immigration camps (see Figure 4). This is very unlike Breitbart or any other known media outlets.
24. The impressions measure calculates how often ads were on screen for a target audience.
25. When it is related to Black Americans, the most recurrent topics discussed are related to the alleged oppressive power of some White Americans, such as police brutality, killing of innocent Black civilians, arrests, and other human rights violations.
A. Badawy, A. Addawood, K. Lerman, and E. Ferrara, 2019. “Characterizing the 2016 Russian IRA influence campaign,” Social Network Analysis and Mining, volume 9, article number 31.
doi: https://doi.org/10.1007/s13278-019-0578-6, accessed 15 August 2020.
C.J. Bennett and D. Lyon, 2019. “Data-driven elections,” Internet Policy Review, volume 8, number 4.
doi: http://dx.doi.org/10.14763/2019.4.1433, accessed 15 August 2020.
W. Bennett and S. Livingston, 2018. “The disinformation order: Disruptive communication and the decline of democratic institutions,” European Journal of Communication, volume 33, number 2, pp. 122–139.
doi: https://doi.org/10.1177/0267323118760317, accessed 15 August 2020.
R. Blakeman, 2014. “Guerrilla marketing,” In: R. Blakeman. Nontraditional media in marketing and advertising. Thousand Oaks, Calif.: Sage, pp. 37–56.
doi: https://doi.org/10.4135/9781506335261.n3, accessed 15 August 2020.
M. Boler and E. Davis. 2018. “The affective politics of the ‘post-truth’ era: Feeling rules and networked subjectivity,” Emotion, Space and Society, volume 27, pp. 75–85.
doi: https://doi.org/10.1016/j.emospa.2018.03.002, accessed 15 August 2020.
G. Bolsover and P. Howard, 2019. “Chinese computational propaganda: Automation, algorithms and the manipulation of information about Chinese politics on Twitter and Weibo,” Information, Communication & Society, volume 22, number 14, pp. 2,063–2,080.
doi: https://doi.org/10.1080/1369118X.2018.1476576, accessed 15 August 2020.
F.Z. Borgesius, J. Moeller, S. Kruikemeier, R. Ó Fathaigh, K. Irion, T. Dobber, B. Bodó, and C.H. de Vreese, 2018. “Online political microtargeting: Promises and threats for democracy,” Utrecht Law Review, volume 14, number 1, pp. 82–96.
doi: http://doi.org/10.18352/ulr.420, accessed 15 August 2020.
H. Borko and M. Bernick, 1963. “Automatic document classification,” Journal of the ACM, volume 10, number 2, pp. 151–162.
doi: https://doi.org/10.1145/321160.321165, accessed 15 August 2020.
D.M. Brown, A. Soto-Corominas, J.L. Surez, and J. de la Rosa, 2016. “Overview: The social media data processing pipeline,” In: L. Sloan and A. Quan-Haase (editors). Sage handbook of social media research methods. Thousand Oaks, Calif.: Sage, pp. 125–145.
doi: https://dx.doi.org/10.4135/9781473983847.n9, accessed 15 August 2020.
T. Bucher and A. Helmond, 2018. “The affordances of social media platforms,” In: J. Burgess, A. Marwick, and T. Poell (editors). Sage handbook of social media. Thousand Oaks, Calif.: Sage, pp. 233–253.
doi: http://dx.doi.org/10.4135/9781473984066.n14, accessed 15 August 2020.
R. Chesney and D.K. Citron, 2019. “Deep fakes: A looming challenge for privacy, democracy, and national security,” California Law Review, volume 107, pp. 1,753–1,819, and at https://scholarship.law.bu.edu/faculty_scholarship/640, accessed 15 August 2020.
M. Condis, 2018. Gaming masculinity: Trolls, fake geeks, and the gendered battle for online culture. Iowa City: University of Iowa Press.
S.M. Driedger and T. Garvin, 2016. “Media and framing: Processes and challenges,” In: N.E. Fenton and J. Baxter (editors). Practicing qualitative methods in health geographies. London: Routledge, pp. 190–199.
doi: https://doi.org/10.4324/9781315601946, accessed 15 August 2020.
U. Etudo, V.Y. Yoon, and N. Yaraghi, 2019. “From Facebook to the streets: Russian troll ads and Black Lives Matter protests,” Proceedings of the 52nd Hawaii International Conference on System Sciences, and at http://hdl.handle.net/10125/59529, accessed 15 August 2020.
R. Faris, H. Roberts, B. Etling, N. Bourassa, E. Zuckerman, and Y. Benkler, 2017. “Partisanship, propaganda, and disinformation: Online media and the 2016 US presidential election,” Berkman Klein Center for Internet & Society, and at https://cyber.harvard.edu/publications/2017/08/mediacloud, accessed 15 August 2020.
R.E. Guadagno and K. Guttieri, 2019. “Fake news and information warfare: An examination of the political and psychological processes from the digital sphere to the real world,” In: I.E. Chiluwa (editor). Handbook of research on deception, fake news, and misinformation online. Hershey, Pa.: IGI Global, pp. 167–191.
doi: https://doi.org/10.4018/978-1-5225-8535-0.ch01, accessed 15 August 2020.
K. Harvey, 2014a. “Astroturfing,” In: K. Harvey. Encyclopedia of social media and politics. Thousand Oaks, Calif.: Sage, pp. 87–88.
doi: https://doi.org/10.4135/9781452244723.n30, accessed 15 August 2020.
K. Harvey, 2014b. “Decoy campaign Web sites,” In: K. Harvey. Encyclopedia of social media and politics. Thousand Oaks, Calif.: Sage, pp. 359–360.
doi: https://doi.org/10.4135/9781452244723.n140, accessed 15 August 2020.
N.I. Harway and H.P. Iker, 1964. “Computer analysis of content in psychotherapy,” Psychological Reports, volume 14, number 3, 720–722.
doi: https://doi.org/10.2466/pr0.1922.214.171.1240, accessed 15 August 2020.
P.N. Howard, B. Ganesh, D. Liotsiou, J. Kelly, and C. François, 2018. “The IRA, social media and political polarization in the United States, 2012–2018,” Oxford Project on Computational Propaganda, Working Paper, 2018.2, at https://comprop.oii.ox.ac.uk/research/ira-political-polarization/, accessed 15 August 2020.
C. Jacobi, W. van Atteveldt, and K. Welbers, 2016. “Quantitative analysis of large amounts of journalistic texts using topic modelling,” Digital Journalism, volume 4, number 1, pp. 89–106.
doi: https://doi.org/10.1080/21670811.2015.1093271, accessed 15 August 2020.
K.H. Jamieson, 2018. Cyberwar: How Russian hackers and trolls helped elect a president — What we don’t, can’t and do know. Oxford: Oxford University Press.
S. Kim, 2013. “Astroturfing,” In: R.L. Heath (editor). Encyclopedia of public relations. Thousand Oaks, Calif.: Sage, pp. 44–45.
doi: https://doi.org/10.4135/9781452276236.n26, accessed 15 August 2020.
R.J. Klotz, 2007. “Internet campaigning for grassroots and astroturf support,” Social Science Computer Review,volume 25, number 1, pp. 3–12.
doi: https://doi.org/10.1177/0894439306289105, accessed 15 August 2020.
I. Lapowsky, 2018. “Russia-linked Facebook ads targeted a sketchy Chrome extension at teen girls,” Wired (12 May), at https://www.wired.com/story/russia-facebook-ads-sketchy-chrome-extension/, accessed 15 August 2020.
I. Lock, P. Seele, and R.L. Heath, 2016. “Where grass has no roots: The concept of ‘shared strategic communication’ as an answer to unethical astroturf lobbying,” International Journal of Strategic Communication, volume 10, number 2, pp. 87–100.
doi: https://doi.org/10.1080/1553118X.2015.1116002, accessed 15 August 2020.
A. Marwick and R. Lewis, 2017. “Media manipulation and disinformation online,” Data & Society, at https://datasociety.net/library/media-manipulation-and-disinfo-online/, accessed 15 August 2020.
U.A. Mejias and N.E. Vokuev, 2017. “Disinformation and the media: The case of Russia and Ukraine,” Media, Culture & Society, volume 39, number 7, pp. 1,027–1,042.
doi: https://doi.org/10.1177/0163443716686672, accessed 15 August 2020.
B. Newsome, 2014. “Information, communications, and cyber security,” In: B. Newsome. A practical introduction to security and risk management. London: Sage, pp. 259–307.
doi: https://doi.org/10.4135/9781506374437.n14, accessed 15 August 2020.
New Knowledge, 2018. “The tactics & tropes of the Internet Research Agency,” at https://disinformationreport.blob.core.windows.net/disinformation-report/NewKnowledge-Disinformation-Report-Whitepaper.pdf, accessed 15 August 2020.
N. Péladeau and E. Davoodi, 2018. “Comparison of latent Dirichlet modeling and factor analysis for topic extraction: A lesson of history,” Proceedings of the 51st Hawaii International Conference on System Sciences, at http://hdl.handle.net/10125/49965, accessed 15 August 2020.
Provalis Research, 2015. “WordStat 7: User’s guide,” at https://provalisresearch.com/Documents/WordStat7.pdf, accessed 15 August 2020.
F.N. Ribeiro, K. Saha, M. Babaei, L. Henrique, J. Messias, F. Benevenuto, O. Goga, K.P. Gummadi, and E.M. Redmiles, 2019. “On microtargeting socially divisive ads: A case study of Russia-linked ad campaigns on Facebook,” FAT* ’19: Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 140–149.
doi: https://doi.org/10.1145/3287560.3287580, accessed 15 August 2020.
A. Rožukalne and K. Sedlenieks, 2017. “The elusive cyber beasts: How to identify the communication of pro-Russian hybrid trolls in Latvia’s Internet news sites?” Central European Journal of Communication, volume 10, number 1, pp. 79–97.
doi: https://doi.org/10.19195/1899-5101.10.1(18).6, accessed 15 August 2020.
M. Thelwall, 2017. “Sentiment analysis for small and big data,” In: N.G. Fielding, R.M. Lee, and G. Blank (editors). Sage handbook of online research methods. California: Sage, pp. 344–355.
doi: https://dx.doi.org/10.4135/9781473957992.n20, accessed 15 August 2020.
V. Tyyskä, J. Blower, S. DeBoer, S. Kawai, and A. Walcott, 2017. “Syrian refugee crisis in Canadian media,” Ryerson Centre for Immigration and Settlement, RCIS Working Paper, number 2017/3, at https://www.ryerson.ca/content/dam/rcis/documents/RCIS%20Working%20Paper%202017_3%20Tyyska%20et%20al.%20final.pdf, accessed 15 August 2020.
U.S. Department of the Treasury, 2018. “Treasury sanctions Russian cyber actors for interference with the 2016 US elections and malicious cyber attacks” (15 March), at https://home.treasury.gov/news/press-releases/sm0312, accessed 15 August 2020.
U.S. Census Bureau, 2011. “2010 Census shows Black population has highest concentration in the south” (29 September), at https://www.census.gov/newsroom/releases/archives/2010_census/cb11-cn185.html, accessed 15 August 2020.
U.S. House of Representatives, Permanent Select Committee on Intelligence, 2018a. “HPSCI minority open hearing exhibits,” at https://democrats-intelligence.house.gov/hpsci-11-1/hpsci-minority-open-hearing-exhibits.htm, accessed 15 August 2020.
U.S. House of Representatives, Permanent Select Committee on Intelligence, 2018b. “Social media advertisements,” at https://democrats-intelligence.house.gov/social-media-content/social-media-advertisements.htm, accessed 15 August 2020.
N.J. van Eck and L. Waltman, 2007. “Bibliometric mapping of the computational intelligence field,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, volume 15, number 5, pp. 625–645.
doi: https://doi.org/10.1142/S0218488507004911, accessed 15 August 2020.
N.J. van Eck, L. Waltman, R. Dekker, and J. van den Berg, 2010. “A comparison of two techniques for bibliometric mapping: Multidimensional scaling and VOS,” Journal of the American Society for Information Science and Technology, volume 61, number 12, pp. 2,405–2,416.
doi: https://doi.org/10.1002/asi.21421, accessed 15 August 2020.
G. Wiedemann, 2016. “Introduction: Qualitative data analysis in a digital world,” In: G. Wiedemann. Text mining for qualitative data analysis in the social sciences: A study on democratic discourse in Germany. Wiesbaden: Springer VS, pp. 1–16.
doi: https://doi.org/10.1007/978-3-658-15309-0_1, accessed 15 August 2020.
S. Woolley and D. Guilbeault, 2017. “Computational propaganda in the United States of America: Manufacturing consensus online,” Oxford Project on Computational Propaganda, Working Paper, 2017.5, at https://comprop.oii.ox.ac.uk/research/working-papers/computational-propaganda-in-the-united-states-of-america-manufacturing-consensus-online/, accessed 15 August 2020.
S. Zannettou, T. Caulfield, E. De Cristofaro, M. Sirivianos, G. Stringhini, and J. Blackburn, 2019. “Disinformation warfare: Understanding state-sponsored trolls on Twitter and their influence on the Web,” arXiv:1801.09288 (4 March), at https://arxiv.org/abs/1801.09288, accessed 3 November 2018.
A. Zelenkauskaite and B. Niezgoda, 2017. “‘Stop Kremlin trolls’: Ideological trolling as calling out, rebuttal, and reactions on online news portal commenting,” First Monday, volume 22, number 5, at https://firstmonday.org/article/view/7795/6225, accessed 15 August 2020.
doi: https://doi.org/10.5210/fm.v22i5.7795, accessed 15 August 2020.
Received 29 May 2020; revised 11 June 2020; accepted 14 August 2020.
Copyright © 2020, Ahmed Al-Rawi and Anis Rahman. All Rights Reserved.
Manufacturing rage: The Russian Internet Research Agency’s political astroturfing on social media
by Ahmed Al-Rawi and Anis Rahman.
First Monday, Volume 25, Number 9 - 7 September 2020