First Monday

Internet Research Agency Twitter activity predicted 2016 U.S. election polls

by Damian J. Ruck, Natalie Manaeva Rice, Joshua Borycz, and R. Alexander Bentley



Abstract
In 2016, the Internet Research Agency (IRA) deployed thousands of Twitter bots that released hundreds of thousands of English-language tweets. It has been hypothesized that this activity affected public opinion during the 2016 U.S. presidential election. Here we test that hypothesis using vector autoregression (VAR), comparing time series of election opinion polling during 2016 with the numbers of re-tweets or ‘likes’ of IRA tweets. We find that changes in opinion poll numbers for one of the candidates were consistently preceded by corresponding changes in IRA re-tweet volume, with an optimum lag of one week. In contrast, the opinion poll numbers did not correlate with future re-tweets or ‘likes’ of the IRA tweets. We find that the release of these tweets paralleled significant political events of 2016 and that approximately every 25,000 additional IRA re-tweets predicted a one percent increase in election opinion polls for one candidate. As these tweets were part of a larger, multimedia campaign, it is plausible that the IRA was successful in influencing U.S. public opinion in 2016.

Contents

Introduction
Results
Discussion and conclusions

 


 

Introduction

While social media originally allowed a decentralized sharing of information by individuals (Tufekci and Wilson, 2012; Tufekci, 2017), it has more recently provided state actors with new tools for propaganda (Gunitsky, 2015; Rød and Weidmann, 2015; Spaiser, et al., 2017; Sanovich, 2017). The extent to which bad information propagates on social media (Vosoughi, et al., 2018) has led to speculation that disinformation on social media can affect political opinion and even election outcomes (Benkler, et al., 2018; Grinberg, et al., 2019; Allcott and Gentzkow, 2017).

The Russian Internet Research Agency (IRA) has sought to influence public opinion in many countries (Spaiser, et al., 2017; Grigas, 2016; Narayanan, et al., 2017; Neudert, et al., 2017; Bodine-Baron, et al., 2018) using state-directed disinformation campaigns (Lysenko and Brooks, 2018). In fact, the phrase “Kremlin troll” has become a term of abuse in Lithuanian comment threads (Zelenkauskaite and Niezgoda, 2017). As stated on page four of the Mueller Report (Mueller, 2019), released in April 2019, the IRA carried out “a social media campaign designed to provoke and amplify political and social discord in the United States.” There is debate, however, as to how substantially IRA disinformation affected public opinion leading up to the 2016 U.S. presidential election (Jamieson, 2018; Badawy, et al., 2018; Howard, et al., 2018; Guess, et al., 2019; Allcott, et al., 2019; Garrett, 2019). Here we test one condition of this hypothesis: whether IRA Twitter activity predicted future changes in 2016 election opinion polling, where ‘predicted’ means that one time series contains information about the future values of the other. Causation is not proven by this analysis, but certain directions of causality can be ruled out when one time series does not predict the other.

We take the view that IRA Twitter activity was representative of a larger, multimedia disinformation campaign (Paul and Matthews, 2016; Schoen and Lamb, 2012; Benkler, et al., 2018). Here we apply vector autoregression (VAR) to compare weekly time series of re-tweet and ‘like’ activity of IRA tweets versus weekly data from U.S. election opinion polls in 2016.

In October 2018, Twitter released the content of “Twitter accounts potentially connected to a propaganda effort by a Russian government-linked organization known as the Internet Research Agency” to both the United States Congress and the public (Twitter, 2017).

The Twitter data contain over nine million tweets representing the activity of 3,613 IRA-linked accounts (Twitter, 2017). Of these, 770,005 English-language tweets were posted during the 2016 presidential campaign. The number of tweets per week increased during the campaign (see Figure 2A). To correct for this, we used ‘number of retweets per tweet’ as our first measure of success. We confirm our findings using a second measure: ‘number of likes per tweet’. We also see that 91 percent of first retweeters of IRA tweets were non-IRA accounts, which suggests that the propaganda spread into networks of real U.S. citizens.
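As a minimal sketch of how such a weekly ‘retweets per tweet’ measure can be built from tweet-level data, the following uses Python and pandas; the file name and column names (‘tweet_time’, ‘retweet_count’) are illustrative placeholders rather than the field names in Twitter’s release.

    import pandas as pd

    # Illustrative file of English-language IRA tweets from the 2016 campaign;
    # the file name and column names are placeholders, not Twitter's schema.
    tweets = pd.read_csv("ira_english_tweets_2016.csv", parse_dates=["tweet_time"])

    # Weekly totals of tweets posted and re-tweets received, then the
    # 'retweets per tweet' success measure.
    weekly_ira = (tweets.set_index("tweet_time")
                        .resample("W")["retweet_count"]
                        .agg(["sum", "count"])
                        .rename(columns={"sum": "total_retweets", "count": "n_tweets"}))
    weekly_ira["retweets_per_tweet"] = weekly_ira["total_retweets"] / weekly_ira["n_tweets"]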

 

Figure 1: Number of weekly tweets by the 10 most prominent IRA Twitter accounts.

 

The opinion polling data come from FiveThirtyEight.com (FiveThirtyEight, 2017). FiveThirtyEight compiled a database of 3,315 national polls from 54 pollsters asking whether the participant intended to vote for Donald Trump or Hillary Clinton; many also included Gary Johnson as a third option. The FiveThirtyEight data exist in two forms: raw weekly time series of Trump’s and Clinton’s polling percentages, and adjusted polls that correct for the presence or absence of a third candidate, likely-voter status, smoothing, and the political bias of the pollster. Time series were built by averaging all national polls in a given week across all pollsters (see Supplement for a list of pollsters).
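A similar minimal sketch builds the weekly polling series by averaging all national polls that begin in a given week; the file and column names (‘startdate’, ‘rawpoll_trump’, ‘rawpoll_clinton’) are assumptions and may differ from FiveThirtyEight’s published files.

    import pandas as pd

    # Illustrative export of FiveThirtyEight's 2016 national polls; column
    # names are assumptions and may differ from the published CSV.
    polls = pd.read_csv("fivethirtyeight_national_polls_2016.csv",
                        parse_dates=["startdate"])

    # Average all national polls starting in a given week, across pollsters.
    weekly_polls = (polls.set_index("startdate")
                         .resample("W")[["rawpoll_trump", "rawpoll_clinton"]]
                         .mean())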

Our first VAR tests whether the weekly time series of re-tweets of IRA Twitter accounts, R_t, predicted the next week’s changes in election polls for Trump, T_t, and/or Clinton, C_t:

T_t ∼ T_{t-1} + R_{t-1}   (1)

C_t ∼ C_{t-1} + R_{t-1}   (2)

The VAR analysis is then repeated using ‘likes’ rather than re-tweets as the measure of R_t. Conversely, we also tested whether polling activity predicted IRA Twitter success, i.e., whether T_t or C_t predicted future R_t:

R_t ∼ R_{t-1} + T_{t-1}   (3)

R_t ∼ R_{t-1} + C_{t-1}   (4)

The Akaike Information Criterion indicates that a time lag of one week is optimal for these VAR tests. The statistical significance of each VAR result is assessed with a “Granger causality” test (Granger, 1969), a statistical test of prediction rather than of true causality.
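A minimal sketch of this procedure, using the statsmodels VAR implementation and the weekly series from the sketches above, is given below. First-differencing the series (so the VAR is fitted on week-to-week changes) and the specific column names are illustrative choices, not an exact reproduction of our pipeline.

    from statsmodels.tsa.api import VAR

    # Merge the weekly poll and IRA series built above; first-difference so the
    # VAR is fitted on week-to-week changes (an illustrative stationarity choice).
    df = (weekly_polls[["rawpoll_trump"]]
          .join(weekly_ira["retweets_per_tweet"], how="inner")
          .diff()
          .dropna())

    var = VAR(df)
    print(var.select_order(maxlags=4).summary())   # AIC guides the lag choice

    res = var.fit(1)                               # one-week lag, as in eq. (1)

    # Granger test: do IRA re-tweets predict Trump's polls? (cf. eq. 1)
    print(res.test_causality("rawpoll_trump", ["retweets_per_tweet"], kind="f").summary())

    # Reverse direction: do Trump's polls predict IRA re-tweet success? (cf. eq. 3)
    print(res.test_causality("retweets_per_tweet", ["rawpoll_trump"], kind="f").summary())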

We ran a number of robustness checks (see Supplement) that measured IRA Twitter activity and polling in different ways. We also controlled for the number of re-tweets of Donald Trump’s personal Twitter account, using data from the ‘Trump Twitter Archive’ (Brown, 2019), because it is a possible common cause of both the opinion polls and IRA Twitter activity.

 

++++++++++

Results

Timeline indicates disinformation strategy

IRA activity on Twitter was an order of magnitude larger than on other social media platforms: around 1,500 posts per week on Facebook and Instagram, compared with around 15,000 on Twitter (Howard, et al., 2018). Figure 2A shows that during the presidential election itself, this increased to above 25,000 posts per week. However, a greater volume of tweets did not necessarily translate into more retweets or likes per tweet (Figure 2C).

 

Figure 2: Number of IRA tweets per week during the election campaign.

 

The Mueller investigation concluded that Russia attempted to influence U.S. public opinion during the 2016 presidential election (Mueller, 2019). Such campaigns have been carried out in other countries too (Grigas, 2016; Spaiser, et al., 2017; Narayanan, et al., 2017; Neudert, et al., 2017; Bodine-Baron, et al., 2018) and exhibit a particular modus operandi (MO) that is high volume and multichannel, as well as rapid, continuous and repetitive (Paul and Matthews, 2016; Sanovich, 2017). This MO is evident in the timeline of Twitter bot activity (Figure 1a). The IRA Twitter activity appears to change abruptly at important political moments. After the San Bernardino shooting (2 December 2015), for example, five new IRA Twitter accounts, including “TEN GOP”, were introduced (Figure 1b). In the tweets from these accounts, ‘police’, ‘shooting’ and ‘muslim’ were among the most frequently used words (Figure 1b). The IRA Twitter accounts showcased in Figure 1 are the most prominent as measured by total number of tweets, retweets and followers. Incidentally, the number of re-tweets of these IRA accounts was correlated (r = 0.8) and co-evolving (see Supplement) with Trump’s personal Twitter activity, suggesting they had a similar audience.

Many of the most prominent IRA Twitter accounts imitated U.S. news sources and tweeted in convincing English (Table 1). Among the 25 most successful individual IRA tweets (see Supplement), two prominent themes emerge, in line with the Russian modus operandi (Paul and Matthews, 2016): discrediting an establishment figure in Hillary Clinton and emphasizing pre-existing societal divisions by focusing on black racial identity.

IRA Twitter success predicted election opinion polls

As the popularity of the presidential candidates ebbed and flowed during the 2016 campaign (Figure 2B), changes in opinion poll numbers for Trump were consistently preceded by corresponding changes in IRA re-tweet volume, with an optimum lag of one week (Figure 2C and 2D). Compared with its time average of about 38 percent, support for Trump increased to around 44 percent when IRA tweets were at their most successful (Figure 2D).

 

Table 1: Top 5 most retweeted tweets.
 

 

Vector autoregression (VAR) and Granger causality tests provide statistical support that IRA Twitter success (measured as both retweets and likes per tweet) predicted future increases for Trump in the polls, but did not predict Clinton’s polls (Figure 3). Conversely, neither set of opinion poll numbers correlated with future re-tweets or ‘likes’ of the IRA tweets.

 

Figure 3: Vector autoregression (VAR) showing that IRA Twitter success predicts future increases in Donald Trump’s polling.

 

This result proved robust when the analysis was run in a number of different ways: measuring Twitter success as the total number of re-tweets (not the average), shortening the time resolution to two days, using the polling end date (not the start date), using Twitter likes, and using adjusted polls. In none of these tests, however, does the raw number of original IRA tweets predict the polls. It is the re-tweets, not the total volume of original IRA tweets, that predict the opinion polls (see Supplement for robustness checks and Granger causality tests). We also found that IRA retweets still predicted Trump’s polls with the same magnitude when we controlled for the possible confounding effect of average weekly re-tweets from Trump’s personal Twitter account, P_t (see Supplement).

Overall, the effect is quantified such that a gain of 25,000 re-tweets per week across all IRA tweets (or about 10 extra re-tweets per tweet per week) predicted approximately a one percent increase in Donald Trump’s poll numbers.

 

++++++++++

Discussion and conclusions

Here we have (a) examined the timing of the IRA Twitter activity, which suggests a strategic release in parallel with significant political events before the 2016 election and (b) used vector autoregression (VAR) to test if the success of IRA activity on Twitter predicted changes in the 2016 election opinion polls. On a weekly time scale, we find that multiple time series of IRA tweet success robustly predicted increasing opinion polls for one candidate, but not the other. The opinion polls do not predict future success of the IRA tweets. The findings proved robust to many different checks.

The result, a one percent poll increase for the Republican candidate for every 25,000 weekly re-tweets of IRA messages, raises two questions about the effect: one regarding the magnitude and one regarding its asymmetry.

Here we have tested prediction, not causality. It seems unlikely that 25,000 re-tweets could influence one percent of the electorate in isolation (Guess, et al., 2019; Allcott, et al., 2019), although this might be more plausible than presumed at first glance: given that only about four percent of viewed tweets result in re-tweets (Lee, et al., 2015), 25,000 re-tweets could imply about 500,000 exposures to those messages per week. It is more likely, however, that Twitter was just one part of a larger disinformation campaign carried out on multiple social media platforms (Isaac and Wakabayashi, 2017; Howard, et al., 2018), as well as spread through social contagion (Centola, 2010) and through other parts of the interconnected ‘media ecosystem’, including print, radio and television (Benkler, et al., 2018). In this way IRA disinformation can frame the debate, meaning many more people than those directly exposed can be affected (Jamieson, 2018).

Any correlation established by an observational study could be spurious. Though our main finding proved robust and our time series analysis excludes reverse causation, there could still be a third variable driving the relationship between IRA Twitter success and U.S. election opinion polls. We controlled for one of these (the success of Donald Trump’s personal Twitter account), but others are more difficult to measure, including exposure to U.S. domestic media.

The asymmetrical effect we observed could be because specific groups and media outlets were targeted by the IRA (Jamieson, 2018; Miller, 2019) and those media outlets were particularly susceptible to disinformation (Benkler, et al., 2018), leading to considerably more re-tweets from those targeted groups (Badawy, et al., 2018).

We use macro-level data to establish a link between exposure to IRA disinformation and changes in U.S. public opinion. However, using aggregated data means we cannot know the extent to which the participants in election polls were exposed to IRA disinformation. This may not matter once social contagion (Centola, 2010) and media ecosystem effects (Benkler, et al., 2018) are taken into consideration. Nonetheless, establishing individual-level causal mechanisms should be a priority (Gerber and Zavisca, 2016; Spaiser, et al., 2017).

Here we have presented evidence that social media disinformation can measurably change public opinion polls. Though we focused on a particular high-profile example in 2016, social media propaganda is a growing problem affecting voting populations around the world, regardless of affiliation, and ought to be given serious attention in the future. Our study motivates future investigation that seeks to establish the causal mechanisms of disinformation exposure on the opinions and behavior of individuals. These future studies should measure exposure to all media in the media ecosystem, not just social media.

 

About the authors

Damian J. Ruck is a post-doctoral researcher with a joint appointment in the Departments of Anthropology and Information and Communication at the University of Tennessee.
Direct comments to: druck [at] utk [dot] edu

Natalie Manaeva Rice is Research Associate II in the Department of Information and Communication at the University of Tennessee.
E-mail: natalie [dot] manaeva [at] gmail [dot] com

Joshua Borycz is STEM Librarian at Vanderbilt University and a graduate assistant in information science at the University of Tennessee.
E-mail: jborycz [at] vols [dot] utk [dot] edu

R. Alexander Bentley is Professor and Head of the Department of Anthropology at the University of Tennessee.
E-mail: rabentley [at] utk [dot] edu

 

Acknowledgments

Special thanks to Suzie Allard for her helpful comments on early drafts. Damian J. Ruck is funded by the College of Arts and Sciences and the Office of Research and Engagement at the University of Tennessee. Natalie M. Rice is funded by an NSF-sponsored DataONE grant and a Minerva Research Initiative (Department of Defense) grant at the University of Tennessee. The authors declare that they have no competing financial interests.

 

References

H. Allcott and M. Gentzkow, 2017. “Social media and fake news in the 2016 election,” Journal of Economic Perspectives, volume 31, number 2, pp. 211–236.
doi: https://doi.org/10.1257/jep.31.2.211, accessed 18 June 2019.

H. Allcott, M. Gentzkow, and C. Yu, 2019. “Trends in the diffusion of misinformation on social media,” National Bureau of Economic Research, Working Paper, number 25500, at https://www.nber.org/papers/w25500, accessed 18 June 2019.
doi: https://doi.org/10.3386/w25500, accessed 18 June 2019.

A. Badawy, E. Ferrara, and K. Lerman, 2018. “Analyzing the digital traces of political manipulation: The 2016 Russian interference Twitter campaign,” 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining.
doi: https://doi.org/10.1109/ASONAM.2018.8508646, accessed 18 June 2019.

Y. Benkler, R. Faris, and H. Roberts, 2018. Network propaganda: Manipulation, disinformation, and radicalization in American politics. New York: Oxford University Press.
doi: https://doi.org/10.1093/oso/9780190923624.001.0001, accessed 18 June 2019.

E. Bodine-Baron, T. Helmus, A. Radin, and E. Treyger, 2018. “Countering Russian social media influence,” RAND Research Report, RR-2740-RC. Santa Monica, Calif.: RAND Corporation.
doi: https://doi.org/10.7249/RR2740, accessed 18 June 2019.

B. Brown, 2019. “Trump Twitter archive,” at http://www.trumptwitterarchive.com/about, accessed 14 March 2019.

D. Centola, 2010. “The spread of behavior in an online social network experiment,” Science, volume 329, number 5996 (3 September), pp. 1,194–1,197.
doi: https://doi.org/10.1126/science.1185231, accessed 18 June 2019.

FiveThirtyEight, 2017. “2016 election forecast: National polls” (8 November), at https://projects.fivethirtyeight.com/2016-election-forecast/national-polls/, accessed 24 February 2019.

R.K. Garrett, 2019. “Social media’s contribution to political misperceptions in U.S. Presidential elections,” PLoS ONE, volume 14, number 3, e0213500.
doi: https://doi.org/10.1371/journal.pone.0213500, accessed 18 June 2019.

T.P. Gerber and J. Zavisca, 2016. “Does Russian propaganda work?” Washington Quarterly, volume 39, number 2, pp. 79–98.
doi: https://doi.org/10.1080/0163660X.2016.1204398, accessed 18 June 2019.

C.W.J. Granger, 1969. “Investigating causal relations by econometric models and cross-spectral methods,” Econometrica, volume 37, number 3, pp. 424–438.
doi: https://doi.org/10.2307/1912791, accessed 18 June 2019.

A. Grigas, 2016. Beyond Crimea: The new Russian empire. New Haven, Conn.: Yale University Press.

N. Grinberg, K. Joseph, L. Friedland, B. Swire-Thompson, and D. Lazer, 2019. “Fake news on Twitter during the 2016 U.S. presidential election,” Science, volume 363, number 6425 (25 January), pp. 374–378.
doi: https://doi.org/10.1126/science.aau2706, accessed 18 June 2019.

A. Guess, J. Nagler, and J. Tucker, 2019. “Less than you think: Prevalence and predictors of fake news dissemination on Facebook,” Science Advances, volume 5, number 1 (9 January), eaau4586.
doi: https://doi.org/10.1126/sciadv.aau4586, accessed 18 June 2019.

S. Gunitsky, 2015. “Corrupting the cyber-commons: Social media as a tool of autocratic stability,” Perspectives on Politics, volume 13, number 1, pp. 42–54.
doi: https://doi.org/10.1017/S1537592714003120, accessed 18 June 2019.

P.N. Howard, B. Ganesh, D. Liotsiou, J. Kelly, and C. François, 2018. “The IRA, social media and political polarization in the United States, 2012–2018,” Project on Computational Propaganda, Working Paper, number 2018.2, at https://comprop.oii.ox.ac.uk/research/ira-political-polarization/, accessed 18 June 2019.

M. Isaac and D. Wakabayashi, 2017. “Russian influence reached 126 million through Facebook alone,” New York Times (30 October), at https://www.nytimes.com/2017/10/30/technology/facebook-google-russia.html, accessed 18 June 2019.

K.H. Jamieson, 2018. Cyberwar: How Russian hackers and trolls helped elect a president. New York: Oxford University Press.

K. Lee, J. Mahmud, J. Chen, M. Zhou, and J. Nichols, 2015. “Who will retweet this? Detecting strangers from Twitter to retweet information,” ACM Transactions on Intelligent Systems and Technology (TIST), volume 6, number 3, article number 31.
doi: https://doi.org/10.1145/2700466, accessed 18 June 2019.

V. Lysenko and C. Brooks, 2018. “Russian information troops, disinformation, and democracy,” First Monday, volume 23, number 5, at https://firstmonday.org/article/view/8176/7201, accessed 18 June 2019.
doi: https://doi.org/10.5210/fm.v22i5.8176, accessed 18 June 2019.

D.T. Miller, 2019. “Topics and emotions in Russian Twitter propaganda,” First Monday, volume 24, number 5, at https://firstmonday.org/article/view/9638/7785, accessed 18 June 2019.
doi: https://doi.org/10.5210/fm.v24i5.9638, accessed 18 June 2019.

R.S. Mueller, 2019. Report on the investigation into Russian interference in the 2016 presidential election. Washington, D.C.: U.S. Department of Justice; Volume I at https://www.justice.gov/storage/report_volume1.pdf, accessed 18 June 2019; Volume II at https://www.justice.gov/storage/report_volume2.pdf, accessed 18 June 2019.

V. Narayanan, P.N. Howard, B. Kollanyi, and M. Elswah, 2017. “Russian involvement and junk news during Brexit,” Computational Propaganda Project, Data Memo, number 2017.10, at https://comprop.oii.ox.ac.uk/research/working-papers/russia-and-brexit/, accessed 18 June 2019.

L.-M. Neudert, B. Kollanyi, and P.N. Howard, 2017. “Junk news and bots during the German parliamentary election: What are German voters sharing over Twitter?” Computational Propaganda Project, Data Memo, number 2017.7, at https://comprop.oii.ox.ac.uk/research/junk-news-and-bots-during-the-german-parliamentary-election-what-are-german-voters-sharing-over-twitter/, accessed 18 June 2019.

C. Paul and M. Matthews, 2016. “The Russian ‘firehose of falsehood’ propaganda model: Why it might work and options to counter it,” RAND Research Report, PE-198-OSD. Santa Monica, Calif.: RAND Corporation.
doi: https://doi.org/10.7249/PE198, accessed 18 June 2019.

E.G. Rød and N.B. Weidmann, 2015. “Empowering activists or autocrats? The Internet in authoritarian regimes,” Journal of Peace Research, volume 52, number 3, pp. 338–351.
doi: https://doi.org/10.1177/0022343314555782, accessed 18 June 2019.

S. Sanovich, 2017. “Computational propaganda in Russia: The origins of digital misinformation,” Computational Propaganda Project, Working Paper, number 2017.3, at https://comprop.oii.ox.ac.uk/research/junk-news-and-bots-during-the-german-parliamentary-election-what-are-german-voters-sharing-over-twitter/, accessed 18 June 2019.

F. Schoen and C.J. Lamb, 2012. “Deception, disinformation, and strategic communications: How one interagency group made a major difference,” Institute for National Strategic Studies Strategic Perspectives, number 11, at https://inss.ndu.edu/Media/News/Article/693590/deception-disinformation-and-strategic-communications-how-one-interagency-group/, accessed 18 June 2019.

V. Spaiser, T. Chadefaux, K. Donnay, F. Russmann, and D. Helbing, 2017. “Communication power struggles on social media: A case study of the 2011–12 Russian protests,” Journal of Information Technology & Politics, volume 14, number 2, pp. 132–153.
doi: https://doi.org/10.1080/19331681.2017.1308288, accessed 18 June 2019.

Z. Tufekci, 2017. Twitter and tear gas: The power and fragility of networked protest. New Haven, Conn.: Yale University Press.

Z. Tufekci and C. Wilson, 2012. “Social media and the decision to participate in political protest: Observations from Tahrir Square,” Journal of Communication, volume 62, number 2, pp. 363–379.
doi: https://doi.org/10.1111/j.1460-2466.2012.01629.x, accessed 18 June 2019.

Twitter, 2017. “IRA Twitter data,” at https://blog.twitter.com/, accessed 24 February 2019.

S. Vosoughi, D. Roy, and S. Aral, 2018. “The spread of true and false news online,” Science, volume 359, number 6380 (9 March), pp. 1,146–1,151.
doi: https://doi.org/10.1126/science.aap9559, accessed 18 June 2019.

A. Zelenkauskaite and B. Niezgoda, 2017. “Stop Kremlin trolls: Ideological trolling as calling out, rebuttal, and reactions on online news portal commenting,” First Monday, volume 22, number 5, at https://firstmonday.org/article/view/7795/6225, accessed 18 June 2019.
doi: https://doi.org/10.5210/fm.v22i5.7795, accessed 18 June 2019.

 

Supplement

Granger causality

We have shown that the success of Internet Research Agency (IRA) tweets R predicts a future increase in Donald Trump’s polls T, but not Clinton’s C. We also show that neither T nor C predicted future changes in R.

Here we show Granger causality tests demonstrating that IRA Twitter success predicts future increases in Trump’s opinion polls, and that statistical significance is not met in any other case. Table 1 contains results where IRA tweet success Granger-causes the opinion polls and Table 2 contains results where the opinion polls Granger-cause IRA tweet success.

 

Table 1: Granger causality tests for average IRA tweet success predicting election opinion polls.

 

 

Table 2: Granger causality tests for election opinion polls predicting average IRA tweet success.

 

Top 25 most retweeted tweets from IRA accounts

 

Table 3: 25 most retweeted IRA tweets.

 

Top 25 national pollsters

 

Table 4: 25 most frequent pollsters during the 2016 presidential campaign.

 

Robustness checks

In the main text, we showed that IRA tweet success predicted election opinion polls, whether we measure success as ‘retweets per tweet’ or ‘likes per tweet’. Furthermore, it does not matter whether we use adjusted or raw polling data. Here we add further robustness checks and show that we get the same results when we use a finer time resolution (two days instead of seven), use the total number of retweets rather than the average, and use the end date of the polls rather than the start date.

Two day time resolution

The finest time resolution we can attain for IRA Twitter activity and opinion polls is two days. Compared with the weekly analysis, the optimal VAR lag (determined by the Akaike Information Criterion) decreases slightly, from seven days to between four and six days. As Tables 5 and 6 show, the Granger causality structure is the same as in the main analysis.

 

Table 5: Two-day time resolution: Granger causality tests for average IRA tweet success predicting election polls.

 

 

Table 6: Two-day time resolution: Granger causality tests for election polls predicting average IRA tweet success.

 

Total number of retweets

In the main text, we assumed that the average number of retweets/likes per tweet was the appropriate measure of IRA Twitter success. However, the operative quantity could feasibly be the total number of retweets/likes. Tables 7 and 8 show that the same Granger-causal structure exists in both cases.

 

Table 7: Total number of retweets: Granger causality tests for total IRA tweet success predicting election polls.

 

 

Table 8: Total number of retweets: Granger causality tests for election polls predicting total IRA tweet success.

 

Polling end date (not start date)

Opinion polls take days or weeks to complete, so it is not clear whether we should use the start or the end date of a poll. In the main text we use the polling start date, but Tables 9 and 10 show that the Granger-causal structure does not change when polling end dates are used instead.

 

Table 9: Polling end date: Granger causality tests for average IRA tweet success predicting election polls.

 

 

Table 10: Polling end date: Granger causality tests for election polls predicting average IRA tweet success.

 

Total number of tweets

Our hypothesis is that IRA Twitter success influenced U.S. election polls. Therefore, we should expect the total number of unique tweets to be unrelated to the polls. To test this, we fitted an alternative VAR measuring IRA Twitter activity as the number of unique tweets per week.

We found no evidence that the total number of IRA tweets predicted election polls (Tables 11 and 12). If anything, we see weak evidence for an effect in the opposite direction, suggesting the possibility that IRA Twitter activity increased in response to Trump’s polling.

 

Table 11: Number of tweets: Granger causality tests for total number of IRA tweets predicting election polls.

 

 

Table 12: Number of tweets: Granger causality tests for election polls predicting total number of IRA tweets.

 

Controlling for success of Trump’s personal Twitter account

Vector autoregression (VAR) can only tell us whether changes in one time series (average IRA tweet success, R) precede changes in another (Donald Trump’s opinion polls, T). Therefore, to give us more confidence that the relationship is not spurious, we need to control for possible confounding variables.

One variable that could plausibly be causing changes in both Trump’s opinion polls and IRA success is domestic Republican-supporting U.S. media. Therefore, to control for this, we introduce a third time series: the average number of weekly re-tweets of Donald Trump’s personal Twitter account, P_t. We ran the following revised VAR:

T_t ∼ T_{t-1} + R_{t-1} + P_{t-1}

R_t ∼ R_{t-1} + T_{t-1} + P_{t-1}

P_t ∼ P_{t-1} + R_{t-1} + T_{t-1}
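A sketch of this three-variable VAR, following the same pattern as before and again using statsmodels, is given below; the Trump Twitter Archive file and column names are illustrative placeholders, and the weekly poll and IRA series are assumed to have been built as in the earlier sketches.

    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Illustrative export from the Trump Twitter Archive; names are placeholders.
    trump = pd.read_csv("trump_tweets_2016.csv", parse_dates=["created_at"])
    trump_weekly = (trump.set_index("created_at")
                         .resample("W")["retweet_count"]
                         .mean()
                         .rename("trump_rts_per_tweet"))

    # Three-variable system: Trump's polls (T), IRA re-tweet success (R) and the
    # control series from Trump's own account (P), differenced as before.
    df3 = (weekly_polls[["rawpoll_trump"]]
           .join(weekly_ira["retweets_per_tweet"], how="inner")
           .join(trump_weekly, how="inner")
           .diff()
           .dropna())

    res = VAR(df3).fit(1)

    # Does IRA success still predict Trump's polls once the control is included?
    print(res.test_causality("rawpoll_trump", ["retweets_per_tweet"], kind="f").summary())

    # Coevolution check: do the IRA and Trump Twitter series predict each other?
    print(res.test_causality("retweets_per_tweet", ["trump_rts_per_tweet"], kind="f").summary())
    print(res.test_causality("trump_rts_per_tweet", ["retweets_per_tweet"], kind="f").summary())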

We see in Figure 1 that introducing this control does not reduce the magnitude of the effect of average IRA retweets on Trump’s polls (still around 10 re-tweets per tweet for every percentage point). However, the effect becomes statistically insignificant because the error is now much larger. This is due to collinearity caused by the very high correlation between the success of Trump’s and the IRA’s tweets (r = 0.8). Figure 2 shows that Trump and IRA Twitter success are not merely correlated but coevolving (each time series predicts the other).

In conclusion: first, the effect of IRA retweets on Trump’s polls is undiminished when controlling for the success of Trump’s own personal Twitter account; second, Trump’s and the IRA’s Twitter successes are strongly correlated and coevolving.

 

Figure 1: Vector autoregression (VAR) for IRA Twitter success and 2016 election polls, controlling for the number of retweets from Trump’s personal Twitter account.

 

 

Figure 2: VAR results showing that Trump and IRA Twitter success coevolve.

 

 


Editorial history

Received 13 May 2019; revised 4 June 2019; revised 7 June 2019; revised 17 June 2019; accepted 17 June 2019.


Creative Commons License
“Internet Research Agency Twitter activity predicted 2016 U.S. election polls” by Damian J Ruck, Natalie M Rice, Joshua Borycz and R Alexander Bentley is licensed under a Creative Commons Attribution 4.0 International License.

Internet Research Agency Twitter activity predicted 2016 U.S. election polls
by Damian J. Ruck, Natalie Manaeva Rice, Joshua Borycz, and R. Alexander Bentley.
First Monday, Volume 24, Number 7 - 1 July 2019
https://firstmonday.org/ojs/index.php/fm/article/download/10107/8049
doi: http://dx.doi.org/10.5210/fm.v24i7.10107