State-sponsored “bad actors” increasingly weaponize social media platforms to launch cyberattacks and disinformation campaigns during elections. Social media companies, due to their rapid growth and scale, struggle to prevent the weaponization of their platforms. This study conducts an automated spear phishing and disinformation campaign on Twitter ahead of the 2018 United States midterm elections. A fake news bot account — the @DCNewsReport — was created and programmed to automatically send customized tweets with a “breaking news” link to 138 Twitter users, before being restricted by Twitter.
Overall, one in five users clicked the link, which could have potentially led to the downloading of ransomware or the theft of private information. However, the link in this experiment was non-malicious and redirected users to a Google Forms survey. In predicting users’ likelihood to click the link on Twitter, no statistically significant differences were observed between right-wing and left-wing partisans, or between Web users and mobile users. The findings signal that politically expressive Americans on Twitter, regardless of their party preferences or the devices they use to access the platform, are at risk of being spear phished on social media.
After revelations of the Cambridge Analytica scandal and Russian-backed influence operations during the 2016 U.S. election, social media platforms have increased their efforts to reduce the misuse of their platforms. Collectively, Facebook and Twitter have removed thousands of accounts linked to “bad actors,”  who engage in “platform manipulation”  to undermine trust in democracy. To date, much of the public’s focus on bad actors has been on the paid use of trolls to spread propaganda (Aro, 2016; Zelenkauskaite and Niezgoda, 2017) or the abuse of platforms’ advertising services by covert organizations (Nadler, et al., 2018).
However, bad actors also fashion social media into a much more concrete form of weaponry. State-sponsored cyber groups from Russia, Iran, and China increasingly weaponize social media platforms to conduct spear phishing attacks against Western governments (Bossetta, 2018a). Spear phishing relies on social engineering — essentially a form of trickery — to bait victims into taking an action that reveals sensitive information. Automation is a key feature of modern social engineering attacks, allowing attackers to conduct phishing attacks at scale (Ariu, et al., 2017).
Usually, phishing attacks occur through e-mail and rely on victims to click a malicious hyperlink, download an attachment laced with malware, or enter login credentials to a spoof Web site. If successful, a phishing attack can lead to the hijacking of a victim’s social media account, device, or private information.
Phishing remains the preferred method of state-sponsored actors to conduct cyberattacks. In 2017, “70 percent of successful security breaches associated with nation-state or state-affiliated actors involved phishing.” While difficult to quantify, only a small portion of these attacks likely occur via social media. Nevertheless, reports from cybersecurity firms estimate that spear phishing on social media rose 500 percent in 2016 (Proofpoint, 2017), tripled in 2017 (PhishLabs, 2018), dipped after platforms’ purge of fake accounts, but increased another 30 percent in the first half of 2018 (Proofpoint, 2018).
Precisely due to platforms’ efforts to remove bad actors and fake accounts, ordinary citizens on social media are now more valuable targets for state-sponsored phishing attacks than previously. The large-scale removal of inauthentic accounts raises the currency of real accounts for bad actors. If bad actors want to spread disinformation without being detected, they can hijack the accounts of real users who have established an authentic history through their interactions with a given platform over time.
Moreover, once a user’s account has been successfully hijacked, bad actors can pivot off their success and launch successive attacks on that user’s connections. Since users are more likely to open links from known connections rather than strangers (Seng, et al., 2018), bad actors can leverage compromised accounts to snowball an attack across a social network.
Taking seriously the political implications of large-scale cyberattacks on social media, the present study seeks to test the American public’s vulnerability to spear phishing on Twitter. Therefore, I ask:
How vulnerable are political Twitter users to spear phishing attacks on social media?
To answer the research question, the study tests the extent to which partisan Twitter users are likely to click a hyperlink sent by a fake news account. The “DC News Report,” an automated bot account created by the author, sent 138 Twitter users (77 right-wing partisans and 61 left-wing partisans) a link to a fabricated “breaking news” story about the 2018 midterm elections.
The results of the experiment reveal that 27 of the 138 users, or 20 percent, clicked the link. Three independent variables — partisanship, device, and time proximity to the election — were all found to be statistically insignificant predictors for clicking the link. This null finding suggests that the risk of being spear phished on Twitter crosscuts partisan lines as well as the type of device used to access the platform.
Important to note is that the link was non-malicious and redirected users only to a Google survey form, which users were then invited to fill in. However, bad actors could easily circumvent the filters of link shortening services to weaponize the link by redirecting users to a malicious Web site that harbors a malware payload.
The study proceeds as follows. First, I motivate conducting a cyberattack experiment against the backdrop of researchers’ increasingly limited access to social media data. Second, I explain why Twitter’s digital architecture facilitates spear phishing attacks. Third, I describe the experiment’s methodology and present the results. Finally, the study concludes with a discussion of its findings.
The case for experimental platform research
Amidst the cross-platform crackdown on bad actors and fake accounts, social media platforms have taken steps that limit researchers’ access to data. Most pointedly, Facebook restricted access to its Pages API on 4 April 2018 (Schroepfer, 2018), ultimately barring researchers from accessing what little public data the platform offered previously.
Twitter, although maintaining a more generous data policy than Facebook, introduced a new verification process for app developers on 24 July 2018 (Roth and Johnson, 2018). While meant to curb the misuse of Twitter by malicious bot accounts, the initiative does little to help researchers solve the automation problems that Twitter faces. Moreover, Twitter severely hampers researchers’ access to data older than one week, making it difficult to explain phenomena that are only brought to the public’s attention after the fact (such as state-sponsored disinformation).
To be fair, both Facebook and Twitter have traditionally been somewhat generous in allowing researchers limited access to their data. However, the Cambridge Analytica scandal (which was in part caused by an academic [Wong, et al., 2018]) and the public outcry around state-sponsored disinformation have led social platforms to retreat further into their walled gardens.
Clearly, though, the scale of Facebook and Twitter has exceeded the companies’ capabilities to competently monitor their own platforms. Time and time again, the platforms take steps to improve only after third-party actors, such as journalists, government agencies, and academics, alert the platforms to their own failures.
Facebook, for example, improved its advertising platform only after the investigative journalism organization ProPublica demonstrated that anti-Semitic categories could be targeted with ads on Facebook (Angwin, et al., 2017). Similarly, one week before the midterm elections, Vice News exposed that the new “Paid for by” feature required by Facebook to run political ads — an effort aimed to improve transparency (Leathern, 2018) — was flawed. Vice News successfully ran ads identical to those issued by the Russian Internet Research Agency (Агентство интернет-исследований) and attributed them to being “paid for by” prominent U.S. senators (Turton, 2018).
Twitter, although less a target for investigative journalism than Facebook, has unknowingly hosted a Russian-led cyberattack targeted at over 10,000 Pentagon employees (Calabresi, 2017). And even though the platform is active behind the scenes in shutting down bots, the scholarly work of Bastos and Mercea (2017) was integral to alerting British policy-makers of partisan botnets active during the Brexit referendum.
The point here is not to criticize the performance of Facebook and Twitter. Rather, my aim is to highlight that these platforms rely on external help to detect problems on their platforms and refine their policies. The magnitude of data flowing through tech giants like Facebook and Twitter is unprecedented, and it seemingly exceeds the monitoring capabilities of any one company or service.
Look no further than recent initiatives by Facebook and Twitter to enlist the help of select researchers to solve some of the platforms’ most pressing problems — in exchange for money and privileged access to data. Facebook is the first partner in the recently established Social Science One initiative, which aims to assist platforms to “produce social good, while protecting their competitive [market] advantage.”  The initiative is focused specifically around social media’s impact on elections and democracy.
Twitter, meanwhile, has recruited two teams of researchers from 230 proposals to “more deeply understand the concept of measuring conversational health,”  likely in response to growing public controversies around free speech censorship on the platform.
To summarize the above, Facebook and Twitter have failed at safeguarding the integrity of their platforms for users. Historically, these platforms have relied on external watchdogs to point out flaws in their systems to improve their policies. As the platforms throttle researchers’ public access to data, Facebook and Twitter’s new incentive structure is to hire academics and grant them privileged access to data.
These steps may be positive for the platforms in the long run, but state-sponsored bad actors do not play by the same rules in the short term. As select researchers slowly obtain and analyze data provided by Facebook and Twitter, bad actors are running experiments in real time with the purpose of influencing elections and undermining trust in democracy.
Therefore, I advocate meeting bad actors where they are and running ethical, non-manipulative experiments on social media during contemporary political events for two reasons. First and on the supply side, this method tests the responsiveness of platforms in shutting down malicious actors (which, as will be shown in this experiment, Twitter was relatively quick to do). Second and on the demand side, live experiments conducted on social media afford researchers the possibility to understand, test, and explain mechanisms that might help safeguard the public against bad actors in the future.
This study, employing a relatively basic and small-scale experimental design in comparison to state-sponsored actors, tests the vulnerability of partisan Twitter users to click malicious links from a fabricated news outlet. The study is a live field experiment that simulates a spear phishing cyberattack, and in order to accurately assess citizens’ vulnerability, the users selected for the study were not notified before the experiment.
However, users’ anonymity is protected and no personally identifiable information is reported in the study. The project received ethics approval from the University of Copenhagen and was conducted in full compliance with the European General Data Protection Regulation (GDPR).
Conducting live field experiments on social media platforms is not unprecedented. In cybersecurity studies, researchers have simulated phishing attacks on Twitter (Seymour and Tulley, 2016a) and Facebook (Benenson, et al., 2014) without obtaining prior consent from users. Similarly, in political communication studies, Vaccari, et al. (2017) and Chadwick, et al. (2018) solicited 22,000 and 39,639 Twitter users, respectively, to participate in a survey via several automated Twitter accounts.
The ability to automatically initiate contact with thousands of users on a non-paid basis is a particular feature of Twitter’s digital architecture. In the following section, I outline Twitter’s architectural features and explain why the platform is ripe for spear phishing by bad actors.
Twitter: The perfect digital architecture for automated spear phishing
Despite the glitzy features that users engage with on the front end, social media platforms are constructed from thousands of lines of unglamorous computer code. I refer to this back end code as a platform’s digital architecture: “the technical protocols that enable, constrain, and shape user behavior in a virtual space.” 
Every social media platform has a distinct digital architecture, which is constantly improved to achieve or maintain a competitive market advantage. A platform’s digital architecture directly impacts its available features, what they do, and how they work. As such, the digital architecture of a platform sets the parameters for how users create, engage with, and distribute political content on both the Web and mobile version of a social networking site (Bossetta, et al., 2017).
Typically, the average social media user engages with the features of a platform on the front end through its graphical user interface (GUI). For example, a user may signal a reaction to a Facebook post in the News Feed by tapping the “like” button on a mobile phone, or share a tweet to one’s followers on Twitter by clicking “retweet” with a mouse connected to a desktop.
However, most platforms allow for users to interact with the platform on the back end through an application programming interface (API). APIs allow users with programming knowledge to collect large amounts of data from the platform in a much faster manner than copying and pasting information from the GUI. In addition, most APIs also permit users to control an account directly through the platform’s back end by writing computer code, without having to access the GUI.
The ability to control an account’s content through an API opens up the possibility for account automation. On Twitter, for example, a user can write computer code for an account to automatically post, favorite, or retweet tweets, as well as send direct messages to other users. An automated account is typically referred to as a bot.
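As a concrete illustration of API-driven automation, the sketch below assembles the kind of request a bot would send to post a tweet. The `statuses/update` endpoint path comes from Twitter's v1.1 REST API, but the helper itself is a hypothetical simplification: it only builds the request and performs no network call or OAuth signing, both of which a working bot would also need.

```python
# Illustrative sketch of automated posting through Twitter's v1.1 REST
# API. build_status_request only assembles the request parameters; a
# real bot would additionally sign it with OAuth credentials and POST it.

API_BASE = "https://api.twitter.com/1.1"

def build_status_request(text, reply_to_id=None):
    """Assemble parameters for a statuses/update POST request."""
    params = {"status": text}
    if reply_to_id is not None:
        # Replying threads the tweet under the tweet being answered.
        params["in_reply_to_status_id"] = reply_to_id
    return {"method": "POST",
            "url": API_BASE + "/statuses/update.json",
            "params": params}

request = build_status_request("We work around the clock to bring you breaking news!")
```

Because the same function can be called in a loop, posting, favoriting, or messaging at scale requires no interaction with the GUI at all.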
As with a platform’s front end, the digital architecture of a platform also sets the criteria for APIs and, subsequently, automation. Relative to other social media platforms, Twitter’s digital architecture permits a high degree of automation and therefore hosts a “bot-friendly API.” As a result, Twitter hosts millions of bots among its 326 million monthly active users (Twitter, 2018a), although the exact number of bots on the platform is disputed.
Before Twitter’s bot crackdown, Varol, et al. (2017) estimated that bots comprised “between 9 and 15 percent of active Twitter accounts.” Twitter, in a U.S. Senate hearing held on 29 November 2017 and concerning social media’s influence in the 2016 U.S. election, stated that “false or spam accounts represent less than 5% of our MAUs” (Edgett, 2018). MAU is an abbreviation for monthly active users.
While programmers write bots on Twitter for a wide array of purposes, such as comedic entertainment or marketing, bots have also been deployed for a wide array of political purposes such as spreading disinformation or propaganda (for an overview, see Woolley, 2016). Political bots have been found to push an agenda in online debates leading up to elections (Bessi and Ferrara, 2016) as well as non-political contexts such as discussions around fandom (Bay, 2018).
Apart from electoral influence, automated bot accounts can be weaponized to conduct spear phishing attacks. Chhabra, et al. (2011) refer to Twitter as a “phisher’s paradise,” finding in their study that 89 percent of the accounts engaging in phishing on Twitter were automated. Apart from automation, five other aspects of Twitter’s digital architecture make the platform technologically conducive to spear phishing.
First, Twitter’s APIs allow for the collection of structured data on its users, which can be used to both discover potential targets for a spear phishing attack as well as perform reconnaissance on their whereabouts, interests, and connections (Bossetta, 2018a). The data mining of information about users, a technique referred to as open source intelligence (OSINT), is a common tactic in modern phishing attacks (Ariu, et al., 2017).
Second, Twitter’s digital architecture supports hyperlinking through short URLs such as those generated by https://bitly.com. Phishers utilize short URLs to obfuscate the identity of a malicious link (Nepali and Wang, 2016), and due to Twitter’s 280 character limit, short URLs are commonly used on the platform.
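The obfuscation mechanism is simple to see in code. The toy shortener below, a hypothetical sketch rather than any real service's API, maps a destination URL to a hash-derived code: nothing in the public string hints at the target domain, while the code-to-URL mapping lives privately with the shortening service.

```python
import hashlib

# Toy sketch of why a short URL obfuscates its destination. The domain
# "sho.rt" and the function are illustrative, not a real service.

def shorten(url, registry, domain="https://sho.rt"):
    # Derive a compact, opaque code from the destination URL.
    code = hashlib.sha256(url.encode()).hexdigest()[:7]
    registry[code] = url          # only the shortener holds this mapping
    return domain + "/" + code

registry = {}
short_url = shorten("https://malicious.example/payload", registry)
# The visible string reveals nothing about "malicious.example".
```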
Third, phishers can direct a tweet containing a malicious short URL at a particular user with the @mention feature, which sends a notification to that user and increases the chance that the tweet will be seen. Moreover, if a tweet begins with an @mention, the tweet “is visible in most circumstances only to the sender and addressee,” essentially hiding the attack from public view.
Fourth, users’ privacy settings on Twitter are set to open by default. Phishers can therefore @mention a targeted user without establishing prior contact.
Fifth, Twitter’s 280 character limit, as well as its timeline feed’s focus on chronology, create a platform profile where users go for breaking news (Osborne and Dredze, 2014). As a result, Twitter attracts a user base who spend time on the platform to satisfy a need for cognition (Hughes, et al., 2012). Since users regularly log into the platform for news and fulfill a cognitive need, they are primed to click links that seem to promise access to information.
Altogether, six elements of Twitter’s digital architecture facilitate spear phishing on the platform: account automation, open source intelligence gathering, short URL support, @mention notifications, open network structure, and a news-oriented content profile. In the following section, I outline how each of these architectural features were leveraged for the simulated cyberattack in this study.
Constructing the cyberattack
This study’s methodology is heavily inspired by the work of cybersecurity researchers Seymour and Tulley (2016b). They developed SNAP_R, a machine learning based Python tool that “automatically generat[es] spear-phishing posts on social media.” 
Succinctly, SNAP_R operates as follows. A list of Twitter accounts is fed into the tool, which connects to Twitter’s REST API and collects the recent tweets of each account. Then, the tool uses the Markovify Python module (Singer-Vine, 2015) to generate random tweets based on an algorithm that combines words and sentences that the user has expressed in previous tweets. As a result, the generated tweets mimic the language and interests expressed recently by the user, but they are varied enough as to not replicate any previous tweet. No output from the generated tweets is reported here to protect users’ anonymity.
Once tweets have been generated for each user, SNAP_R adds an @mention to the beginning of the tweet and appends a short URL to the end of the tweet. I reprogrammed the tool to also add an emoji of a hand pointing downwards (👇), to draw users’ attention to the tweet and nudge them to click the embedded link. The resulting tweet can be summarized as follows:
@TargetUser + Tweet generated by Markovify + 👇 + Short URL
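The generation and assembly steps above can be sketched as follows. This is a minimal bigram Markov chain written for illustration, not the actual Markovify module, and the handle, corpus, and URL are made up; it only shows the principle of recombining a user's own words into a novel sentence and wrapping it in the @mention + emoji + link template.

```python
import random

# Minimal bigram Markov sketch of the tweet-generation step, plus the
# assembly template described above. Illustrative only; the study used
# the Markovify module.

def build_chain(tweets):
    # Map each word to the words that have followed it in the corpus.
    chain = {}
    for tweet in tweets:
        words = tweet.split()
        for a, b in zip(words, words[1:]):
            chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, rng, max_words=12):
    # Walk the chain from a start word, picking random continuations.
    words = [start]
    while len(words) < max_words and words[-1] in chain:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

def assemble_attack_tweet(handle, text, short_url):
    # @mention + generated text + pointing-hand emoji + short URL
    return "@{} {} \N{WHITE DOWN POINTING BACKHAND INDEX} {}".format(
        handle, text, short_url)

rng = random.Random(7)
chain = build_chain(["the midterms are coming soon",
                     "the midterms will decide the senate"])
tweet = assemble_attack_tweet("TargetUser", generate(chain, "the", rng),
                              "https://sho.rt/abc1234")
```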
SNAP_R then leverages Twitter’s automation feature to send tweets to each user. Since an @mention begins each tweet, the user is alerted via a notification that they have been contacted. Moreover, and as mentioned in the previous section, tweets beginning with an @mention are largely hidden from public view.
The tweets generated and issued by SNAP_R are not directly visible on a user’s timeline; rather, they are located in the “Tweets & Replies” section of the graphical user interface. This means that if you were to visit the Twitter profile of a user who has been sent a tweet via SNAP_R, you would have to actively navigate to the “Tweets & Replies” section of that user’s profile in order to know that the tweet existed.
Another important feature to note relates to the tool’s short URL creation. SNAP_R uses Google’s link shortening service, https://goo.gl, which has three implications for spear phishing. First, the service hides the link’s real destination. Second, the link’s destination hides behind a well-known brand (i.e., Google). Third, the goo.gl service allows the tweet’s sender to monitor information such as when the link was clicked, from where, as well as from what type of device. This information can be used by spear phishers to detect patterns in clicks and optimize future attacks.
For this project, though, SNAP_R needed to be reprogrammed, since Google discontinued its goo.gl link shortening service and introduced a new service called Firebase Dynamic Links. Firebase Dynamic Links (FDL) offer much less detailed analytics than the goo.gl service; FDL only allows users to see the overall number of clicks received by a link.
However, FDL has the benefit of allowing users to customize exactly how short links are displayed on social media. Users can define the exact picture, headline, and byline of how a link will display on Twitter. Since this study’s interest concerns Twitter users’ likelihood to click information from a fake news outlet (discussed below), I customized the FDL to display on Twitter as shown in Figure 1.
Figure 1: Firebase Dynamic Link as shown on Twitter.
The embedded link’s picture and text were designed to convey political neutrality. Although no pre-testing was performed prior to the experiment to verify the link’s neutrality, the following steps were taken to minimize harm while maintaining ecological validity.
A scandal involving a “rising star” was chosen as the text for the byline, as such a news item would believably warrant the status of “breaking news” in today’s sensationalist media landscape. While the byline is misleading in referring to a non-existent scandal, harm is minimized by not mentioning any candidate or party involved in the scandal, so as to not influence a user’s political opinion.
Similarly, the “breaking news” image contains the colors of both major political parties (red and blue) and offers no further contextual information as the background is blurred. The image was selected from a Google Images search for “breaking news.” Although its copyright status could not be verified, the image appeared on multiple private blogs and is considered by the author to fall under copyright fair use for nonprofit, educational purposes.
Important to emphasize is that the link did not lead to any malicious destination; rather, it directed users to a short Google form that required answering three questions. The first asked users about the device they used to access the link: computer, smartphone, or tablet. The remaining two questions focused on users’ social media activity. One question asked users how often they checked Twitter, and the other asked users to indicate which other social media platforms they use. Figure 2 below displays the Google form.
Figure 2: Google form for survey.
Account creation and mock-up
The following section details the process of creating the @DCNewsReport Twitter account. The aim was to create an account using (mostly) free and anonymous methods, in order to make tracing the account to any individual person or entity difficult.
All Twitter accounts must be verified by providing a valid e-mail address. For this purpose, a Google Gmail account was created, which in turn requires verification by mobile text message. To verify the Gmail account, the Web site https://receivesms.xyz was used to receive the verification code to an online number associated with the site. Several of these Web sites exist and are often used to verify social media accounts without using one’s personal mobile number. Once the Gmail account was verified and activated, the @DCNewsReport Twitter account could be verified through e-mail.
With the @DCNewsReport up and running, the next step was to mimic a legitimate U.S. news organization. After browsing several American media outlets on Twitter, the Washington Post (@WashingtonPost) was chosen as a template due to its easily reproducible profile picture and cover photo, which featured the United States Capitol Building in Washington. A freelance graphic designer, contracted via the Web site Fiverr.com (http://fiverr.com), created a similar profile picture and cover photo for five U.S. dollars.
Figure 3 below depicts the real @WashingtonPost account and the created @DCNewsReport (all identifying information to real Twitter accounts in the photo have been blurred). Important to note is that the @DCNewsReport did not contain any profile description information, in order to avoid impersonating an authentic journalism outlet.
Figure 3: @WashingtonPost and @DCNewsReport Twitter accounts.
In order to post Twitter messages automatically through the platform’s API, one must also register a Twitter app developer account. As of July 2018, and likely due to the bot crackdown mentioned earlier, app developers must apply through Twitter and state their motivations for creating a developer account. However, the @DCNewsReport was established in June 2018 and only needed to undergo another mobile verification process to create an app developer account on Twitter.
This time, attempts to verify the number through online SMS Web sites failed. Instead, Google Voice was used to generate a number that successfully validated the @DCNewsReport’s Twitter developer account.
Apart from posting messages automatically, a Twitter developer account allows you to collect data at scale through Twitter’s freely accessible APIs. The Streaming API collects data in real time based on certain keywords. The Search API, meanwhile, collects data 7–10 days in the past. However, neither API delivers all tweets based on search criteria. For a full list of tweets, you need to pay for access to the firehose API, which can cost thousands of U.S. dollars.
Thus, for this project, only the Streaming API and Search API were used for data collection. Although only a subset of the overall tweets matching the search criteria were collected, the aim was only to collect a sample of Twitter users tweeting about the 2018 U.S. elections. The data were collected over two time periods and divided into two studies. Although the methodologies of both studies are similar, the first study was run on a small number of users as a pilot to avoid bot detection. As will be shown by the second study, this concern was warranted, as the @DCNewsReport was eventually blocked from posting tweets automatically.
Study 1: Data collection and user selection
For Study 1, Twitter’s Streaming API was queried using the package “rtweet” (Kearney, 2018) for the R programming language. The search query focused on tweets containing the word “midterm” (and the plural “midterms”) from 18–26 September 2018. This time period was heavily focused on controversy around Brett Kavanaugh, then a Supreme Court nominee fighting accusations of sexual harassment. The dataset originally included 393,872 tweets (and retweets) from 182,541 unique users.
Twitter would likely detect an account issuing unsolicited tweets (i.e., “spam”) to such a large number of users. Therefore, several steps were applied to filter the data. First, retweets and duplicate tweets were removed. This resulted in 59,913 tweets from 46,132 users.
To further divide the data, only accounts having more than 50 and fewer than 400 followers were kept. This step was taken for two reasons. First, removing accounts with under 50 followers helps reduce the number of bot accounts in the dataset. Many bots — including the one developed for this study — are not aimed at attracting followers; they simply spam messages to other users. On the other hand, removing accounts with more than 400 followers helps ensure that organizations, celebrities, and brands are not included in the data. Therefore, the resulting 15,153 users are likely to be real, ordinary Twitter users.
The timelines of each of these remaining users were harvested, and only tweets sent after June 2018 were retained. From this subset, users were kept in the dataset if they tweeted “MAGA” or “BlueWave”, offering a preliminary signal as to whether they were right-wing or left-wing users, respectively. MAGA stands for “Make America Great Again” and is often used as a slogan by Trump supporters. On the other side of the aisle, “BlueWave” is a reference to Democratic Party supporters voting in the elections to take power back from Republicans. A space was not included between “Blue” and “Wave”, in order to capture its use with a hashtag (i.e., “#BlueWave”). As a result of these filtering steps, the final tally was 113 MAGA tweeters and 120 BlueWave tweeters.
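The filtering funnel described in the preceding steps can be sketched in a few lines. The record fields (`is_retweet`, `followers`, `text`) are illustrative stand-ins, not Twitter's actual API schema: drop retweets and duplicates, keep the 50–400 follower band, then assign a preliminary partisan label from the keywords.

```python
# Sketch of the user-filtering funnel, with made-up field names.

def filter_and_label(tweets):
    seen, kept = set(), []
    for t in tweets:
        if t["is_retweet"] or t["text"] in seen:
            continue                       # remove retweets and duplicates
        seen.add(t["text"])
        if not (50 < t["followers"] < 400):
            continue                       # drop likely bots and big accounts
        if "MAGA" in t["text"]:
            t["lean"] = "right"
        elif "BlueWave" in t["text"]:      # no space, so "#BlueWave" matches
            t["lean"] = "left"
        else:
            continue                       # no preliminary partisan signal
        kept.append(t)
    return kept

sample = [
    {"text": "#BlueWave is coming", "followers": 120, "is_retweet": False},
    {"text": "MAGA 2018", "followers": 5000, "is_retweet": False},
    {"text": "#BlueWave is coming", "followers": 120, "is_retweet": False},
]
kept = filter_and_label(sample)  # only the first record survives
```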
The next step of the method aimed to select a small number of users in order to test the study’s hypotheses while avoiding bot detection. A manual coding was performed in order to make sure users met three criteria: they often tweeted about politics, their ideological views were consistent, and they tweeted from the same device.
For each of the MAGA and BlueWave tweeters, a random sample of 10 tweets was taken and compiled into a MS Excel spreadsheet. To be included in the final dataset, 80 percent of the sampled tweets for each user needed to be about politics, express a political preference (Republican or Democrat), and be sent from the same device. Fifty-one MAGA tweeters and 31 BlueWave tweeters met these criteria; the lower BlueWave count is mostly because those users issued their tweets from a diverse array of devices (e.g., the Web, smartphones, and iPads).
Right-wing users typically expressed tweets along issues such as: support for Donald Trump, the confirmation of Brett Kavanaugh, protest of Nike for supporting Colin Kaepernick, the use of the MAGA hashtag, criticism of migrant caravans, and conservative codewords like “snowflake” or “social justice warrior.”
Left-wing users, meanwhile, could be identified through tweeting expressions that signaled: criticism of Donald Trump, the rejection of Brett Kavanaugh, support for environmental protection legislation, support for the #metoo movement, and calls to action for Democrats to vote in the midterms (i.e., create a “blue wave”).
These remaining partisan users who tweeted from the same device were randomly sampled into four batches of 10 users. This step was taken so that each batch of users would be “attacked” on a weekday (Monday–Thursday). Every batch had five Web tweeters (tweeting from a computer) and five mobile tweeters (tweeting from an Android or iPhone). To segment by ideology, two of the batches included right-wing partisans while the other two comprised left-wing partisans. The final breakdown of users can be summarized as follows:
40 Users = (5 Web + 5 Mobile) × 2 Ideology × 2 Days
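The batch design above can be reconstructed as follows. This is a hypothetical sketch of the described procedure, with made-up pool keys and a weekday schedule inferred from the text: shuffle each ideology-by-device pool, then slice disjoint groups of five into four batches of ten.

```python
import random

# Sketch of the batch design: four weekday batches of ten users, each
# with five Web and five mobile tweeters of a single ideology.

def make_batches(pools, rng):
    # pools: {(ideology, device): list of at least ten user handles}
    for pool in pools.values():
        rng.shuffle(pool)
    schedule = [("Mon", "right"), ("Tue", "left"),
                ("Wed", "right"), ("Thu", "left")]
    batches = []
    for i, (day, ideology) in enumerate(schedule):
        k = i // 2                 # first or second batch for this ideology
        users = (pools[(ideology, "web")][5 * k:5 * k + 5]
                 + pools[(ideology, "mobile")][5 * k:5 * k + 5])
        batches.append({"day": day, "ideology": ideology, "users": users})
    return batches

rng = random.Random(0)
pools = {(i, d): ["{}_{}_{}".format(i, d, n) for n in range(10)]
         for i in ("right", "left") for d in ("web", "mobile")}
batches = make_batches(pools, rng)
```

Slicing (rather than repeated sampling) guarantees that no user appears in more than one batch.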
Lastly, each user was run through the Botometer bot checker (Varol, et al., 2017) to ensure that users were authentic. Botometer gives accounts a score from zero to five, with scores closer to five indicating bot-like activity. The mean score for the 40 users was 0.66, indicating most accounts were authentic. Only one account scored over two (receiving a 3.6), but upon manual inspection it was unclear whether the account was a bot or not. Thus, that user was left in the study.
Launching the attack
The first study ran between 22–25 October 2018 (Monday–Thursday). Right-wing users were targeted Monday and Wednesday, and left-wing users were targeted Tuesday and Thursday. Each day, the first tweet was sent at 3 PM EST. This time was chosen because, according to an analysis by the social media management platform Hootsuite, “The best time to post on Twitter [for engagement] is 3pm Monday to Friday.”
Previous research suggests less than one percent of Twitter users tag their tweets with geo-location information (Cheng, et al., 2010). Therefore, no attempt was made to segment users by time zone. Instead, the attack began at 3 PM EST and as discussed below, would continue for a time period corresponding to random sleep intervals. Thus, users living west of EST might have been targeted around 3 PM in their corresponding time zones as the attack progressed, but there is no simple way to discern when users actually came into contact with the tweet.
Once the first tweet was issued, the bot would “sleep” (i.e., remain inactive) for a random interval between 61 and 950 seconds in an attempt to avoid bot detection by Twitter. Similarly, after each attack tweet generated by SNAP_R, a tweet without an @mention or link would be issued following another such random interval. These tweets appeared on the @DCNewsReport’s main timeline and were randomly selected from a list of 100 tweets created by the author in a .txt file.
The tweets in the .txt file had small variations in content, such as: “The DC News Report checks our facts. We work around the clock to bring you breaking news!” and “We check our facts. That’s why we’re proud to be the DC News Report!” The purpose of these text-only tweets was to avoid Twitter’s bot filters, which actively seek out accounts that consistently @mention users or spam links.
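The pacing described above, a random 61–950 second sleep after every tweet, with a link-free filler tweet interleaved after each attack tweet, can be sketched as follows. The `plan_attack` helper is hypothetical (the study used SNAP_R), nothing is actually posted or slept here, and only two of the 100 filler variants are shown.

```python
import random

# Two of the 100 filler variants stored in the study's .txt file
FILLER_TWEETS = [
    "The DC News Report checks our facts. We work around the clock to bring you breaking news!",
    "We check our facts. That's why we're proud to be the DC News Report!",
]

def plan_attack(targets, rng=None):
    """Build a posting schedule: each attack tweet is followed by a random
    sleep and a filler tweet (itself followed by another random sleep).
    Returns (kind, payload, sleep_seconds) steps; no tweets are sent here."""
    rng = rng or random.Random(0)
    schedule = []
    for target in targets:
        schedule.append(("attack", target, rng.randint(61, 950)))
        schedule.append(("filler", rng.choice(FILLER_TWEETS), rng.randint(61, 950)))
    return schedule

steps = plan_attack(["@user1", "@user2"])
```

An actual run would post each payload and `time.sleep()` for the scheduled interval; separating planning from posting keeps the pacing logic testable.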
Each batch of ten users received its own Firebase Dynamic Link to separate any survey responses into batches. Due to how Twitter embeds links in tweets, the hyperlink itself was not shown to the user. In text form, the user only saw the @mention, the generated tweet, and the hand-pointing emoji. The FDL was then embedded into the tweet as shown above in Figure 1.
Link clicks were measured using Twitter’s own “Tweet activity dashboard.”  The feature allows a tweet sender to see how many times a link in a tweet was clicked, as well as other information such as number of retweets, likes, and profile clicks. Any engagement with a tweet listed by the dashboard was recorded except for impressions, which were influenced heavily by the researcher’s own monitoring of the tweets.
Results (Study 1)
Out of the original 40 users, 39 successfully received the tweet. One user’s account was protected (i.e., private and requiring manual approval to receive tweets) and therefore was unable to be targeted.
Overall, nine out of 39 users, or 23 percent, clicked the link included in the targeted tweet. Six were right-wing partisans, and three were left-wing. Looking at device type, five users who clicked the link were desktop users, and the other four were mobile phone users. In terms of other engagements, only two users out of the 40 sought more information about the tweet’s source by clicking on the @DCNewsReport’s profile.
Interestingly, four users from the first batch of right-wingers filled out the Google Forms survey, but no other users did. The survey respondents all indicated they check Twitter “multiple times per day” and are also active on Facebook and LinkedIn. Two indicated being active on Instagram, and one user reported being active on Reddit.
While these results offer some preliminary insights, a trial of 39 users is insufficient for generalization. The main purpose was to test whether such an experiment could be run while avoiding Twitter’s bot detection. Overall, the @DCNewsReport generated 78 tweets (39 attack tweets with links and 39 text-only tweets drawn from the .txt file). Therefore, a second study was conducted to increase the number of participants, as well as to uncover whether tweets sent in closer proximity to the midterms would increase the click rate.
Study 2 used the same search criterion as Study 1 (keyword: “midterm”), but this time the Search API was used to collect tweets retrospectively from 28 October 2018. Time constraints necessitated this approach: once bot detection was successfully avoided in the first study, a larger scale attack aimed to target more users in the final week before the midterm, so to perform the second study before the midterm date of 6 November 2018, data needed to be collected historically. Altogether 145,818 original tweets from 94,036 users were collected, since the Search API allows retweets to be filtered out at collection. After applying the same filtering steps as in Study 1, Study 2 yielded 277 MAGA tweeters and 153 BlueWave tweeters.
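A retrospective, retweet-free collection of this kind can be sketched as a parameter set for Twitter’s standard (v1.1) Search API. The study does not specify its exact query parameters, so the values below are illustrative assumptions; only the keyword, the retweet filtering, and the 28 October start date come from the text.

```python
# Hypothetical parameters for Twitter's standard (v1.1) Search API endpoint.
# The "-filter:retweets" operator excludes retweets at collection time, and
# "until" returns only tweets created before the given date (the standard
# search index reaches back roughly seven days, hence the time pressure).
params = {
    "q": "midterm -filter:retweets",
    "until": "2018-10-29",  # tweets from 28 October 2018 and earlier
    "count": 100,           # maximum page size for this endpoint
}
```

A client library (e.g., tweepy for Python, or the rtweet R package cited in the references) would pass these parameters and page through the results.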
As in Study 1, these users’ timelines were harvested and filtered to include only tweets from 1 September 2018 onwards, to ensure recency. A random sample of 10 tweets per user was loaded into a MS Excel file, and users were coded by partisanship and device type. In both cases, at least eight of the 10 sampled tweets needed to signal a clear partisan affiliation and publication from the same device.
The coding resulted in 196 users. Thirty right-wing and 30 left-wing partisans tweeted solely from a desktop, whereas 84 right-wing users and 52 left-wing users could be classified as mobile users. In Study 1, some mobile users were discarded in order to keep the distribution of left- and right-wing, as well as desktop and mobile, users balanced. To increase the number of participants in Study 2, however, all mobile users were included, leading to a mobile user population that skewed right-wing.
The 196 users were again divided into four batches over four days:
196 Users = (15 Web + ½ of Mobile per Ideology) × 2 Ideology × 2 Days
However, Twitter restricted the @DCNewsReport from posting automatically on the third day of Study 2, and therefore only the first two days are reported. The breakdown of the users from the first two days (n=98) is reported in Table 1.
Table 1: Study 2 participants by ideology and device.

Ideology      Desktop   Mobile
Right-wing    15        42
Left-wing     15        26
Study 2 ran on 29 and 30 October 2018, with the first tweet issued at 3 PM EST on both days. The average Botometer score for the users in Study 2 was .64, with 1.7 as the highest score. Thus, the probability is high that none of the users included in Study 2 were automated accounts.
Results (Study 2)
In Study 2, 18 out of 98 users (18 percent) clicked the link. Ten were right-wing partisans, and eight were left-wing. Given that more right-wing partisans (n=57) than left-wing partisans (n=41) were targeted in Study 2 in order to increase the sample size, the difference between the two ideologies’ likelihood to click is marginal at two percentage points (17 percent for right-wingers, 19 percent for left-wingers).
Looking at device type, six of the 30 desktop users clicked (20 percent), compared to 12 of the 68 mobile users (17 percent). Irrespective of ideology or device type, approximately one in five users were tricked into clicking the link. Only three right-wing and two left-wing partisans sought more information about the @DCNewsReport by clicking the account’s profile.
Interestingly, one left-wing user both liked and retweeted the link, but that user did not click the link (nor did any of that user’s followers). Moreover, one right-wing user who clicked the link, and also checked the @DCNewsReport’s profile three times, replied to the tweet. The reply angrily called out the account for being fake, associated the account with the Democratic Party, included profanity, and used a racial slur toward Hispanics. The generated tweet issued with the link related to the migrant caravan; the ethical implications of using machine learning in live experiments on social media are discussed in the final section.
As in Study 1, four users filled out the Google Forms survey but this time from the left-wing batch. All four participants expressed that they checked Twitter “multiple times per day.” Three reported also using Facebook, two reported using Instagram and LinkedIn, and one reported using Reddit. Below, I aggregate the results of the two studies to paint an overall picture of the findings.
Results (Overall and statistical tests)
In total, 27 out of 138 users (or 19.5 percent) clicked the link. Table 2 reports the overall Click Through Rate (CTR) for each group of targeted users, divided by: Ideology, Device Type, and Study (a proxy for time relative to election day). The CTR calculates the percentage of users who clicked in relation to the overall number of users in each group.
Table 2: Overall results and Click Through Rate (CTR) by Study, Ideology, and Device.

                      Desktop users           Mobile users
Study   Ideology   Click   Total   CTR     Click   Total   CTR
1       Right      3       10      30%     3       9       33%
1       Left       2       10      20%     1       10      10%
2       Right      4       15      27%     6       42      14%
2       Left       2       15      13%     6       26      23%
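The CTRs reported in Table 2 can be recomputed directly from the underlying click counts; a minimal sketch (the dict layout is illustrative, the counts are the study’s own):

```python
# (clicks, users targeted) per (study, ideology, device), from Table 2
results = {
    (1, "right", "desktop"): (3, 10), (1, "right", "mobile"): (3, 9),
    (1, "left", "desktop"): (2, 10),  (1, "left", "mobile"): (1, 10),
    (2, "right", "desktop"): (4, 15), (2, "right", "mobile"): (6, 42),
    (2, "left", "desktop"): (2, 15),  (2, "left", "mobile"): (6, 26),
}

# CTR = clicks / users targeted, expressed as a whole percentage
ctr = {g: round(100 * clicks / total) for g, (clicks, total) in results.items()}
total_clicks = sum(clicks for clicks, _ in results.values())
```

Summing the click column recovers the 27 total clicks reported in the text.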
Although the CTRs differ across time, ideology, and device, the relatively small number of users included here does not allow for sweeping generalizations about user behavior. Twitter ultimately prevented more users from being included in the study, as the platform restricted the @DCNewsReport from issuing automatic tweets on the seventh day of the experiment.
To examine whether ideology, device type, or time proximity to the election could explain users’ likelihood to click the breaking news link, chi-square tests were performed on the overall results for each variable separately. Ideology, device type, and time were coded as binary variables and tested for independence against click status. The results of the chi-square tests are reported in Tables 3, 4, and 5.
Table 3: Results of chi-square test for click by ideology.

Click status   Right-wing   Left-wing
Click          16 (21%)     11 (18%)
No click       61 (79%)     50 (82%)

Note: χ² = 0.163, df = 1, p = .686. Numbers in parentheses indicate column percentages; *p < .05.
Table 4: Results of chi-square test for click by device.

Click status   Desktop    Mobile
Click          11 (23%)   16 (18%)
No click       37 (77%)   74 (82%)

Note: χ² = 0.525, df = 1, p = .469. Numbers in parentheses indicate column percentages; *p < .05.
Table 5: Results of chi-square test for click by time period.

Click status   Study 1    Study 2
Click          9 (23%)    18 (20%)
No click       30 (77%)   81 (80%)

Note: χ² = 0.426, df = 1, p = .541. Numbers in parentheses indicate column percentages; *p < .05.
The chi-square tests show that there were no significant differences by users’ ideology (χ² (1, N = 138) = 0.163, p = .686), device type (χ² (1, N = 138) = 0.525, p = .469), or time proximity to election day (χ² (1, N = 138) = 0.426, p = .541). While minor differences can be observed in the percentage of users who clicked the link across these variables, the percentages ranged only between 18–23 percent. This null finding, and its relevance, is interpreted in the following and final section.
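The reported statistics can be reproduced from the counts in Tables 3–5 with a plain Pearson chi-square (no continuity correction); a standard-library sketch, where the `chi2_2x2` helper is written for this illustration:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    chi2 = 0.0
    for obs, r, k in ((a, 0, 0), (b, 0, 1), (c, 1, 0), (d, 1, 1)):
        expected = rows[r] * cols[k] / n  # row total * column total / N
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# Counts from Tables 3-5: (click, click, no-click, no-click) per column pair
ideology = chi2_2x2(16, 11, 61, 50)  # right-wing vs. left-wing
device = chi2_2x2(11, 16, 37, 74)    # desktop vs. mobile
period = chi2_2x2(9, 18, 30, 81)     # Study 1 vs. Study 2
```

For reference, scipy’s `chi2_contingency` reproduces these values only with `correction=False`, since it applies the Yates continuity correction to 2×2 tables by default.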
Discussion and conclusion
Combining the results from the two studies, 138 Twitter users were automatically issued personalized tweets that contained a link to “breaking news” about the 2018 U.S. midterm elections. The aim of the experiment was to answer the research question: How vulnerable are political Twitter users to spear phishing attacks on social media?
Overall, 27 out of 138 users (or 19.5 percent) clicked the link that could, if issued by a bad actor, have contained a malware payload. The results of this experiment therefore suggest that one in five political Twitter users are susceptible to spear phishing attacks.
Moreover, there were no statistically significant differences found between users’ likelihood to click the link based on ideology (right-wing or left-wing), device type (desktop or mobile), or time proximity to the election day (one or two weeks before the vote). This null finding suggests that political users on Twitter are equally susceptible to spear phishing attacks, regardless of their ideology or the type of device used to access the platform.
This finding is surprising, as most studies argue that right-wing users are more likely to share disinformation (Badaway, et al., 2018; Narayan, et al., 2018). Thus, we might expect that right-wing users are also more likely to click disinformation. However, the study’s design and results point to an important difference between the sharing and actual clicking of disinformation content.
Studies of disinformation tend to find that right-wing Twitter users are more likely to retweet, and thus amplify, dubious information. However, they do not offer a sense of how likely users are to click such stories. One study estimates that 59 percent of URLs shared on Twitter are never clicked (Gabielkov, et al., 2016), suggesting that sharing information on Twitter is not necessarily correlated with reading it (this study found one example of this phenomenon, when a left-wing user retweeted the link without clicking it). Although previous research suggests that right-wing users might be more apt to share disinformation on Twitter, the findings reported here demonstrate that users across party lines are susceptible to clicking a disinformation news item.
In addition, users of both political persuasions were highly unlikely to validate the source of disinformation. Only five percent of users clicked on the @DCNewsReport’s profile to check the account’s authenticity. Had users visited the account’s profile, they would have quickly uncovered that the outlet was not legitimate, since it only had one follower (the author) and contained no serious information.
Also worthy of note is that eight of the 27 users who clicked the link (or 30 percent) filled out the survey. While their responses varied on two of three questions, the survey participants were unanimous in stating that they checked Twitter “multiple times per day.” This result appears to align with the results of a study by Vishwanath (2015), who showed in an experimental design that habitual Facebook use correlated positively with the likelihood to be spear phished on the platform. It was not feasible to survey those who didn’t click this study’s fake news link, but future research should examine more thoroughly how the social media habits of individuals relate to their susceptibility to spear phishing and disinformation on social media.
Although exploratory and involving a small number of participants, the current study paints a bleak picture of Twitter users’ susceptibility to cyberattacks and inattentiveness to disinformation. Though only 20 percent of users clicked the link, those users’ accounts or devices could have been compromised by bad actors, which carries two serious implications.
First, bad actors could leverage an account or device takeover to inject ransomware or steal sensitive information, potentially leading to identity theft. Second, the connections of a compromised account may also be put at risk, with bad actors using the account to target other users in a network. In the context of election interference, compromised accounts could be manipulated to spew propaganda or disinformation. Since users are more likely to be mobilized in the democratic process when campaign messages are mediated through social media connections (Aldrich, et al., 2016), disinformation spread by a user’s connections might be more persuasive than the same content issued through strangers. Moreover, hijacked accounts with a history of engagement with the platform would be extremely difficult for Twitter’s filters to detect, allowing the hijacked accounts to operate for an extended period of time.
Adding to these concerns, the cyberattack simulated here was extremely basic and low-threat by design. State-sponsored bad actors with more resources, technological proficiency, and malicious intent could conduct a much more sophisticated attack involving multiple bot accounts and higher level targets, such as government personnel. Indeed, previous media and cybersecurity reports signal that such attacks targeting the Pentagon have already occurred on Twitter (Calabresi, 2017; ZeroFOX, 2017).
This study replicated a technique likely used in these Russian-backed attacks on U.S. government employees: the automatic generation of tweets using machine learning. Given the relatively black-box mechanics of machine learning, the use of such tools for research raises critical questions about ethics. For example, although this study sought to minimize harm and neutralize partisanship in the link’s stimuli, the automated text generation clearly angered one user, who replied with profanity to the @DCNewsReport’s tweet. While the generated tweet derived solely from that user’s previous language, the algorithm recombined denigrating phrasing the user had previously directed at someone else into a frame targeting President Trump. Yet, given the study’s focus on selecting political partisans, the issued tweet was unlikely to change the user’s political preferences.
The digital architectures of social media, and the manipulation of them, afford researchers the possibility to conduct live experiments on platforms. In some cases, these experiments are necessary to answer questions in the public interest, since the hurdles to achieving ecological validity in controlled settings can undermine the validity of the findings themselves. I chose to conduct a live experiment about spear phishing and disinformation on Twitter in real time, in a real setting, and during the very real political climate leading up to the 2018 midterm elections in the United States. Certainly this entails ethical risks, but careful attention to a study’s design can help tip the risk/reward ratio in science’s favor while producing findings that align more closely with real-world conditions than controlled experimental settings. Such design steps include: maintaining political neutrality, using only training corpora generated by users themselves, and carefully filtering users to minimize the impact of the treatment.
As a concluding note, one of the primary motivations of this study is to encourage social science researchers to take up the fight against cyber threats and disinformation. In an era of API lockdowns, scholars need to move beyond reliance on platform-provided data and actively conduct experiments to answer questions in the public interest. This study marks an exploratory step toward doing so, and it is limited by several factors. The study examines one platform and one national context, of which the participant population is not representative. Further, the experimental design encountered resistance from Twitter itself and was blocked from completion. Still, the study’s findings call for more research to uncover the mechanisms that underpin users’ susceptibility to spear phishing, disinformation, and online news seeking more broadly.
About the author
Michael Bossetta is a Ph.D. Fellow in the Department of Political Science at the University of Copenhagen. His research interests primarily revolve around citizens’ use of social media during elections, the design of social media platforms, and how platform design is manipulated to conduct cyberattacks. He is the producer and host of the Social Media and Politics podcast. You can follow him on Twitter @MichaelBossetta and the podcast @SMandPPodcast.
E-mail: mjb [at] ifs [dot] ku [dot] dk
1. Facebook, 2018, https://newsroom.fb.com/news/2018/07/removing-bad-actors-on-facebook/.
2. Harvey and Roth, 2018, https://blog.twitter.com/official/en_us/topics/company/2018/an-update-on-our-elections-integrity-work.html.
3. Verizon, 2018, p. 12.
4. Social Science One, 2018, https://socialscience.one/our-facebook-partnership.
5. Gadde and Gasca, 2018, https://blog.twitter.com/official/en_us/topics/company/2018/measuring_healthy_conversation.html.
6. Bossetta, 2018b, p. 473.
7. Seymour and Tulley, 2016a, p. 1.
8. Varol, et al., 2017, p. 1.
9. Chhabra, et al., 2011, p. 97.
10. Bruns and Moe, 2014, p. 19.
11. Seymour and Tulley, 2016b, https://github.com/zerofox-oss/SNAP_R.
12. Google, 2018, https://firebase.google.com/docs/dynamic-links/.
13. Aynsley, 2018, https://blog.hootsuite.com/best-time-to-post-on-facebook-twitter-instagram/#twitter.
14. Twitter, 2018b, https://help.twitter.com/en/managing-your-account/using-the-tweet-activity-dashboard.
John H. Aldrich, Rachel K. Gibson, Marta Contijoch, and Tobias Konitzer, 2016. “Getting out the vote in the social media era: Are digital tools changing the extent, nature and impact of party contacting in elections?” Party Politics, volume 22, number 2, pp. 165–178.
doi: https://doi.org/10.1177/1354068815605304, accessed 14 November 2018.
Julia Angwin, Madeleine Varner, and Ariana Tobin, 2017. “Facebook enabled advertisers to reach ‘Jew Haters’,” ProPublica (14 September), at https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters, accessed 5 November 2018.
David Ariu, Enrico Frumento, and Giorgio Fumera, 2017. “Social engineering 2.0: A foundational work,” CF ’17: Proceedings of the Computing Frontiers Conference, pp. 319–325.
doi: https://doi.org/10.1145/3075564.3076260, accessed 14 November 2018.
Jessikka Aro, 2016. “The cyberspace war: Propaganda and trolling as warfare tools,” European View, volume 15, number 1, pp. 121–132.
doi: https://doi.org/10.1007/s12290-016-0395-5, accessed 5 November 2018.
Michael Aynsley, 2018. “The best time to post on Instagram, Facebook, Twitter, and LinkedIn,” Hootsuite (5 March), at https://blog.hootsuite.com/best-time-to-post-on-facebook-twitter-instagram/#twitter, accessed 10 November 2018.
Marco T. Bastos and Dan Mercea, 2017. “The Brexit botnet and user-generated hyperpartisan news,” Social Science Computer Review (10 October).
doi: https://doi.org/10.1177/0894439317734157, accessed 5 November 2018.
Morten Bay, 2018. “Weaponizing the haters: The Last Jedi and the strategic politicization of pop culture through social media manipulation,” First Monday, volume 23, number 11, at https://firstmonday.org/article/view/9388/7603, accessed 6 November 2018.
doi: http://dx.doi.org/10.5210/fm.v23i11.9388, accessed 14 November 2018.
Zinaida Benenson, Anna Girard, Nadina Hintz, and Andreas Luder, 2014. “Susceptibility to URL-based Internet attacks: Facebook vs. email,” 2014 IEEE International Conference on Pervasive Computing and Communication Workshops (PERCOM), pp. 604–609.
doi: http://dx.doi.org/10.1109%2FPerComW.2014.6815275, accessed 14 November 2018.
Alessandro Bessi and Emilio Ferrara, 2016. “Social bots distort the 2016 U.S. Presidential election online discussion,” First Monday, volume 21, number 11, at https://firstmonday.org/article/view/7090/5653, accessed 6 November 2018.
doi: http://dx.doi.org/10.5210/fm.v21i11.7090, accessed 14 November 2018.
Michael Bossetta, 2018a. “The weaponization of social media: Spear phishing and cyberattacks on democracy,” Journal of International Affairs, volume 71, number 1.5, pp 97–106, and at https://jia.sipa.columbia.edu/weaponization-social-media-spear-phishing-and-cyberattacks-democracy, accessed 5 November 2018.
Michael Bossetta, 2018b. “The digital architectures of social media: Comparing political campaigning on Facebook, Twitter, Instagram, and Snapchat in the 2016 U.S. election,” Journalism & Mass Communication Quarterly, volume 95, number 2, pp. 471–496.
doi: https://doi.org/10.1177/1077699018763307, accessed 14 November 2018.
Michael Bossetta, Anamaria Dutceac Segesten, and Hans-Jörg Trenz, 2017. “Engaging with European politics through Twitter and Facebook: Participation beyond the national?” In: Mauro Barisione and Asimina Michailidou (editors). Social media and European politics: Rethinking power and legitimacy in the digital era. London: Palgrave Macmillan, pp. 53–76.
doi: https://doi.org/10.1057/978-1-137-59890-5_3, accessed 14 November 2018.
Axel Bruns and Hallvard Moe, 2014. “Structural layers of communication on Twitter,” In: Katrin Weller, Axel Bruns, Jean Burgess, Merja Mahrt, and Cornelius Puschmann (editors). Twitter and society. New York: Peter Lang, pp. 15–28; version at http://eprints.qut.edu.au/66321/1/Twitter_and_Society_(2014).pdf, accessed 6 November 2018.
doi: https://doi.org/10.3726/978-1-4539-1170-9, accessed 14 November 2018.
Massimo Calabresi, 2017. “Inside Russia’s social media war on America,” Time (18 May), at http://time.com/4783932/inside-russia-social-media-war-america/, accessed 5 November 2018.
Andrew Chadwick, Cristian Vaccari, and Ben O’Loughlin, 2018. “Do tabloids poison the well of social media? Explaining democratically dysfunctional news sharing,” New Media & Society, volume 20, number 11, pp. 4,255–4,274.
doi: https://doi.org/10.1177/1461444818769689, accessed 14 November 2018.
Zhiyuan Cheng, James Caverlee, and Kyumin Lee, 2010. “You are where you tweet: A content-based approach to geo-locating Twitter users,” CIKM ’10: Proceedings of the 19th ACM International Conference on Information and Knowledge Management, pp. 759–768.
doi: https://doi.org/10.1145/1871437.1871535, accessed 12 November 2018.
Sidharth Chhabra, Anupama Aggarwal, Fabricio Benevenuto, and Ponnurangam Kumaraguru, 2011. “Phi.sh/$oCiaL: The phishing landscape through short URLs,” CEAS ’11: Proceedings of the 8th Annual Collaboration, Electronic messaging, Anti-Abuse and Spam Conference, pp. 92–101.
doi: https://doi.org/10.1145/2030376.2030387, accessed 6 November 2018.
Sean Edgett, 2018. “Questions for the record: Senate Select Committee on Intelligence, Hearing on social media influence in the 2016 US elections November 29, 2017,” at https://www.intelligence.senate.gov/sites/default/files/documents/Twitter%20Response%20to%20Committee%20QFRs.pdf, accessed 6 November 2018.
Facebook, 2018. “Removing bad actors on Facebook” (31 July), at https://newsroom.fb.com/news/2018/07/removing-bad-actors-on-facebook/, accessed 14 November 2018.
Vijaya Gadde and David Gasca, 2018. “Measuring healthy conversation” (30 July), at https://blog.twitter.com/official/en_us/topics/company/2018/measuring_healthy_conversation.html, accessed 5 November 2018.
Google, 2018. “Firebase Dynamic Links,” at https://firebase.google.com/docs/dynamic-links/, accessed 14 November 2018.
Del Harvey and Yoel Roth, 2018. “An update on our elections integrity work” (1 October), at https://blog.twitter.com/official/en_us/topics/company/2018/an-update-on-our-elections-integrity-work.html, accessed 14 November 2018.
Michael W. Kearney, 2018. “rtweet: Collecting twitter data,” R package, version 0.6.8, at https://cran.r-project.org/package=rtweet, accessed 6 November 2018.
Rob Leathern, 2018. “Shining a light on ads with political content,” Facebook (24 May), at https://newsroom.fb.com/news/2018/05/ads-with-political-content/, accessed 5 November 2018.
Anthony Nadler, Matthew Crain, and Joan Donovan, 2018. “Weaponizing the digital influence machine: The political perils of online ad tech,” Data & Society Research Institute, at https://datasociety.net/wp-content/uploads/2018/10/DS_Digital_Influence_Machine.pdf, accessed 5 November 2018.
Raj Kumar Nepali and Yong Wang, 2016. “You look suspicious!!: Leveraging visible attributes to classify malicious short URLs on Twitter,” Proceedings of the 49th Hawaii International Conference on System Sciences, pp. 2,648–2,655.
doi: https://doi.org/10.1109/HICSS.2016.332, accessed 14 November 2018.
PhishLabs, 2018. “2018 Phishing trends & intelligence report: Hacking the human,” at https://info.phishlabs.com/hubfs/2018%20PTI%20Report/PhishLabs%20Trend%20Report_2018-digital.pdf, accessed 5 November 2018.
Proofpoint, Inc., 2018. “Quarterly threat report Q2 2018,” at https://www.proofpoint.com/sites/default/files/pfpt-us-tr-q218-quarterly-threat-report.pdf, accessed 5 November 2018.
Proofpoint, Inc., 2017. “Q4 2016 & year in review: Threat summary,” at https://www.proofpoint.com/us/threat-insight/threat-reports, accessed 5 November 2018.
Yoel Roth and Rob Johnson, 2018. “New developer requirements to protect our platform” (24 July), at https://blog.twitter.com/developer/en_us/topics/tools/2018/new-developer-requirements-to-protect-our-platform.html, accessed 5 November 2018.
Mike Schroepfer, 2018. “An update to our plans to restrict data access on Facebook,” Facebook (4 April) at https://newsroom.fb.com/news/2018/04/restricting-data-access/, accessed 5 November 2018.
Social Science One, 2018. “Our Facebook partnership,” at https://socialscience.one/our-facebook-partnership, accessed 5 November 2018.
Sovantharith Seng, Mahdi Nasrullah Al-Ameen, and Matthew Wright, 2018. “Understanding users’ decision of clicking on posts in Facebook with implications for phishing,” Workshop on Technology and Consumer Protection (ConPro 18), at https://www.ieee-security.org/TC/SPW2018/ConPro/papers/seng-conpro18.pdf, accessed 5 November 2018.
John Seymour and Philip Tully, 2016a. “Weaponizing data science for social engineering: Automated E2E spear phishing on Twitter,” at https://www.blackhat.com/docs/us-16/materials/us-16-Seymour-Tully-Weaponizing-Data-Science-For-Social-Engineering-Automated-E2E-Spear-Phishing-On-Twitter-wp.pdf, accessed 6 November 2018.
John Seymour and Philip Tully, 2016b. “SNAP_R,” at https://github.com/zerofox-oss/SNAP_R, accessed 14 November 2018.
Jeremy Singer-Vine, 2015. “Markovify,” Python module version 0.7.1, at https://github.com/jsvine/markovify, accessed 10 November 2018.
William Turton, 2018. “We posed as 100 Senators to run ads on Facebook. Facebook approved all of them,” Vice (30 October), at https://news.vice.com/en_us/article/xw9n3q/we-posed-as-100-senators-to-run-ads-on-facebook-facebook-approved-all-of-them, accessed 5 November 2018.
Twitter, 2018a. “Form 10–Q: October 30, 2018,” at https://investor.twitterinc.com/sec-filings/sec-filing/10-q/0001564590-18-025638, accessed 6 November 2018.
Twitter, 2018b. “About your activity dashboard,” at https://help.twitter.com/en/managing-your-account/using-the-tweet-activity-dashboard, accessed 6 November 2018.
Cristian Vaccari, Andrew Chadwick, and Ben O’Loughlin, 2015. “Dual screening the political: Media events, social media, and citizen engagement,” Journal of Communication, volume 65, number 6, pp. 1,041–1,061.
doi: https://doi.org/10.1111/jcom.12187, accessed 14 November 2018.
Onur Varol, Emilio Ferrara, Clayton A. Davis, Filippo Menczer, and Alessandro Flammini, 2017. “Online human-bot interactions: Detection, estimation, and characterization,” arXiv (9 March), at https://arxiv.org/abs/1703.03107, accessed 6 November 2018.
Verizon, 2018. “2018 Data breach investigations report,” Eleventh edition, at https://enterprise.verizon.com/content/dam/resources/reports/2018/DBIR_2018_Report.pdf, accessed 5 November 2018.
Arun Vishwanath, 2015. “Habitual Facebook use and its impact on getting deceived on social media,” Journal of Computer-Mediated Communication, volume 20, number 1, pp. 83–98.
doi: https://doi.org/10.1111/jcc4.12100, accessed 14 November 2018.
Julie C. Wong, Paul Lewis, and Harry Davies, 2018. “How academic at the centre of Facebook scandal tried — and failed — to spin personal data into gold,” Guardian (24 April), at https://www.theguardian.com/news/2018/apr/24/aleksandr-kogan-cambridge-analytica-facebook-data-business-ventures, accessed 12 November 2018.
Samuel C. Woolley, 2016. “Automating power: Social bot interference in global politics,” First Monday, volume 21, number 4, at https://firstmonday.org/article/view/6161/5300, accessed 6 November 2018.
doi: http://dx.doi.org/10.5210/fm.v21i4.6161, accessed 14 November 2018.
Asta Zelenkauskaite and Brandon Niezgoda, 2017. “‘Stop Kremlin trolls:’ Ideological trolling as calling out, rebuttal, and reactions on online news portal commenting,” First Monday, volume 22, number 5, at https://firstmonday.org/article/view/7795/6225, accessed 12 November 2018.
doi: http://dx.doi.org/10.5210/fm.v22i5.7795, accessed 14 November 2018.
ZeroFOX, 2017. “Russia just used Trump’s favorite social network to hack the US government” (18 May), at https://www.zerofox.com/blog/russia-just-used-trumps-favorite-social-network-hack-us-government/, accessed 10 November 2018.
Received 10 November 2018; revised 12 November 2018; accepted 12 November 2018.
To the extent possible under law, Michael Bossetta has waived all copyright and related or neighboring rights to the paper “A simulated cyberattack on Twitter: Assessing partisan vulnerability to spear phishing and disinformation ahead of the 2018 U.S. midterm elections.”
A simulated cyberattack on Twitter: Assessing partisan vulnerability to spear phishing and disinformation ahead of the 2018 U.S. midterm elections
by Michael Bossetta.
First Monday, Volume 23, Number 12 - 3 December 2018