First Monday

Trump bots and algorithmic experimentation on Twitter by Randall M. Livingstone

Although media headlines about Twitterbots often revolve around their malicious uses (e.g., inflating follower counts, pushing polarizing messages, confusing users), many bots add value to the platform by contributing informational or creative content. Bot creators have used the relatively open Twitter API to experiment with automation, and over the past five years, a growing number of accounts, dubbed here the Trump bots, have filled Twitter with Donald Trump’s own words filtered through the many possible concepts of a Twitterbot. This study considers the breadth of Twitterbots created around Trump’s Twitter persona through a qualitative categorization of 59 Trump bots, examining how these projects match many of the major threads of algorithmic experimentation used by the Twitterbot maker community at large. Three major functions of Trump bots — language play, information distribution, and direct criticism — are identified and explored through relevant exemplars.






At 5 PM EST each day from 2017 to early 2021, the countdown was on. At that time, the Twitter account Trump Countdown (@Trumpgone) tweeted what percentage of Donald Trump’s first term as president was completed and how many days were left in the term. Created by user @colin_barnes in 2016 even before Trump’s term in office began, Trump Countdown was a software robot (or “bot”) that automatically posted its countdown status on Twitter each day to its over 700 followers (Botwiki, 2020). And Trump Countdown was not alone in its algorithmic charge. At least three other Twitterbot accounts tracked the time left in the U.S. president’s term, including When Is Trump Gone? (@DaysLeft4Trump), which had nearly 30,000 followers and proudly declared in its Twitter bio “Blocked by @realDonaldTrump after only 140 days” (Bathman, 2020).

Bots — automated programs designed to perform specific tasks — are becoming ubiquitous on Internet platforms and in Internet culture (Martineau, 2018). They maintain structure and protocol on information sites like Wikipedia (Geiger, 2017; Zheng, et al., 2019), interact with customers on business Web sites (Crolic, et al., 2022), and crawl the Web to maintain search engines (Google, 2009). Much recent attention has focused on malicious implementations of bots, which have been tools of misinformation and false representation for groups looking to disrupt democratic processes (Luceri, et al., 2021; Stella, et al., 2018; Woolley, 2016). These disruptive efforts have overshadowed the often meaningful contributions that many bots make to user experiences, as well as the thoughtful and creative work of bot makers, especially on social media platforms like Twitter.

One notable group of recent Twitterbots was developed around the once inescapable Twitter presence of Donald Trump. Over the past five years, these bot accounts have filled Twitter with Donald Trump’s own words filtered through the many possible concepts of a Twitterbot. Often critical or humorous, these accounts reformatted Trump’s tweets onto “official” White House letterhead (@TrumpTweetsWH), translated his words into the Star Wars character Jar Jar Binks’ phonetic accent (@DJarJarTrump), and used AI neural networks to predict his next tweet (@DeepDrumpf), to name a few. This research considers the breadth of Twitterbots created around Trump’s Twitter persona, examining how these projects match many of the major threads of algorithmic experimentation used by the Twitterbot maker community at large.



Bots and Twitter

Recent research on bots has identified common forms and functions of these programs, both in general use and on specific platforms. Working in the context of broader bot implications for the Internet, some of these studies use the apparent intent of bots, whether malicious/malevolent or benign/benevolent, as a major categorization (Stieglitz, et al., 2017; Tsvetkova, et al., 2017). Other studies in this vein label bots with disruptive or ill intent “spambots” (Gorwa, et al., 2018; Oentaryo, et al., 2016), a common term originating from unwanted messages on early message boards and e-mail systems. Some typologies are built using intent and a second variable; Stieglitz, et al. (2017) use the degree to which a bot imitates human behavior, while Tsvetkova, et al. (2017) focus on the activity performed by the bot. Other categorizations dig more exclusively into the functions of bots as the major distinguishing factor (Oentaryo, et al., 2016) or look specifically at how bots are employed on certain platforms like Twitch (Seering, et al., 2018) and Wikipedia (Zheng, et al., 2019).

Veale, et al. (2015) simply define a Twitterbot as “an autonomous software system that generates and tweets messages of its own design and composition.” Of course, this automation arises from the design and coding of bot makers, who work within the affordances of Twitter to create bots that range from useful to absurd. Over the past decade, an informal community has coalesced on the platform around the hashtag #botALLY, with bot makers sharing ideas, implementations, and support (Parrish, 2016; Veale and Cook, 2018). A series of bot summits have also been organized by prominent bot maker Darius Kazemi (@tinysubversions) to bring this community of interest together off-line (Kazemi, 2016). While most Twitterbots have relatively small followings, some bots have hundreds of thousands of followers, including a bot that generates a sound on the hour like Big Ben (@big_ben_bot), a bot that creates a progress bar as the year passes (@year_progress), and a bot that places the curse “Fuck” in front of every word in the English language (@fckeveryword).

Bot makers have explored a range of possibilities in the Twitterbot form, from the simple to the complex. According to Veale and Cook (2018), most Twitterbots can be categorized by three central functions: tweeting, searching, and responding. Feed bots systematically tweet out content based on the concept and schedule for the bot, Watcher bots monitor public Twitter activity and tweet when certain conditions are met, and Interactor bots respond to engagement from other users (Veale and Cook, 2018). In conjunction with these categories, some bots engage in other niche functions like mashing together content, making political statements, and employing AI techniques for content creation. Veale and Cook (2018) explain:

“Ambitious designers aspire for their best bots to operate as thought-provoking conceptual artists, exploring a space of often surprising possibilities, while others are built to be the software equivalent of street performers, each plying the same gimmicks on the same streets to an ever-changing parade of passersby each day.” [1]
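The three archetypes above (Feed, Watcher, and Interactor bots) can be sketched in a few lines of Python. This is only an illustrative sketch: the client below is a stand-in that records tweets in memory rather than a real Twitter API wrapper, and all names are assumptions.

```python
# Illustrative sketch of the three Twitterbot archetypes described by
# Veale and Cook (2018). A real bot would use a Twitter API wrapper;
# this stand-in client simply records tweets instead of posting them.

class FakeClient:
    """Stand-in for a Twitter API client; records tweets in memory."""
    def __init__(self):
        self.sent = []

    def tweet(self, text, reply_to=None):
        self.sent.append((text, reply_to))

def feed_bot(client, lines):
    """Feed bot: systematically tweets prepared content on a schedule."""
    for line in lines:  # in a real bot, one line per scheduled run
        client.tweet(line)

def watcher_bot(client, public_tweets, keyword):
    """Watcher bot: monitors public activity, tweets when a condition is met."""
    for t in public_tweets:
        if keyword in t.lower():
            client.tweet(f"Spotted a tweet mentioning '{keyword}'")

def interactor_bot(client, mentions):
    """Interactor bot: replies to users who engage with the account."""
    for user, tweet_id in mentions:
        client.tweet(f"@{user} thanks for the mention!", reply_to=tweet_id)
```

The same three loops underlie most of the Trump bots examined below; what varies is the transformation applied to the text before it is tweeted.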

Experimentation can happen in the conceptualization of the bot, the implementation of the bot, or both. Many bots are built around language play, taking in structured data and then tweeting structured or random text outputs from that data. Another flavor of language bot is devoted to imitation, whether it be another Twitter account, a public figure, or a famous movie character. And then, some bots do not deal with language at all, instead processing images, video, or even Unicode to produce visual artifacts.

Twitterbots are strong examples of two contemporary media concepts: participatory culture and algorithmic culture. Participatory culture, though not unique to our contemporary era, has emerged as a dominant paradigm of media interaction in the social media and interactive Web era (Fuchs, 2014; Jenkins, 2012, 2006). Media consumers are now empowered by free or low-cost tools and user-friendly platforms to become producers of cultural objects, often reworking, remixing, or parodying texts initially created by the traditional media industries. In doing so, some of these “produsers” purposefully transgress the original intent or meaning of the targeted cultural objects in the name of cultural criticism, while others engage in the creation process for entertainment, status, or economic incentive (Bruns, 2008). In the context of online bots, creators have been found to participate in bot creation for many of these same reasons, as well as to develop personal technical skills or experiment with new coding languages or software platforms (Kollanyi, 2016; Livingstone, 2021, 2016).

The central nature of software and computer coding to current digital media has led to a plethora of recent inquiry in communication studies into algorithms and what Galloway (2006) and Striphas (2015) dub “algorithmic culture.” Striphas defines the term as “the ways in which computers, running complex mathematical formulae, engage in what’s often considered to be the traditional work of culture: the sorting, classifying, and hierarchizing of people, places, objects, and ideas” (interview with Striphas, in Granieri, 2014). Indeed, at a high level, advancements in artificial intelligence and machine learning have led to the emergence of computer-generated music, art, and creative writing (Mazzone and Elgammal, 2019; Sturm, et al., 2019), while at a lower level, simpler programs are able to develop news articles and personalized information feeds (Lewis, et al., 2019; Lokot and Diakopoulos, 2015). Of course, these computers and software programs are themselves created and programmed by humans, at least in the first instance, but algorithm studies consider how many cultural objects previously created directly by humans are now appearing with a software layer that can change the forms, functions, and meanings of the objects. Kitchin (2017) points out that “algorithms perform in context — in collaboration with data, technologies, people, etc. under varying conditions — and therefore their effects unfold in contingent and relational ways, producing localised and situated outcomes” [2]. Because of this, one important way to study algorithmic culture is “by observing [an algorithm’s] work in the world under different conditions” [3], which the present research takes on in the context of Twitterbots related to Donald Trump.

Notable for his constant and often controversial use of Twitter up until being permanently suspended from the platform on 8 January 2021 (Twitter, 2021a), Donald Trump and his @realDonaldTrump account provided the inspiration and/or data for a number of Twitterbot accounts. This research provides the first in-depth review of these “Trump bots” in order to understand how developers experimented with the affordances of Twitter (the account profile, the account feed, and the API) to produce algorithmic cultural artifacts. The guiding research question was: How do Trump bots demonstrate common Twitterbot forms and functions?




Trump bots on Twitter were identified through keyword searching of “Trump bot” on the platform’s database of public users. The profile page, bio, and other descriptive elements of each account returned from these searches were examined to determine if the account was a bot. Signals of a bot account include the word “bot” in the account name, the words “bot” or “automated” in the profile bio, or images including robots or computers. Additionally, each account’s feed was examined for indications of automated content; this content may appear on a fixed schedule (i.e., every hour or every day) or in immediate response to another account (most often the @realDonaldTrump account). Finally, accounts that seemed to be bots were run through Botometer, an online tool that examines bot-like signals, for additional indications that the account is a bot (Sayyadiharikandeh, et al., 2020). In addition to keyword searching, some Trump bots were found from the “You might like” suggestion algorithm that appears alongside many Twitter profiles, and then examined using the confirmation process outlined earlier.
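The screening in this study was performed by hand, but the first-pass signals described above could be mirrored by a rough heuristic like the following; the function and its hint list are purely illustrative.

```python
# Rough heuristic mirroring the manual screening signals described above:
# the word "bot" or "automated" in an account's name or bio. This is an
# illustration only; the study's actual screening was done by hand and
# confirmed with feed inspection and Botometer.

BOT_HINTS = ("bot", "automated")

def looks_like_bot(name, bio):
    """Flag an account whose name or bio contains a bot signal word."""
    text = f"{name} {bio}".lower()
    return any(hint in text for hint in BOT_HINTS)
```

Such keyword heuristics produce false positives and negatives, which is why the study layered on feed inspection and Botometer scores before counting an account as a bot.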

A total of 59 Trump bots were identified on 31 August 2020, and statistics for each account were calculated up through that date. Forty-four percent (n=26) of these accounts were active on that date, tweeting at least once in the preceding seven days. The total number of days active for each account was calculated by measuring the time between the account’s creation and its most recent tweet. The mean number of days active for the entire sample was 695.6, and the median number of days active was 585, indicating that many of these accounts tweeted content for well over one and a half years. The oldest and longest active account was @thetrumpwatcher, a bot that translated the text of @realDonaldTrump into an emoji; created in September 2012, the account was last active in July 2020 after tweeting more than 12,500 times over 2,867 days. The shortest-lived account was @roboDonaldTrump, a Markov chain generator mimicking @realDonaldTrump’s tweeting style, which tweeted only 10 times over two days in September 2015.

Two Trump bots were created prior to Donald Trump’s June 2015 announcement that he was running for the U.S. presidency (Diamond, 2015), 13 were created between that date and the U.S. presidential election in November 2016, and two more were created between the election and his inauguration in January 2017. The remaining 42 Trump bots were created during his presidency, with 2017, the first year of his presidency, seeing the largest share of new Trump bots (42 percent, n=25).

The mean number of account followers in the sample was 5,835.79, though the median was 106, indicating that only a small number of accounts attracted a large following. @RealPressSecBot, an account that reformatted Trump’s tweets to look like official White House memos, was the most followed account and the only account with more than 100,000 followers (see Table 1), while 14 accounts had over 1,000 followers. Nearly half of accounts, however, had fewer than 100 followers, with 19 accounts having fewer than 10.


Table 1: Top 10 most followed Trump bots in sample.


Based on previous literature (Oentaryo, et al., 2016; Stieglitz, et al., 2017; Tsvetkova, et al., 2017) and the researcher’s experience studying Twitterbots, each Trump bot was qualitatively coded into categories for analysis. An iterative process of open coding was utilized (Charmaz, 2006) based on the functions of the bots as evidenced in account bios, descriptions linked in account bios, and observations of the bots’ tweets. Seven initial non-mutually exclusive categories were constructed based on bot functionality: AI, countdown, information, language, parody, protest, and activism. In additional rounds of coding, categories were merged into three broad themes based on functional commonalities: language play (combining language and parody), information distribution (countdown and information), and direct criticism (protest and activism). For example, @potus_in_parens, a bot originally coded as language that filters text based on punctuation, and @DJarJarTrump, a bot originally coded as parody that alters text for comedic effect, each experiment with language in simple algorithmic ways and are comparable in function. AI was determined to be a tool for implementing a bot’s functionality, but not the expression of the function itself, so it was not collapsed into the above broad categories; instead, AI is discussed where appropriate in the ensuing analysis. Finally, some subcategories were constructed when information was available to distinguish certain groups within the broader categories, including some of the initial specific category ideas. For the final qualitative analysis, each bot was coded into one broad category and no more than one subcategory, but it is important to note that some bots do have multiple functions and that these categorizations are not meant to provide a strict typology. The present method allows for the exploration of Trump bots as a case study with a focus on uncovering major areas of bot experimentation and implementation on Twitter.




This analysis details Trump bots grouped into three categories based on the apparent activity performed by the bots. Other studies (Oentaryo, et al., 2016; Tsvetkova, et al., 2017) have found it useful to categorize bots based on their activity and behavior. By doing so, we can explore the forms that these accounts take and how they interact with other users on their platforms. In this study, Trump bots on Twitter are classified as engaging in language play, distributing information, or criticizing Trump and/or the @realDonaldTrump account.

Language play

A major area of bot experimentation on Twitter involves the processing and redistribution of language, which can be accomplished in many ways. Bots can parse, rearrange, combine, and even create language data, or they can more simply distribute language data from databases or the Twitter wild in novel and creative ways. As the @realDonaldTrump account was recognized for its large tweet volume and distinct message style, it inspired numerous Twitterbots that employed various approaches to language play, including machine learning or simple algorithmic processing, message reformatting or mashups, and parody.

Many Trump bots served as vehicles for language-based AI projects, using various approaches to construct tweets that mimicked @realDonaldTrump. Twenty bots in the sample were identified as using AI or language processing in some way, including bots that used neural networks (NN), recurrent neural networks (RNN), Generative Pre-trained Transformer 2 (GPT-2), and Markov chains. Each of these methods uses an established data set (in this case, primarily Donald Trump’s previous tweets or transcripts of his speeches) to train the bot to produce new tweets in the original style. The results ranged from eerily accurate to incomprehensible.
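Of these methods, the Markov chain is the simplest to illustrate: the generator records which words follow which in a training corpus, then random-walks those transitions to produce new text in the source’s style. The sketch below uses a stand-in corpus, not Trump’s actual tweet archive, and makes no claim about any particular bot’s implementation.

```python
import random

# Minimal word-level Markov chain generator of the kind several Trump
# bots reportedly used. The training text is a stand-in for illustration.

def build_chain(corpus):
    """Map each word to the list of words observed to follow it."""
    chain = {}
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, length=10, seed=0):
    """Random-walk the chain to produce new text in the corpus's style."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = chain.get(out[-1])
        if not options:  # dead end: the last word never preceded anything
            break
        out.append(rng.choice(options))
    return " ".join(out)
```

Because words that frequently follow a given word appear more often in its transition list, they are proportionally more likely to be chosen, which is what makes the output read like the source while still drifting into new (and often incoherent) combinations.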

One of the earliest and most followed AI Trump bots was @DeepDrumpf, active from March 2016 to May 2017, which amassed nearly 25,000 followers (Figure 1). Bradley Hayes, the MIT roboticist who created the bot, attributed much of its effectiveness to the distinct style of the training data, as “Trump’s mode of speech is so disjointed and often ham-fisted that it proves particularly amenable to algorithmic parody” (Knight, 2016). The bot’s tweets “don’t always make complete sense, but are usually at least partially coherent — much like [Trump] himself” (Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology [CSAIL], 2016). As the 2016 presidential election approached, Hayes used the popularity of the bot to raise funds for Girls Who Code, an organization whose mission is to close the gender gap in technology, considering the “embarrassing, unacceptable, and outright scary comments about women” that the bot, and more pointedly Trump, have made (Hayes, 2016).


Figure 1: @DeepDrumpf used an AI algorithm to create original tweets that mimicked the @realDonaldTrump account.


Other Trump bots employed much simpler algorithms to mimic, comment on, or complete an action based on @realDonaldTrump’s tweets. @thetrumpwatcher used sentiment analysis to convert Trump’s tweets into an emoji representing the mood of the message and gauge its language level, while also examining metadata to assess whether the tweet was likely from Donald Trump himself or from his staff (Shardcore, 2017). Also employing sentiment analysis, @botus, a bot developed by journalists from NPR’s Planet Money team, monitored @realDonaldTrump’s tweets for mentions of particular public corporations and bought or shorted the stock of those companies based on Trump’s comments about them (Goldmark, 2017) (Figure 2).
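At its simplest, this kind of sentiment-to-emoji pipeline is a lexicon lookup: count positive and negative words, then map the net score to a mood. The word lists and emoji mapping below are invented for illustration; the actual lexicons these bots used are not documented here, and production systems typically use far richer models.

```python
# Sketch of a lexicon-based sentiment-to-emoji pipeline in the spirit of
# @thetrumpwatcher. The word lists and emoji mapping are invented for
# illustration only.

POSITIVE = {"great", "tremendous", "win", "best", "beautiful"}
NEGATIVE = {"sad", "fake", "disaster", "worst", "failing"}

def sentiment_score(text):
    """Count positive words minus negative words in the text."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def to_emoji(score):
    """Collapse the net score into a single mood emoji."""
    if score > 0:
        return "😊"
    if score < 0:
        return "😠"
    return "😐"
```

A bot like @botus would sit a trading rule on top of the same score: a positive mention of a company triggers a buy, a negative one a short.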


Figure 2: @thetrumpwatcher used sentiment analysis to categorize the emotion expressed in @realDonaldTrump’s tweets, while @botus would trade publicly available corporate stocks based on the companies mentioned in @realDonaldTrump’s tweets.


One of the more notable AI-related Trump bots for understanding a historical context of software robots, and yet one with a very small following, was @eliza4trump (Figure 3). Created by start-up Snoozle Software, the bot took on the persona of the earliest chatbot, Joseph Weizenbaum’s ELIZA. Developed at MIT in 1966, ELIZA was a natural language processing program that acted as a Rogerian therapist for its users, redirecting their comments back to them in the form of open-ended questions in order for the “patient” to examine their thoughts (Weizenbaum, 1966). Although ELIZA was relatively simplistic in its algorithmic approach to interactions with users, it is widely identified as a seminal software artifact that demonstrated the social and discursive potentials of computer programs. Natale (2018) argued that ELIZA’s reception shaped various narratives about the development of computing to follow: “ELIZA became [...] a contested object whose different interpretations reflected and contribute to opposite visions of AI, which were destined to dominate debates about AI in the following decades and up to the present day” [4]. @eliza4trump used an ELIZA-type algorithm to create short prompts and probing questions in response to tweets from @realDonaldTrump, positioning Trump as the “patient” in the relationship. Humor arose from the bot’s supportive and empathetic replies to Trump’s often extreme or exaggerated messages, especially since the reader viewed the bot’s reply before seeing the original tweet it was replying to. This often created a mocking tone that cast doubt on the original message (though there are certainly possible negotiated and oppositional readings from a reception theory perspective) (Hall, 1973).
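ELIZA’s pattern-matching approach is simple enough to sketch: match the input against an ordered list of patterns, reflect first-person words into second person, and return an open-ended question. The rules and reflections below are invented for illustration and are not taken from Weizenbaum’s original script or from @eliza4trump’s code.

```python
import re

# Tiny ELIZA-flavored responder in the spirit of @eliza4trump. Rules are
# tried in order; the final catch-all guarantees a reply. All patterns
# and templates here are illustrative assumptions.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i (?:want|need) (.+)", re.I),
     "What would it mean to you to get {0}?"),
    (re.compile(r"(.+)", re.I), "Can you tell me more about that?"),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text):
    """Return the first matching rule's question, with pronouns reflected."""
    for pattern, template in RULES:
        match = pattern.match(text.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
```

Echoing the speaker’s own words back as a question is the entire Rogerian trick, which is why so shallow a program could read as supportive, and why it lands as deadpan comedy when the “patient” is @realDonaldTrump.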


Figure 3: @eliza4trump, an homage to one of the earliest chatbot programs, engaged @realDonaldTrump’s tweets in a Rogerian form of talk therapy.


A second form of language play involved processing and reformatting the language of @realDonaldTrump’s tweets themselves. Prominent examples of these were @RealPressSecBot (Figure 4) and @potus_in_parens. With over 118,000 followers, @RealPressSecBot was the most followed Trump bot, yet it was based on a simple premise: reformat Trump’s tweets to visually appear as official White House press releases. As Trump had unofficially adopted Twitter as a primary communication tool for his office and often used the account both for seemingly official White House announcements and the airing of personal grievances, this bot showed its audience what these messages would look like through the traditional channel of the Office of the Press Secretary. This change of context often highlighted the absurdity of the U.S. leader’s messages and critically commented on the appropriateness of this discourse from the highest-level elected official. On 7 September 2020, however, the bot changed its functionality and began using its platform to highlight the “administration’s incompetent COVID-19 response” by tweeting daily death statistics that featured the toll on individual U.S. counties.


Figure 4: @RealPressSecBot was the most-followed Trump bot, originally reformatting @realDonaldTrump’s tweets into a White House memo format, and then deflecting attention away from those tweets by highlighting the death toll of COVID-19 in the U.S.


@potus_in_parens took a different route to highlight parts of @realDonaldTrump’s tweets by isolating only language from the tweets that appeared in parentheses. Trump often used language in parentheses for asides, complementing, critiquing, or attempting to clarify the main subjects of the tweet, and this bot offered a way to read his feed simply through these asides (Figure 5).
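The parenthetical-isolation concept reduces to a single regular expression. How @potus_in_parens was actually implemented is not documented here, so the following is only a plausible sketch of the core operation.

```python
import re

# Plausible core of a bot like @potus_in_parens: extract only the text
# that appears inside parentheses. Non-nested parentheses are assumed.

PARENS = re.compile(r"\(([^)]*)\)")

def asides(tweet):
    """Return only the parenthetical asides from a tweet's text."""
    return PARENS.findall(tweet)
```

A feed-bot loop over @realDonaldTrump’s timeline would then tweet each extracted aside on its own, producing the stripped-down feed the account offered.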


Figure 5: @potus_in_parens would isolate and display any content that @realDonaldTrump placed in parentheses.


A third form of language play bot revolved around parody of the @realDonaldTrump account. Some of these Trump bots applied well-known personas, both fictional and real, to create a frame of humor or urgency around the original messages. Reader familiarity with the personas enhanced the parody, though it was not essential to understanding the critique created in these comparisons. @DJarJarTrump and @kramer_vs_trump applied the characters of Jar Jar Binks, the fictional character from the Star Wars universe, and Cosmo Kramer, the television neighbor from Seinfeld, as forms of literary mirrors to accentuate a ridiculous element of Trump’s tweets (Figure 6). The former of these bots directly translated tweets from @realDonaldTrump into the heavily accented and exaggerated English of the fictional alien character. The tweets from @DJarJarTrump seemed to betray any seriousness in the original messages it translated. However, furthering the parody here (and adding some gravity) is the backstory of Jar Jar, a bumbling entertainer who ultimately becomes a politician and enables the violent rise of a dictator. Similarly, @kramer_vs_trump questioned the earnestness of @realDonaldTrump’s tweets by responding to them with quotes from Kramer’s television character, a fast talker who bounces from idea to idea with little substance to any of them. The brief, two-message dialogues created between @realDonaldTrump and @kramer_vs_trump sometimes read like a joke’s set-up and punchline, while other times they offered relief from the extreme views offered in the original message.


Figure 6: @DJarJarTrump and @kramer_v_trump used familiar pop culture characters to parody and engage with @realDonaldTrump.


A Trump bot offering a parody of a more earnest kind was @ilduce2016. The bot, created by Gawker journalists in 2015 and featuring a profile image of Benito Mussolini’s head with Donald Trump’s hair, tweeted quotations from the Italian dictator that were appended with “-@realDonaldTrump”, indicating attribution to Trump himself. By including this username, the bot was also constantly tweeting “at” Trump, in what the creators claimed was an attempt to get Trump to retweet the bot: “We came up with the idea for [the] Mussolini bot under the assumption that Trump would retweet just about anything, no matter how dubious or vile the source, as long as it sounded like praise for himself. (It helps that a number of Mussolini’s quotes sound plausibly like lines from Trump’s myriad books)” (Pareene, 2016). In February 2016 @realDonaldTrump in fact retweeted the account, a post that remained on the account’s feed until Trump’s Twitter ban, despite numerous comments pointing out the actual source of the quotation (Figure 7). But regardless of the success of this trolling experiment, this parody was effective because the fascist machismo messages of Mussolini closely tracked with the persona and communication of Trump, who revered contemporary authoritarian leaders while questioning global cooperation between nations.


Figure 7: @ilduce2016 would attribute Mussolini quotations to Donald Trump, and on 28 February 2016 @realDonaldTrump retweeted the account, leading to a myriad of responses from the Twittersphere.


Information distribution

A second major area of bot experimentation comprised Trump bots that served as vehicles for distributing information about Trump’s Twitter activity, the composition of his Twitter network, or the appearance of Trump’s name in external databases and public records. Many of them fall in line with what Ford, et al. (2016) dub “transparency bots,” or “automated agents that use social media to draw attention to the behavior of particular actors” [5]. These bots often displayed a critical bent, filling a need to watch or track the ecosystem of information around Trump or the @realDonaldTrump account.

Some of these Trump bots, such as @trumps_feed, offered the ability to see what Trump himself might have seen in his own Twitter feed. @trumps_feed retweeted the Twitter content of accounts Trump followed, providing a steady stream of tweets from his family, advisors, and conservative media sources. Similarly, @TrumpRetweeps tweeted out the Twitter bio of each account that was retweeted by the @realDonaldTrump account. @TrumpsAlert expanded this information sphere to those around Trump, tracking when members of the Trump family followed, unfollowed, and liked accounts and content on Twitter (Figure 8).

Other Trump bots built to distribute information pulled in content from external sources. @trumptrackerbot, created by a financial data firm, posted mentions of Trump in public financial documents like U.S. Securities and Exchange Commission filings. The bot, created soon after Trump’s inauguration, highlighted that the market paid more attention to Trump’s policies than it had to Barack Obama’s, with Trump discussed more often in the “risk factors” sections of corporate filings (Sentieo, 2017). Similarly, @CATrumpLawsuits tracked lawsuits between the state of California and the Trump administration, while @indextrump tweeted when Trump was mentioned on the homepages of major news outlets. These latter two bots were created and maintained by members of non-profit journalism organizations.


Figure 8: @TrumpsAlert and @trumps_feed tracked and retweeted content based on the following habits of Trump and his family.


Another category of informational Trump bots consisted of accounts that tweeted the chronological progression of Trump’s time in office. In often simple and methodical posts, these bots counted up (@trumpgone, @trumpgone2) or down (@DaysLeft4Trump) to 20 January 2021 at noon (EST), when the Twentieth Amendment of the U.S. Constitution dictated Trump’s term as president would end. The @DaysLeft4Trump account had more than 24,000 followers, and its tweets regularly received likes in the hundreds. Notably, these tweets regularly received at least one reply from @Add_4_years, a counter-bot that tweeted “(+4 years)” in response to the original countdown tweet. @PercentTrump was the most functionally complex bot of this group, tweeting not only the percentage of Trump’s term that was completed, but also Trump’s approval rating at the time (from the polling analytics site FiveThirtyEight), election forecasting (from the Economist), and the length of time since the @realDonaldTrump account mentioned the term “fake news” (Figure 9).
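The arithmetic behind these countdown bots follows directly from the fixed start and end of the term (noon EST on 20 January is 17:00 UTC). A minimal sketch of the calculation such a bot performs on each scheduled run:

```python
from datetime import datetime, timezone

# Countdown arithmetic: percentage of the term completed and whole days
# remaining, computed from the fixed term boundaries. This illustrates
# the calculation, not any particular bot's code.

TERM_START = datetime(2017, 1, 20, 17, 0, tzinfo=timezone.utc)
TERM_END = datetime(2021, 1, 20, 17, 0, tzinfo=timezone.utc)

def term_status(now):
    """Return (percent of term complete, whole days left) at time `now`."""
    total = (TERM_END - TERM_START).total_seconds()
    elapsed = (now - TERM_START).total_seconds()
    percent = round(100 * elapsed / total, 1)
    days_left = (TERM_END - now).days
    return percent, days_left
```

A daily feed-bot loop would simply format these two numbers into the tweet text; the more elaborate @PercentTrump would additionally pull approval and forecast figures from external sources before posting.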


Figure 9: A number of bots tracked the remaining length of Trump’s term as the American president.



Direct criticism

The third major area of bot experimentation was a group of Trump bots that explicitly criticized Donald Trump and/or the @realDonaldTrump account. Certainly many of the previously discussed bots also expressed negative sentiment towards Trump; for instance, @ilduce2016 was an explicitly critical parody in its comparisons of Trump to Mussolini. The bots discussed here, however, seemed to exist purely to express a repetitive opinion about Trump.

Anti-Trump bots often used creative implementations to express their critical views of content from @realDonaldTrump. @burnedyourtweet was one of the most followed accounts in this category with over 26,000 followers (Figure 10). The bot replied to @realDonaldTrump tweets with the text “I burned your tweet” and a video of a homemade machine that printed the text of the @realDonaldTrump tweet on paper, set it on fire using a robotic arm and lighter, and placed it in an ashtray. Watching multiple videos in this feed, the viewer saw the machine placed in different locations and sometimes malfunctioning (for example, the lighter failing to produce a flame, or the flaming paper missing the ashtray), drawing attention to the machine’s Rube Goldberg nature. The pile of ash and burnt paper in the ashtray accumulated over many tweets, highlighting the volume of tweets from Trump’s account and signaling the disdain of the bot’s creator.


Figure 10: @burnedyourtweet featured a mechanical contraption that would literally burn tweets from the @realDonaldTrump account.


Other Twitterbots utilized simpler bot functionality in offering criticism of Trump’s Twitter messaging. @donaldhatebot quote tweeted each of Trump’s tweets with the response “Eat shit”, while @USDumpsterFire prefaced the text of Trump’s tweets with the introduction “A statement from the ‘President’”, the scare quotes calling the title into question, and superimposed the result over a GIF of a dumpster fire (Figure 11).
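Bots of this kind need little more than a text transform applied to each incoming tweet. As a rough illustration in the spirit of @USDumpsterFire’s framing (the function name and exact formatting below are hypothetical, not the bot’s actual code):

```python
import textwrap

def dumpster_statement(tweet_text, width=40):
    """Reframe a tweet as a mock official statement, line-wrapped
    for overlay on an image or GIF. Hypothetical sketch."""
    header = 'A statement from the "President"'
    body = textwrap.fill(tweet_text, width=width)
    return header + "\n\n" + body
```

The remaining work, listening for new tweets and posting the composited image, is plumbing around this one transform.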


Figure 11: @donaldhatebot and @USDumpsterFire used simple automated responses and reformatting, respectively, to criticize @realDonaldTrump’s tweets.




Discussion, conclusion, and limitations

Jones (2015), in arguing that a contemporary reading of social media must accept and understand the presence of bots, writes that both people and machines “contribute threads to the tapestry of social media and, ultimately, to the reality in which we live” [6]. The reality of the Trump presidency was defined by controversy, contention, and a steady stream of tweets that served to rally supporters and attack detractors. The fact that Twitter served as the president’s platform of choice made the emergence of Trump bots all the more pertinent to understanding the tapestry of information around his social media presence. Indeed, many of these bots were algorithmically “connected” to the @realDonaldTrump account, using its tweets as the seed of their content and opening themselves up to Twitter blocks and suspensions based on actions taken by Twitter on @realDonaldTrump. Of the 26 Trump bots active when data for this study was collected (31 August 2020), six continued to tweet until 8 January 2021, when @realDonaldTrump was permanently banned from Twitter (Twitter, 2021a), including @thetrumpwatcher, @eliza4trump, @djarjartrump, and @usdumpsterfire. These language play and criticism bots were rendered silent when their source material stopped, though the creators of the last two in this list took over their bots’ feeds for a parting message (Figure 12). Their conceptual power, however, was always tied to the source data, so this was a fitting ending for these accounts. As Lampi (2017) wrote about @censusAmericans and other data-based bots: “The process by which their updates are compiled certainly add to the meaning of the bot as it is hard to imagine a bot that spews out randomly generated census-like narratives to gain as much interest as the one that does the same based on actual data” [7]. The same goes for the Trump bots.


Figure 12: @DJarJarTrump and @USDumpsterFire were two Trump bot accounts that ended their activity on 8 January 2021 when @realDonaldTrump was banned from Twitter.


This analysis of Trump bots highlighted the performative nature of these algorithmic projects, which, in the absence of first-person accounts of their creation, we gauged by their outputs (tweets), their profiles, and their presence in the Twitterverse. Mackenzie (2005) argued that software and code can be discursively understood both through their technical performance and their performativity of power, particularly through the circulation and consumption of information. Certainly the @realDonaldTrump account used the implicit power of political office to increase the rhetorical power of its messages, but so did many of the Trump bots, which algorithmically experimented with reframing, refuting, or critiquing this power. @ilduce2016’s performance of the historical strongman dictator with a Trump signature highlighted the depravity of political ego in the social media age, while @Eliza4Trump’s performance as the Rogerian therapist brought this political ego down to the analyst’s couch for all to witness. @USDumpsterFire added motion to its performance through its eponymous GIF, while @burnedyourtweet performed a real-life (IRL) demonstration of its disgust for @realDonaldTrump. Even the “countdown” Trump bots offered an implicit message that Donald Trump’s cultural and political influence had an expiration date. Each of these accounts was a participatory, algorithmic experiment that pushed back on the rhetorical power of @realDonaldTrump.

Performativity online often takes the form of play, which was evidenced in this study by many of the language play and parody Trump bots. This type of digital communication play has been studied for decades across various platforms, from Internet Relay Chat (IRC) to multi-user dungeons (MUDs) to contemporary social media (Corneliussen and Rettberg, 2008; Danet, et al., 1997; Highfield, 2015). Writing of IRC in 1997, Danet, et al. pointed out that “Stylized online communication in real time is rapidly becoming a new form of leisure activity for the young, educated, and computer-literate.” Today, we see playful participatory culture from nearly all groups, not just the young and computer savvy, in the form of memes, short-form video, and non-fungible tokens (NFTs), to name a few. But those with the technical know-how to automate the digital production of content continue to explore new areas. Trump bot creators “playing” with the tools of machine learning to mimic Donald Trump’s language patterns were joining artists and developers in other areas of Twitter in automating cultural production (Carter, 2020). This process, often through programmed iteration (but sometimes through trial and error), can result in both extraordinary and pedestrian results, but by exploring the exhaustive breadth of possibilities, these creators allowed consumers of these cultural products — those who saw these bots in their feeds — to be the ultimate arbiters of their value.

While not all of the Trump bots in this sample were overtly political, many of them functioned as what Sample (2014) called “protest bots,” or computer programs “whose indictments are so specific you can’t mistake them for bullshit.” Inhabiting feeds alongside news sources, celebrities, and social contacts, these accounts acted as tactical media, creating “messy moments that destabilize narratives, perspectives, and events” (Sample, 2014). Certainly the bots that were explicitly critical of @realDonaldTrump served to disrupt the intended messaging of the former president, but more informational bots did so too. @trumptrackerbot, @CATrumpLawsuits, and @indextrump each highlighted Donald Trump’s entanglements in the financial, legal, and media systems of the U.S. Perhaps most creatively, @RealPressSecBot highlighted the unconventionality of Trump’s Twitter communications by reversing their logic, presenting the messaging in the formal press release format that we would usually expect White House communications to take. These Trump bots created “localised and situated” artifacts (Kitchin, 2017) that contributed to our understanding of and relationship with the usual actors in our Twitter feeds and complicated the forceful narratives that @realDonaldTrump attempted to establish.

These Trump bots also demonstrate the overlapping concerns of parody/satire and politics that other recent research has highlighted. Highfield (2016) explored popular parody accounts on Twitter, including accounts presenting as Lord Voldemort from the Harry Potter universe and Queen Elizabeth II of the United Kingdom. Highfield argued that these parodies were a form of fandom in line with theories of participatory culture (Jenkins, 2012). However, Highfield also acknowledged their political dimension when the accounts tweeted about the news, stating that these tweets “can appeal to a wider audience and attract more attention than their everyday tweets” [8]. Additionally, Fichman (2022) studied the global satirical video trolling of Donald Trump in 2017, finding patterns of repetition, hyperbole, and derailment in international videos mocking Trump’s “America First” philosophy. Many of the Trump bots in this study, as outlined earlier, used similar tactics to comment on or interpret the absurdities of @realDonaldTrump for their followers. But even the more experimental language bots with few followers and no explicit concern for politics waded into the political through their use of Trump’s corpus of tweets as their training data. Indeed, both overt satire and unacknowledged parody in the form of a Trump bot operated in a political context.

Twitterbot experimentation has been a tenuous enterprise recently. In reaction to narratives that social media bots swayed public opinion and influenced voting behavior around the 2016 U.S. presidential election and the 2016 Brexit vote, Twitter made changes to its automation rules and third-party developer application process, forcing many bot creators to abandon their creations rather than spend the time and effort to retrofit their bots’ code (Dozier, 2018). Then, in 2020 Twitter signaled a reengagement with third-party developers by launching a new version of its API that allows “access to features long absent from their clients,” including access to a real-time tweet stream (Vincent, 2020). Later that year the company announced it would introduce labels on account profiles to distinguish bot or automated accounts from other types of accounts (Twitter, 2020); testing of the labels began in 2021 (Perez, 2021). Also in 2021 Twitter published a Web page on its developer domain entitled “Build for fun,” suggesting that users “Make a bot,” “Create art,” and “Keep Twitter weird” while linking to three art-based bots (Twitter, 2021b). Overall, the platform’s messaging, including CEO Jack Dorsey’s U.S. congressional testimony that “there are plenty of bots on our service that provide a valuable function” (CNET, 2020), indicates that the company is navigating between competing narratives about bot activity. Carter (2020), in an analysis of a bot that tweets real photos of outer planets along with algorithmically chosen text excerpts in a poetic format, argues that the bot form is an important tool to maintain such a balance between narratives:

“Amidst the intensity of concern around the use of clandestine bots in shaping online discourse for politically subversive ends, the openly curious activities of generative bots, inserting themselves into tightly contested timelines, does gesture, however faintly, towards a form of creative disruption that reveals the network, and its algorithmic architectures, as offering potentials for activity and expression beyond facilitating various threatening agendas.” [9]

This research explored the creative disruption of Trump bots on Twitter. From parody accounts driven by machine learning to harshly critical rebukes of @realDonaldTrump’s words, these automated accounts revealed many possibilities for algorithmic experimentation on the platform. Yet it seemed that the @realDonaldTrump account provided a unique inspiration for the Twitterbot maker community, as bots based on the succeeding U.S. president Joe Biden have been much rarer (perhaps the most followed is @BidenInsultBot, created by a team at the satirical news program The Daily Show). If Twitter remains open to bot projects, though, we are likely to see continued creative technical development as both political culture and the broader Twitter culture evolve.

Some limitations of this study are important to acknowledge. The keyword method used to collect the sample of bots for this analysis most certainly did not identify all Trump-related bot accounts on Twitter, and the daily ebb and flow of new accounts on the platform, as well as the temporary suspension of accounts that violate Twitter’s terms of service, may have inadvertently excluded bots from this study. Therefore, the sample is likely not exhaustive. In addition, recent research has questioned the reliability of Botometer (Rauchfleisch and Kaiser, 2020), and while Botometer scores were not solely used to identify accounts for this research, the tool’s limitations should be recognized.

Additionally, this research considered all content from the @realDonaldTrump account, and therefore any bot-created content based on that account, as representative of Donald Trump’s use of Twitter. However, as many politicians and public figures employ staffers and media consultants to assist with their social media presence, we can assume that not all of the tweets from @realDonaldTrump were created and sent by Trump himself. In fact, one of the bots in this study, @TrumpOrNotBot, used “machine learning and natural language processing to estimate the likelihood Trump wrote a tweet himself” (McGill, 2017). Regardless of each tweet’s origin, though, the compendium of posts from @realDonaldTrump represented the overall persona of Trump as a public figure that inspired the creation of many Trump bots.


About the author

Randall M. Livingstone is an associate professor in the School of Social Sciences, Communication, and Humanities at Endicott College. His research interests include social media automation, wikis, collective intelligence, and the political economy of communication.
E-mail: rlivings [at] Endicott [dot] edu



1. Veale and Cook, 2018, p. 19.

2. Kitchin, 2017, p. 25.

3. Kitchin, 2017, p. 26.

4. Natale, 2018, p. 723.

5. Ford, et al., 2016, p. 4,892.

6. Jones, 2015, p. 2.

7. Lampi, 2017, p. 54.

8. Highfield, 2016, pp. 2,042–2,043.

9. Carter, 2020, p. 1,002.



Jake Bathman, 2020. “Blocked by @realDonaldTrump after only 140 days,” @jakebathman.

Botwiki, 2020. “@Trumpgone,” at, accessed 31 August 2020.

Axel Bruns, 2008. Blogs, Wikipedia, Second Life, and beyond: From production to produsage. New York: Peter Lang.

Richard A. Carter, 2020. “Tweeting the cosmos: On the bot poetry of The Ephemerides,” Convergence, volume 26, number 4, pp. 990–1,006.
doi:, accessed 26 October 2022.

Kathy Charmaz, 2006. Constructing grounded theory. London: Sage.

CNET, 2020. “Dorsey fires back; robots and AI SHOULD be allowed to Tweet things” (17 November), at, accessed 31 August 2020.

Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology (CSAIL), 2016. “Postdoc develops Twitterbot that uses AI to sound like Donald Trump” (3 March), at, accessed 31 August 2020.

Hilde G. Corneliussen and Jill Walker Rettberg (editors), 2008. Digital culture, play, and identity: A World of Warcraft reader. Cambridge, Mass.: MIT Press.

Cammy Crolic, Felipe Thomaz, Rhonda Hadi, and Andrew T. Stephen, 2022. “Blame the bot: Anthropomorphism and anger in customer-chatbot interactions,” Journal of Marketing, volume 86, number 1, pp. 132–148.
doi:, accessed 26 October 2022.

Brenda Danet, Lucia Ruedenberg-Wright, and Yehudit Rosenbaum-Tamari, 1997. “‘Hmmm ... where’s that smoke coming from?’ Writing, play and performance on Internet Relay Chat,” Journal of Computer-Mediated Communication, volume 2, number 4.
doi:, accessed 10 December 2021.

Jeremy Diamond, 2015. “Donald Trump jumps in: The Donald’s latest White House run is officially on,” CNN (17 June), at, accessed 30 August 2020.

Rob Dozier, 2018. “Twitter’s new developer rules might end one of its most enjoyable parts” (8 August), at, accessed 30 August 2020.

Pnina Fichman, 2022. “The role of culture and collective intelligence in online global trolling: The case of trolling Trump’s inauguration speech,” Information, Communication & Society, volume 25, number 7, pp. 1,029–1,044.
doi:, accessed 26 October 2022.

Heather Ford, Elizabeth Dubois, and Cornelius Puschmann, 2016. “Keeping Ottawa honest — One tweet at a time? Politicians, journalists, Wikipedians, and their Twitter bots,” International Journal of Communication, volume 10, pp. 4,891–4,914, and at, accessed 23 September 2020.

Christian Fuchs, 2014. Social media: A critical introduction. Los Angeles, Calif.: Sage.
doi:, accessed 26 October 2022.

Alexander R. Galloway, 2006. Gaming: Essays on algorithmic culture. Minneapolis: University of Minnesota Press.

R. Stuart Geiger, 2017. “Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture,” Big Data & Society, volume 4, number 2 (26 September).
doi:, accessed 23 September 2020.

Alex Goldmark, 2017. “Episode 763: BOTUS,” NPR Planet Money (7 April), at, accessed 23 September 2020.

Google, 2009. “Googlebot,” at, accessed 23 September 2020.

Robert Gorwa and Douglas Guilbeault, 2018. “Unpacking the social media bot: A typology to guide research and policy,” Policy & Internet, volume 12, number 2, pp. 225–248.
doi:, accessed 23 September 2020.

Giuseppe Granieri, 2014. “Algorithmic culture. ‘Culture now has two audiences: people and machines.’ A conversation with Ted Striphas” (30 April), at, accessed 30 August 2020.

Stuart Hall, 1973. “Encoding and decoding in the television discourse,” Discussion paper, University of Birmingham, at, accessed 30 August 2020.

Brad Hayes, 2016. “Make STEM great again,” at, accessed 30 August 2020.

Tim Highfield, 2016. “News via Voldemort: Parody accounts in topical discussions on Twitter,” New Media & Society, volume 18, number 9, pp. 2,028–2,045.
doi:, accessed 10 December 2021.

Tim Highfield, 2015. “Memeology Festival 04. On hashtaggery and portmanteaugraphy: Memetic wordplay as social media practice” (5 November), at, accessed 16 December 2021.

Henry Jenkins, 2012. Textual poachers: Television fans and participatory culture. Second edition. New York: Routledge.
doi:, accessed 26 October 2022.

Henry Jenkins, 2006. Convergence culture: Where old and new media collide. New York: New York University Press.

Steve Jones, 2015. “How I learned to stop worrying and love the bots,” Social Media + Society, volume 1, number 1 (11 May).
doi:, accessed 26 October 2022.

Darius Kazemi, 2016. “Bot summit 2016,” at, accessed 28 September 2020.

Rob Kitchin, 2017. “Thinking critically about and researching algorithms,” Information, Communication & Society, volume 20, number 1, pp. 14–29.
doi:, accessed 23 September 2020.

Will Knight, 2016. “Why I’m backing Deep Drumpf, and you should too,” MIT Technology Review (17 October), at, accessed 31 August 2020.

Bence Kollanyi, 2016. “Where do bots come from? An analysis of bot codes shared on GitHub,” International Journal of Communication, volume 10, at, accessed 23 September 2020.

Ville Matias Lampi, 2017. “Looking behind the text-to-be-seen: Analysing Twitter bots as electronic literature,” Master’s of Arts thesis, Visual Culture and Contemporary Art, Department of Art, Aalto University School of Arts, Design and Architecture, at, accessed 26 October 2022.

Seth C. Lewis, Andrea L. Guzman, and Thomas R. Schmidt, 2019. “Automation, journalism, and human-machine communication: Rethinking roles and relationships of humans and machines in news,” Digital Journalism, volume 7, number 4, pp. 409–427.
doi:, accessed 23 September 2020.

Randall M. Livingstone, 2021. “Make a difference in a different way: Twitter bot creators and Wikipedia transparency,” Journal of Computer Supported Cooperative Work, volume 30, pp. 733–756.
doi:, accessed 23 September 2020.

Randall M. Livingstone, 2016. “Population automation: Rambot’s work and legacy on Wikipedia,” First Monday, volume 21, number 1, at, accessed 23 September 2020.
doi:, accessed 23 September 2020.

Tetyana Lokot and Nicholas Diakopoulos, 2015. “News bots: Automating news and information dissemination on Twitter,” Digital Journalism, volume 4, number 6, pp. 682–699.
doi:, accessed 23 September 2020.

Luca Luceri, Felipe Cardoso, and Silvia Giordano, 2021. “Down the bot hole: Actionable insights from a one-year analysis of bot activity on Twitter,” First Monday, volume 26, number 3, at, accessed 14 May 2021.
doi:, accessed 14 May 2021.

Adrian Mackenzie, 2005. “The performativity of code: Software and cultures of circulation,” Theory, Culture, & Society, volume 22, number 1, pp. 71–92.
doi:, accessed 30 September 2020.

Paris Martineau, 2018. “What is a bot?” Wired (16 November), at, accessed 30 August 2020.

Marian Mazzone and Ahmed Elgammal, 2019. “Art, creativity, and the potential of artificial intelligence,” Arts, volume 8, number 1, 26.
doi:, accessed 23 September 2020.

Andrew McGill, 2017. “A bot that can tell when it’s really Donald Trump who’s tweeting” (28 March), at, accessed 23 September 2020.

Simone Natale, 2018. “If software is narrative: Joseph Weizenbaum, artificial intelligence and the biographies of ELIZA,” New Media & Society, volume 21, number 3, pp. 712–728.
doi:, accessed 23 September 2020.

Richard J. Oentaryo, Arinto Murdopo, Philips K. Prasetyo, and Ee-Peng Lim, 2016. “On profiling bots on social media,” In: Emma Spiro and Yong-Yeol Ahn (editors). Social informatics. Lecture Notes in Computer Science, volume 10046. Cham, Switzerland: Springer, pp. 92–109.
doi:, accessed 26 October 2022.

Alex Pareene, 2016. “How we fooled Donald Trump into retweeting Benito Mussolini” (28 February), at, accessed 31 August 2020.

Allison Parrish, 2016. “Bots: A definition and some historical trends” (24 February), at, accessed 31 August 2020.

Sarah Perez, 2021. “Twitter introduces a new label that allows the ‘good bots’ to identify themselves” (9 September), at, accessed 26 October 2022.

Adrian Rauchfleisch and Jonas Kaiser, 2020. “The false positive problem of automatic bot detection in social science research,” PLoS One, volume 15, number 10, e0241045 (22 October).
doi:, accessed 14 May 2021.

Mark Sample, 2014. “A protest bot is a bot so specific you can’t mistake it for bullshit: A call for bots of conviction” (30 May), at, accessed 23 September 2020.

Mohsen Sayyadiharikandeh, Onur Varol, Kai-Cheng Yang, Alessandro Flammini, and Filippo Menczer, 2020. “Detection of novel social bots by ensembles of specialized classifiers,” CIKM ’20: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 2,725–2,732.
doi:, accessed 26 October 2022.

Sentieo, 2017. “Introducing the Sentieo Trump Tracker: Follow the president’s impact on your investments” (23 March), at, accessed 31 August 2020.

Shardcore, 2017. “@thetrumpwatcher,” at, accessed 31 August 2020.

Massimo Stella, Emilio Ferrara, and Manlio De Domenico, 2018. “Bots increase exposure to negative and inflammatory content in online social systems,” Proceedings of the National Academy of Sciences, volume 115, number 49, pp. 12,435–12,440.
doi:, accessed 26 October 2022.

Stefan Stieglitz, Florian Brachten, Björn Ross, and Anna-Katharina Jung, 2017. “Do social bots dream of electric sheep? A categorisation of social media bot accounts,” ACIS 2017 Proceedings, at, accessed 26 October 2022.

Ted Striphas, 2015. “Algorithmic culture,” European Journal of Cultural Studies, volume 18, numbers 4–5, pp. 395–412.
doi:, accessed 23 September 2020.

Bob L. Sturm, Oded Ben-Tal, Úna Monaghan, Nick Collins, Dorien Herremans, Elaine Chew, Gaëtan Hadjeres, Emmanuel Deruty, and François Pachet, 2019. “Machine learning research that matters for music creation: A case study,” Journal of New Music Research, volume 48, number 1, pp. 36–55.
doi:, accessed 23 September 2020.

Milena Tsvetkova, Ruth Garcia-Gavilanes, Luciano Floridi, and Taha Yasseri, 2017. “Even good bots fight: The case of Wikipedia,” PLoS One, volume 12, number 2, e0171774 (23 February).
doi:, accessed 23 September 2020.

Twitter, 2021a. “Permanent suspension of @realDonaldTrump” (8 January), at, accessed 24 September 2021.

Twitter, 2021b. “Build for fun,” at, accessed 24 September 2021.

Twitter, 2020. “Our plan to relaunch verification and what’s next,” at, accessed 23 September 2021.

Tony Veale and Mike Cook, 2018. Twitterbots: Making machines that make meaning. Cambridge, Mass.: MIT Press.
doi:, accessed 26 October 2022.

Tony Veale, Alessandro Valitutti, and Guofu Li, 2015. “Twitter: The best of bot worlds for automated wit,” In: Norbert Streitz and Panos Markopoulos (editors). Distributed, ambient, and pervasive interactions. Lecture Notes in Computer Science, volume 9189. Cham, Switzerland: Springer, pp. 689–699.
doi:, accessed 26 October 2022.

James Vincent, 2020. “Twitter launches new API as it tries to make amends with third-party developers” (12 August), at, accessed 23 September 2020.

Joseph Weizenbaum, 1966. “ELIZA — a computer program for the study of natural language communication between man and machine,” Communications of the ACM, volume 9, number 1, pp. 36–45.
doi:, accessed 23 September 2020.

Samuel C. Woolley, 2016. “Automating power: Social bot interference in global politics,” First Monday, volume 21, number 4, at, accessed 23 September 2020.
doi:, accessed 23 September 2020.

Lei (Nico) Zheng, Christopher M. Albano, Neev M. Vora, Feng Mai, and Jeffrey V. Nickerson, 2019. “The roles bots play in Wikipedia,” Proceedings of the ACM on Human-Computer Interaction, volume 3, article number 215, pp. 1–20.
doi:, accessed 26 October 2022.


Editorial history

Received 29 November 2021; revised 14 March 2022; accepted 26 October 2022.

Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Trump bots and algorithmic experimentation on Twitter
by Randall M. Livingstone.
First Monday, Volume 27, Number 11 - 7 November 2022