"Stop Kremlin trolls:" Ideological trolling as calling out, rebuttal, and reactions on online news portal commenting
First Monday

Stop Kremlin trolls: Ideological trolling as calling out, rebuttal, and reactions on online news portal commenting by Asta Zelenkauskaite and Brandon Niezgoda



Abstract
Mainstream media sources have recently heightened public awareness of a phenomenon known as Russian troll farms. This research thematically analyzes “Kremlin troll” use and its variations found in user comments on a leading Lithuanian news portal. The main findings of this study indicate that “Kremlin troll” was used in two oppositional themes. The first reveals accusations of paid commentators being “Kremlin trolls.” The second, in contrast, counter-argues the “Kremlin troll” accusations through rebuttal. Sarcasm and humor, e.g., through the emergence of self-identification as a “Kremlin troll,” furthermore downplay the “Kremlin troll” accusations and reinforce uncertainty about who the real troll is.

Even if the offensive and defensive tactics might seem rather similar to the overall Internet troll tactics found in previous online research, the unique aspect of “Kremlin troll” use was the emergence of ideological trolling, charged with accusations that some commentators were paid by a foreign government, thus framing “Kremlin trolling” as a form of astroturfing. We conclude that “Kremlin troll” in this study exemplifies politically charged ideological trolling, rather than the mere subcultural phenomenon that is prevalent in English-language contexts.

Contents

Introduction
Literature review
Background
Method
Results
Discussion
Conclusion

 


 

Introduction

Online environments have been reconsidered as a new form of Habermasian (Habermas, 1962/1989) deliberative public sphere. Yet, this study presents online news portals and commenting not as deliberative spaces but as a place for ideological trolling, a newly emerged phenomenon known as “Kremlin trolls.” Presented here is a case of online news commenting through the lens of astroturfing. Astroturfing is not a new phenomenon, but it has not typically been used in modes of trolling such as disruption, distrust, and irreverence. Astroturfing is an increasingly prevalent practice used to shift commercial opinion, where third parties engage in reputation management of a given brand (Woodcock and Baum, 2015). In the present case, astroturfing serves to promote political ideology, as has been reported previously in the case of the Indian elections (Ippolito, 2013).

The concept of ‘trolling’ has been part of online vernacular for a significant amount of time, especially pertinent to Internet forums and other online spaces (see, e.g., Herring, et al., 2002). Trolling behavior in online contexts has been thoroughly studied and characterized as “repetitive, intentional, and harmful actions that are undertaken in isolation and under hidden virtual identities, involving violations of Wikipedia policies, and consisting of destructive participation in the community.” [1] While there is much literature on troll analysis, ideological and political trolls remain largely under-analyzed (Sanfilippo, et al., 2017). As Fichman and Sanfilippo (2016) argue, while trolling has been covered by mass media, political trolls are the most visible on the Web, i.e., through user comments.

Russian troll farms

In the era of 24-hour news cycles, a wave of mainstream media reports recently heightened public awareness of a phenomenon known as ‘Russian troll’ farms. Chen (2015) provided New York Times readers with a sprawling exposé on ‘the Agency’ by tracing a hoax chemical plant disaster scare to 55 Savushkina Street, the last known address of the workplace of Russian Internet mercenaries who, for money and/or political motives, distribute pro-Putin propaganda. In the midst of the whistleblower scandal of Lyudmila Savchuk, who leaked information about ‘the Agency’, the Guardian released a story including interviews with two of its former employees (Walker, 2015). From LiveJournal members who intersperse subtle anti-West rhetoric in mundane topics, to higher-paid employees who post comments on English-speaking forums, the phenomenon is seemingly banal, yet insidiously Machiavellian. Ivan Kozachenko (2015) senses a malaise in pro-Russian online tactics, with participants assigned simply to reposting from news agencies rather than employing overt propaganda techniques.

This research thematically analyzes ideological trolling in online news portal comments by examining themes associated with “Russian troll” use and its variations found in user comments (such as Kremlin troll, red troll, ruble troll, kopek troll, sold-out troll, or similar). The data were collected from the largest Baltic news portal, delfi.lt, from mid-February to mid-March 2015, during a time when Russia was involved in ‘hybrid warfare’ with Ukraine, described as “an operational approach to warfighting that uses an explicit mix of military and non-military tactics” [2]; the tension of the war has been felt particularly strongly in the Baltic states (Kroenig, 2015). While the Internet trolling phenomenon is well known in English-speaking contexts, the term trolis ‘a troll’, referring to ‘Internet trolling’, is a neologism in the Lithuanian language that has appeared in Lithuanian journalistic practice only since 2010. While we acknowledge that the use of trolis is multifaceted, we focus on the meanings of ideological trolling pertinent to “Kremlin troll.” We identify uses of the word trolis by regular delfi.lt commentators and the journalistic context in which it appears (in which news stories and which topics, as defined by the news portal). We employed qualitative thematic analysis only on the ideological uses of trolis. To do so, we qualitatively analyzed only the user comment corpus that contained the word trolis with connotations of ‘Kremlin/Russian/red/Putin/Moscow troll.’ This corpus was collected after the “Kremlin troll” frame had been introduced by journalists on delfi.lt, but before the public revelation of ‘the Agency’ (see Method section).

Results of the study show that the following themes emerged in association with “Kremlin troll”: a) calling out ‘Kremlin trolls’; b) unveiling ‘Kremlin trolls’; c) sarcasm and self-identification as a ‘Kremlin troll’; d) Kremlin troll rebuttal; e) reactions to the “Kremlin troll” phenomenon. The main findings of this study indicate that Kremlin troll ideology is driven by two opposing forces. The first is driven by heavy accusations of paid commentators being “Russian trolls.” The second is the rebuttal by accused “Kremlin trolls,” which concurrently downplays the accusations through sarcasm and humor, e.g., through the emergence of self-identification as a “Kremlin troll.” Rebuttal and sarcasm potentially seed uncertainty about who the troll is, where both trolls and the trolled are accused of being “Kremlin trolls.”

 

++++++++++

Literature review

Troll definitions and contexts

Scholars emphasize that trolling is oftentimes ideological (e.g., Fichman and Sanfilippo, 2016), yet there is little empirical evidence of this, with few exceptions. Pearce (2015) examined troll use in political contexts, referring to state-sponsored deviant behaviors online directed against political opposition in authoritarian regimes (i.e., the case of Azerbaijan); however, this behavior was treated as “harassment” rather than trolling.

Despite the lack of ideological troll analysis, trolling behavior has been traced extensively in the past decade, indicating its prevalence in the English-language context (e.g., Bishop, 2014; Fichman and Sanfilippo, 2016; Harris, 2012; Ratkiewicz, et al., 2010; Stringhini, et al., 2015; Thomas, et al., 2013; Wang, et al., 2014). Theorists, similarly, have looked to investigate the effects of trolling, finding that the process can rework models of intention (Morrissey, 2010) or motivations (Fichman and Sanfilippo, 2016; Shachaf and Hara, 2010) while deconstructing laws of morality (Shin, 2008).

To contribute to a better understanding of trolls in a global context, this study examines ideological trolls in a Lithuanian online news portal, as the majority of work on trolling has been done in English-speaking contexts, notwithstanding some exceptions (e.g., Shachaf and Hara’s (2010) analysis of Israeli Wikipedia trolls). In English-speaking contexts, key theoretical work has been done simply to define what trolling is. Phillips (2015) claims that trolling is a specific subcultural phenomenon. Hardaker (2010) works through a plethora of user definitions and a mix of academic definitions (Baker, 2001; Cox, 2006; Brandel, 2007; Dahlberg, 2001; Donath, 1999; Naraine, 2007), concluding that a troll is “a CMC [computer-mediated communication] user who constructs the identity of sincerely wishing to be part of the group in question, including professing or conveying pseudo-sincere intentions, but whose real intention(s) is/are to cause disruption and/or to trigger or exacerbate conflict for the purposes of their own amusement” [3].

While troll analysis in recent years has been discussed in a multifaceted manner, Fichman and Sanfilippo (2016) sum up various definitions by providing a typology, with subtypes, of what they call online deviant behaviors and their motivations. Deviant behaviors include trolling, hacking, and deceiving. Trolling, furthermore, is divided into four subtypes: political, grief, RIP, and LOL. All of them involve provocation and deception geared towards disruption of the community. Trolling is aggressive, impolite, antagonistic, and manipulative.

Trolling language interpretation

Beyond theoretical work attempting to define the term and the process, trolling has been analyzed and constructed as an appropriated discursive tactic relative to particular online modes. The work of Herring, et al. (2002) supports the position of trolling as a defensive and offensive discursive act that is accentuated by platform affordances. In an early-2000s case study of a feminist forum, the authors identify tactics used against a disruptive male participant (troll). Three discursive practices constituting criteria of trolling are identified: outward manifestations of sincerity, flame bait, and ideological manipulation; an eclectic and sometimes contradictory mix when situated within the historical timeline of trolling. Bergstrom (2011) argues that calling out trolls is a technique to silence the transgressor’s behavior.

 

++++++++++

Background

Ideological trolling in this section is contextualized within the journalistic practices of the analyzed news portal, delfi.lt. As of September 2016, the delfi.lt news portal archive lists 66 stories written by journalists that contain the word ‘troll.’ The first is dated 2002, and the last was posted in August 2016. Ten of them were links and were therefore excluded. Of the 56 remaining stories, 33 were unrelated to Internet trolling whatsoever (they referred to pets’ names, a band, or other subjects); 10 stories referred to Internet trolling and were apolitical in nature; the remaining 13 stories that referenced Internet trolling were political in nature (three referenced Internet trolls in Lithuanian political contexts; nine referred to Kremlin/Russian trolls; and one story, in 2016, used troll in reference to Donald Trump).

The moniker ‘Kremlin troll’ was first used by delfi.lt journalists on 4 November 2014, followed by a story on 11 December 2014 and the 29 March 2015 story ‘Kremlin “trolls” habitat: uncovered who they are’ [Kremliaus „trolių“ irštva: atskleidė, kas jie tokie]. This internationally covered story alone provoked 1,456 comments by delfi.lt readers (even though these were not part of the corpus analyzed in this study).

Before “Kremlin troll” was introduced (13 news stories from 2002 to 2012), “troll” was unrelated to ‘Internet trolling’ at all. These first instances of the word ‘troll’ (in the headline or the body of the story) included names of pets, bands, and projects, with many references to the Scandinavian landscape or the Scandinavian mythical creature. For example, one of the headlines states ‘Visiting trolls and deer’ [Svečiuose pas trolius ir briedžius], referring to the natural beauty of the Scandinavian landscape. Another unrelated story mentions a troll (in the body of the story) as the name of a deceased writer’s cat in ‘J. Ivanauskaitė visits her sister in her dreams’ [J. Ivanauskaitė seserį aplanko sapnuose].

The first delfi.lt mention of ‘troll’ as it pertains to ‘Internet trolling’ was found on 6 April 2012, referenced in a “vox populi,” otherwise known as a citizen opinion piece (Mindaugas, 2012). In the article ‘Lithuania — the land of trolls and the hardworking’ [Lietuva — tėvynė trolių ir varguolių], readers were exposed to the concept of ‘trolls’ portrayed as ill-intentioned users who disrupt discussions and spread chaos, discord, and dissonance (Mindaugas, 2012). The author of this news story distinguishes between various types of trolls, from the regular troll who works to skew discussion, to the authoritative troll who attempts to influence mass opinion, divert attention, represent a concrete interest group, or fulfill any other pre-planned goal.

The term ‘troll’ has also been used in delfi.lt in reference to Lithuanian political and current affairs stories in a witty and humorous, even if sometimes disruptive, manner. For instance, a 31 August 2013 article covers the sarcastic laughs of trolls directed at Lithuanian politicians. The term has been further disseminated, and a 31 December 2013 article proclaimed 2013 “The year of the ‘trolls’” (Černiauskas, 2013). Such ‘witty’ behavior by Internet commentators includes techniques of ‘laughing at’, memes, and montages.

Delfi.lt

To analyze ideological trolls on delfi.lt, we capitalized on its sociotechnical system affordances (Sawyer and Tapia, 2007) by focusing on comments and responses (we did not use upvotes and downvotes for the analysis). The comments section is asynchronous and threaded in nature; users can post comments, and others can reply to a given user’s comment or write their own. Asynchronous comments, as opposed to live chat, offer extended time for reflection on messages sent and received (Ziegahn, 2001). Users can write comments on stories written by media professionals and respond to other user comments. Each news story is embedded in one of 13 news categories automatically assigned by the news portal. Users may comment through their Facebook profile or register with delfi.lt as commentators, thus revealing their identity, but almost all comment anonymously. Comments for each news story are organized temporally, from the oldest to the most recent, with the possibility of sorting from best to worst as well, ranked by the number of upvotes and downvotes each comment receives.

Research questions

Based on Herring, et al. (2002), this study asks: What are the themes that emerge in the contexts where users use “Russian troll,” and what variations do they use? To contextualize the use of troll, we ask:

RQ1: What is the frequency and the news story categories in which overall “trolling” occurs in delfi.lt?

RQ2: What themes have emerged in the use of “Kremlin troll” (and its variations) in user comments?

 

++++++++++

Method

Sample

Data were collected using an automated continuous data scraping method. Data included user ID, user name (or a heading), the comment, the time when the message was posted, and the content category in which it appeared (content categories are defined by delfi.lt). Data were collected for a one-month period, from mid-February to mid-March 2015, yielding a total of 400,633 comments. The average number of comments per story was 65.08. Comments were distributed across content categories as summarized in Table 1.
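The scraper itself is not described in further detail. As a minimal sketch of the per-comment record implied by this description (the class and field names are our illustrative assumptions, not delfi.lt’s actual schema), each scraped message could be represented as follows:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ScrapedComment:
        """One scraped record: the fields named in the text above.
        All names here are hypothetical; delfi.lt's schema is not public."""
        user_id: str          # user ID (the study used the IP as a proxy for a user)
        user_name: str        # user name, if given
        heading: str          # headline (name) of the comment, possibly empty
        body: str             # text of the comment
        posted_at: datetime   # time the message was posted
        category: str         # content category, as defined by delfi.lt
        is_reply: bool        # response to another comment vs. stand-alone comment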

 

Table 1: Comment distribution across content categories.

Content category    Comments (frequency)    Comments (%)    Comments that used trolis (frequency)    Comments that used trolis (%)
Delfi               262,120                 65.4            1,913                                     83.7
Business            41,264                  10.3            191                                       8.4
Entertainment       35,518                  8.9             70                                        3.1
Citizen             19,430                  4.8             44                                        1.9
Life                9,889                   2.5             8                                         0.4
Sports              9,209                   2.3             12                                        0.5
Pure                6,343                   1.6             6                                         0.3
Cars                5,863                   1.5             15                                        0.7
Science             4,734                   1.2             10                                        0.4
Video               3,487                   0.9             15                                        0.7
5Strawberries       1,638                   0.4             15                                        0.7
LMZ                 578                     0.1             0                                         0.0
Projects            560                     0.1             0                                         0.0
Total               400,633                 100.0           2,284                                     100.0

 

The great majority of comments were posted in the Delfi section (262,120), and the fewest were found in the ‘Projects’ section [Projektai] (560), with an average of 30,818 across the 13 categories (see Table 1). Table 1 shows variability in commenting frequencies. The maximum number of comments on one story was 4,402, and there were 798 stories with zero comments. Similar patterns can be observed for “troll” use: the Delfi category had the prevailing number of “troll” mentions, followed by business, entertainment, and citizen, mirroring the overall comment distribution.

‘Troll’ sample extraction

This sample was subsequently filtered for the use of the word ‘troll’ in user comments. Comments were marked as containing or not containing the word ‘troll’. Since the word ‘troll’ has become part of the Lithuanian vernacular through the addition of a masculine ending, as trolis in the singular or troliai in the plural, and takes various declension forms (its ending changes according to Lithuanian grammar rules), we used these forms to construct the sample.

The word ‘troll’ was manually searched for in its various declension forms, extracted, and then manually cross-checked for contextual relevance in the following forms (trolis, trolio, troliui, troliu, trolyje, troli, troliai, trolių, troliams, trolius, troliais, troliuose). Additionally, all variations of the word ‘troll’ used by the commenters are included in the sample; instances include trolololol, Trolliputai, netroli*, sutrol*, patrol*, trolin*. Manual identification was employed in this step in order to be certain that the word ‘troll’ was contextually meaningful and was not part of other Lithuanian words; false positives (with the root ‘trol*’, e.g., ‘troleibusas’ — trolleybus) were excluded from the sample.
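As a rough sketch of this filtering step (assuming the hypothetical ScrapedComment records above; the pattern and the exclusion list are our reconstruction of the described procedure, and in the study the final relevance check was performed manually):

    import re

    # Any token containing the root 'trol' captures the declension forms
    # (trolis, trolio, ..., troliuose) as well as creative variants such as
    # trolololol, netroli*, sutrol*, patrol*, trolin*.
    TROL_PATTERN = re.compile(r"\w*trol\w*", re.IGNORECASE | re.UNICODE)

    # Illustrative false-positive roots sharing 'trol', e.g., 'troleibusas'
    # (trolleybus); the study excluded such hits manually.
    FALSE_POSITIVE_ROOTS = ("troleibus", "kontrol")

    def mentions_troll(text: str) -> bool:
        """Flag a comment as a candidate 'troll' mention; in the study every
        candidate was then manually cross-checked for contextual relevance."""
        for match in TROL_PATTERN.finditer(text.lower()):
            if not any(root in match.group() for root in FALSE_POSITIVE_ROOTS):
                return True
        return False

    # Usage: troll_sample = [c for c in comments
    #                        if mentions_troll(c.body) or mentions_troll(c.heading)]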

Ideological troll sample extraction

To account for ideological troll use, the troll sample described above was subsequently analyzed qualitatively with reference to ideological terminology, which we adapted from the news article about “the Agency” and the Russian workers paid to comment online on behalf of the government (Chen, 2015). We expanded the cases pertinent to ideological trolling by including words and expressions associated with Russian trolling found in the text, such as “Putin”, “Kremlin”, “red trolls”, “Moscow trolls”, “ruble trolls”, “kopek” trolls, or “sold-out” trolls. This process was based on the grounded theory approach suggested by Glaser and Strauss (1967).
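Continuing the sketch, the ideological subset can be approximated with the stems reported in Table 2 below (a simplified keyword pre-filter; the study’s actual coding was qualitative and grounded in context, not keyword matching alone):

    # Stems associated with "Kremlin troll" use (cf. Table 2): red, Kremlin,
    # Russia/Russian, kopek, ruble, Moscow, Putin.
    IDEOLOGICAL_STEMS = ("raud", "kreml", "rusi", "rusu", "kapeik",
                         "rubl", "maskv", "putin")

    def is_ideological(text: str) -> bool:
        """Rough pre-filter for ideologically charged 'troll' comments;
        matches were read in context before being assigned to a theme."""
        lowered = text.lower()
        return any(stem in lowered for stem in IDEOLOGICAL_STEMS)

    # Usage: ideological_sample = [c for c in troll_sample if is_ideological(c.body)]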

Troll descriptive context

To answer research question one (What is the frequency and the news story categories in which overall “trolling” occurs in delfi.lt?), we first provide context for overall ‘troll’ use. The troll sample included the following descriptive quantitative variables to contextualize the qualitative assessments of messages containing “troll”: a) news-portal-defined topic categories; b) response vs. comment; c) users who used the word ‘troll’.

  1. Topic categories: the categories (as defined by delfi.lt) in which the word ‘troll’ was used. Messages were coded by topic category, and frequency was assessed.

  2. ‘Troll’ use in comments vs. responses: messages were coded as responses or comments, as defined by delfi.lt. These counts were computed both for messages that contained ‘troll’ and for those that did not.

  3. Number of users: the proportion of users and their frequency of ‘troll’ use. Users were identified by the IP address associated with a message, as a proxy for a user. The frequencies of users who used ‘troll’ and of those who did not were counted (a counting sketch follows this list).
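A minimal sketch of these three descriptive counts over the filtered sample (again assuming the hypothetical ScrapedComment records above; the paper does not specify the tooling actually used):

    from collections import Counter

    def describe(sample):
        """Compute the three descriptive variables for a list of ScrapedComment:
        a) frequency by topic category, b) comment vs. response split,
        c) per-user frequency of 'troll' use (IP as a proxy for a user)."""
        by_category = Counter(c.category for c in sample)
        replies = sum(1 for c in sample if c.is_reply)
        by_user = Counter(c.user_id for c in sample)
        return {
            "by_category": by_category,
            "reply_share": replies / len(sample) if sample else 0.0,
            "distinct_users": len(by_user),
            "repeat_users": sum(1 for n in by_user.values() if n > 1),
        }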

Analytical approach

To answer research question two (What themes have emerged in the use of “Kremlin troll” (and its variations) in user comments?), qualitative thematic analysis was employed to identify themes associated with ideological trolling in the comments identified through “Putin”, “Moscow” trolls, “Kremlin”, “red trolls”, “ruble trolls”, and “kopek trolls.” Comments pertaining to ideological troll use through “Kremlin troll” were analyzed using a grounded theory approach that extracts themes from context (Glaser and Strauss, 1967).

 

++++++++++

Results

Descriptive analysis of ‘trolling’ overall

To address the first research question (What is the frequency and the news story categories in which overall “trolling” occurs in delfi.lt?), we found that out of the 400,633 overall comments, 2,284 (0.57 percent) included the word ‘trolis’ and its variations. Out of 5,358 stories written in one month on delfi.lt, there were 706 stories (13.2 percent) that had user comments containing the word ‘trolis’; however, the stories themselves did not include the word ‘trolis’ either in the headline or in the text of the story. Stories contained from one to 47 mentions of ‘troll.’ The Delfi category (which is the largest, based on the number of messages; see Table 1) had the majority of ‘troll’ comments, 1,913 (83.7 percent), yet ‘troll’ use occurred in nearly every category of the news portal’s comments.

Troll use and structural features: responses vs. comments and users

Out of all comments that contained the word ‘trolis,’ 46.1 percent were replies. In contrast, replies in the overall sample constituted 35.2 percent. There were 1,120 users behind the 2,284 troll comments. Some mentioned trolis very actively, up to 117 times per person, and 304 users (27 percent) wrote more than one comment containing the word trolis. Of the 2,284 comments, 463 featured trolis within the headline (name) of the comment and 1,888 in the body of the comment; 67 messages had trolis in both the heading and the main body.

Meanings associated with trolling in the message

To answer research question two (What themes have emerged in the use of “Kremlin troll” (and its variations) in user comments?), qualitative thematic analysis was employed to identify themes associated with ideological trolling referring to “Kremlin troll.” Based on the grounded theory approach, we found the following words and word combinations associated with “Kremlin troll” in the user comments: “Kremlin trolls” were also referred to as “Putin trolls”, “red trolls”, “ruble trolls”, and “kopek trolls.” The frequency of these words in the overall “troll” corpus is summarized in Table 2:

 

Table 2: Variations and frequency of “Kremlin troll” use.

Troll and its associations (in Lithuanian)    Translation    Frequency count    Percentage
raud*                                         red            24                 2.2
Kreml*                                        Kremlin        378                34.1
Rusi*                                         Russia         263                23.8
Rusu*                                         Russian        83                 7.5
kapeik*                                       kopek          24                 2.2
rubl*                                         ruble          115                10.4
Maskv*                                        Moscow         62                 5.6
Putin*                                        Putin          158                14.3
Total                                                        1,107              100.0

 

Theme one: Offensive

Calling out a troll

Qualitative thematic analysis was employed only on the ‘troll’ corpus pertaining to ideology, i.e., where references were made to “Kremlin”, “Putin”, “red trolls”, “kopek trolls”, “ruble trolls”, or “Moscow trolls.”

Calling out trolls was found as the first theme associated with trolls and Kremlin trolls, expressed in two different ways: by calling out trolls in the third person (‘he/she’, ‘they’) or by addressing the comment to trolls directly as ‘you’, as in the examples below:

1) kremliniai troliai von is musu forumu
kremlin trolls get out of our forums

2) Putino trolis
Putin’s troll

3) troli maskvos
Moscow’s troll

4) Trolis rublinius trolius patrolino!
Troll has trolled ruble trolls!

Variations on the attack theme were found in which users employed sarcasm, as in examples 5 and 6:

5) trolis kremlinis uzdirbo savo rublius slykstu
Kremlin troll has earned his rubles disgusting

6) etatinis kremlinis trolis skundžiasi ...
employed kremlin troll complains ...

Calling out was also used by directly addressing a comment to the person, the “Kremlin troll”:

7) Na Kremliaus troliai: Skaitote?
So Kremlin trolls: Are you reading?

8) Troliui raudonajam
To a red troll

9) Tai dirba Maskvos troliai ...
Here are Moscow trolls at work ...

Calling out a Russian troll as astroturfing

The second theme, associated with “Kremlin trolls”, was trolls as astroturfers or paid laborers:

10) parsidavusio kremlinio trolio komentaras
[this] comment [is] written by a sold-out kremlin troll

11) vidutinė alga 27 000 rublių, arba apie 380 eurų ir panašu, kad dar kris. Reikėtų kremliaus trolius išsiųst į Maskoliją ir palaikyt ant vietinės minimalkės {6000 rublių}, mažiau skleistų kliedesių.
The average salary is 27,000 rubles, or roughly 380 euros, and it looks like it will fall further. Kremlin trolls should be sent to Muscovy [Russia] and kept on the local minimum wage {6,000 rubles}; they would spread fewer deliriums.

As seen in comments 10) and 11) above, users are called out as astroturfers.

No references to Kremlin but references to payment

There were cases where astroturfing was inferred without a direct mention of “Kremlin troll” but through references to paid labor, as in the examples below:

12) Trys samdyti troliai ...
[You/they are] three hired trolls ...

13) kolo radu troliai jau buvo aprime, bet dabar straipsnio pavadinime, tai visi suguzejo cia vel:))))) atnaujino agentura finansavima?:))))
koloradu trolls have calmed down but because of the article’s title they came back here again :))))) has the agency renewed the financing:)))) [4]

In addition, ‘troll’ was found to be used in opposition to being Lithuanian:

14) Nebūk trolis, būk už Lietuvą
Don’t be a troll, be for Lithuania

Calling out Kremlin troll astroturfing and sarcasm

“Kremlin trolls” as paid laborers were called out with sarcasm, as seen in examples 15 and 16:

15) visi raudonieji troliai! greit po 50 raudonų komentarų parašot, nes kitaip kapeikas kaip savo ausis matysit
[to] all you red trolls! Quickly write your 50 red comments each; otherwise you will see kopeks like your own ears [i.e., never].

16) nepergyvenk ne tu vienas ... cia daug maskvos troliu tokiu :))))
don’t worry you are not alone ... there are many Moscow trolls like you :))))

Unveiling trolls

Unveiling “Kremlin trolling” or astroturfing strategies was found as another theme, as in examples 17 and 18:

17) žinokit, didžiausiuose Lietuvos žinaisklaidos portaluose darbuojasi kremliaus troliai. TIKSLAS Skleisti rusišką propogandą ir melą. įdomu yra tai kad apie 80 % prorusiškų komentarų ateina tik iš keleto kompiuterių.. Kiekvienas jų per valandą laiko gali parašyti iki 70 komentarų. Nesunk paskaičiuoti kad iš skirtingų10 kompiuterių jie gali labai greitai užversti bet kokį straipsnį savo kremlių šlovinančiais bei prieš Lietuvą nukreiptais komentarais. Jie dažnai keičia IP tam kad atrodytų kad yra rašoma iš skirtingų kompiuterių ir jų yra daug... TAM jie naudoja proxy.com, hidemyIP.com ir pan. programėles. ESU TIKRAS KAD LIETUVOS žMONėS NETIKI IR NETIKėS Jų SKLEIDžIAMA MELO PROPOGANDA !
I want to let you all know that kremlin trolls are at work in the biggest Lithuanian news portals. Their AIM is to spread Russian propaganda and lies. It is interesting that around 80% of pro-Russian comments come from only a few computers.. Each of them can write up to 70 comments in a given hour. It is not difficult to calculate that from 10 different computers they can very quickly flood any article with kremlin-glorifying and anti-Lithuania comments. They often change IPs so that it would seem that they write from various computers and that there are many of them ... FOR THAT they use proxy.com, hidemyIP.com and similar programs. I AM SURE THAT LITHUANIAN PEOPLE DO NOT BELIEVE AND WILL NOT BELIEVE THE LYING PROPAGANDA THEY SPREAD!

18) pastebekit puse cia komentuojanciu IP yra pavogti is ispanijos, USA, Islandijos ir t.t !!! Cia kremliniai troliai dirba pries Lietuva uz rublius.. ir todel slepia savo tikrus IP nuo Lietuvos teisesaugos..
notice that half of the commenters’ IPs here are stolen from Spain, the USA, Iceland, etc.!!! This is the work of kremlin trolls against Lithuania for rubles.. and that is why they hide their real IPs from Lithuanian law enforcement..

Self-identifying as a Kremlin troll

In contrast to the theme of calling out “Kremlin trolls,” some users self-identified as trolls. Troll as a self-identifier was found as another theme, exemplified below:

19) Trolis: jei ir vaziuosiu tai tik i donecka,,,, nes man patinka vodka ir ikrai ir be to as esu putino trolius
Troll: even if I go [anywhere], it would only be to Donetsk,,,, because I love vodka and caviar and, besides that, I am putin’s troll [5]

20) Trolis: kremlius liepe trolinti visus straipsniu su Lietuvos prezidente..., uztrolinom ir si..., bet buvo pranesta, kad musu sarasa Lietuvos saugumas jau sudare ir greitai galim sesti i kalejima..., ar cia tiesa...?
Troll: the kremlin has commanded [us] to troll all the articles about the Lithuanian president ..., we have trolled this one too ..., but we have been informed that Lithuanian national security has already compiled a list of us and soon we may all go to jail ..., is this true..?

Some users responded by identifying as a “Landsberginis” (Landsbergis) troll in contrast to a Kremlin troll:

21) Neatspejai lazberginis trolinis nes trolinu ne kremliu o lanzbergsuvas :)))))))
You did not guess it right, [I am] a Landsbergis troll because I am trolling not the Kremlin but the Landsbergises :)))))))

Vytautas Landsbergis, referenced in this comment, is a Lithuanian politician who has been critical of Russia.

Theme two: Defensive rebuttal by “Kremlin trolls”

Defensive “Kremlin troll” rebuttal was another theme; it included accusations and blame that shifted attention from “Kremlin trolls” to other issues.

“Kremlin troll” calling out rebuttal through accusations and blame

Blame was placed on the online news portals for not eliminating commenting options:

22) NIEKAS NENORI STABDUTI TROLIU: Lietuviski infosaitai kenkia lietivai tycis leisdami trolinti bet niekas net neketina to stabdyti, kad geriau atrodytu — Mus puola, mes aukos. Niekas net nesiulo infosaotams isimti galimybe komentuoti
NOBODY WANTS TO STOP TROLLS: Lithuanian infosaitai [Russian-derived word for online news portals] harm Lithuania by deliberately allowing trolling, and nobody even considers stopping it, so that it looks better: we are attacked, we are victims. Nobody even suggests that the infosaitai remove the possibility to comment.

Blame was also expressed by shifting attention from the issue of “Kremlin trolls” to socioeconomic issues in Lithuania, such as high emigration rates:

23) Aha, aplink vien putino troliai, o 1,5 mln siaip sau isvaziavo, is gero gyvenimo.
Sure, putin trolls are all around us, but 1.5 mln people have left just like that, because of the good life.

The calling-out strategy was also used against the founder of the “Stop Kremlin Trolls” group:

24) Kai grupes kuriamos stop kremliaus troliams tai viskas normalu, po kirkvienu straipsniu tas pats zmogus komentuoja ir jau vemt vercia su savo 206 nariais, o kai atsirado daug zmoniu su maziau praplautom smegenim tai jau kremliniai ? Ne visi parsidave uz doleri !!!! Stabdykit tuos kremliaus trolius kuriais esate jus patys
When groups such as stop kremlin trolls are created, it is all normal; after each article the same person comments and makes me sick with his 206 members, and when many people with less brainwashed brains emerged, they are suddenly Kremlin [trolls]? Not everyone has sold out for a dollar!!!! Stop those kremlin trolls who are you yourselves

This rebuttal comment uses the same elements of astroturfing as the “Stop Kremlin Trolls” group does when calling out “Russian trolls”; instead, however, it references paid labor in dollars (i.e., U.S. influence), not rubles.

Defensive responses to accusations regarding IP address changes were another example of “Kremlin troll” rebuttal:

25) Jau kelintą kartą tas pats veikėjas visus čia kaltina IP keitimu ir kremliaus troliais pravardžiuoja. Gal jau užteks šiandienai? Nes kaip ir nieko protingo nepasakote.
This is not the first time that the same person has accused everyone here of changing IPs and [condescendingly] called them kremlin trolls. Maybe that is enough for today? You never say anything smart anyway.

Theme three: Reactions to a “Kremlin troll” phenomenon

Users expressed disappointment that these types of “Kremlin trolling” take place and are tolerated by national security:

26) J8s ne perspedinekit, o pakuokit tuos trolius. Su jusu turima info ir techninemis priemonemis cia kaip 123 suskaiciuot.
Don’t just warn us; get those trolls. With the information and technical means you have, counting them is as easy as 123.

27) Troliu komentarus nedelsiant turim salinti
We should delete troll comments right away

28) Matai, jei taip pasisakytų tikri žmonės, nesislepiantys po išgalvotais fb profiliais, jei tai darytų atvirai ir garbingai, tai būtų viena. O dabar čia visiškas trolinimas.
You see, if real people spoke like this, not hiding behind invented fb [Facebook] profiles, if they did so openly and honorably, that would be one thing. But as it is, this is complete trolling. [6]

 

++++++++++

Discussion

This study analyzed themes associated with messages of ideological trolling, with reference to “Kremlin trolls,” on the most prominent Lithuanian news portal, delfi.lt. To identify ideological trolling, we first described the overall use of ‘troll’ in the sample; qualitatively, however, we focused only on themes associated with “Kremlin troll” and its variations. “Kremlin troll” was found to be used in themes of calling out “Kremlin trolls” and of rebuttal by “Kremlin trolls.” The emergence of uncertainty surrounding both the calling out of “Kremlin trolls” and the “Kremlin troll” rebuttal leaves an unanswered question: Who is the troll? The uncertainty created through comments, furthermore, resembles the current media environment of “fake news” that is prevalent both in English-speaking (see, e.g., Berkowitz and Schwartz, 2016) and non-English-speaking contexts (see, e.g., Khaldarova and Pantti, 2016).

Whether community trolls are working to draw out paid Russian trolls, or paid trolls are working to perpetuate their status rather than submissively disappear, there is a symbiotic relationship occurring. This interpretation continues the mode of questioning raised by Winter (2015) in his discussion of ISIS. He finds that propagandists’ first intention is to build a sort of symbiosis between themselves and the propagandee, so that their consumers willingly project a predetermined version of events across the public and private spheres.

Handling (ideological) trolling: Platform-based solutions

Theoretically and methodologically, the first logical step regarding ideological trolling online is accurately detecting trolls and trolling behavior. In online news portals, troll detection presents specific challenges not only for media practitioners who handle user comments online (Hern, 2015; Rosenbloom, 2014) but also for the scholarly community (Lee, et al., 2014; Mehta, et al., 2007; Stringhini, et al., 2015; Thomas, et al., 2013; Wei, et al., 2015).

Previous research has provided ample evidence for potential manipulation in online spaces (Fayazi, et al., 2015; Ferrara, 2015; Lee, et al., 2014; Ratkiewicz, et al., 2010) exemplifying a certain degree of success (Wang, et al., 2014). Such manipulations have been found to be used not only in trivial situations but also to achieve political influence (Thomas, et al., 2013). Techniques included analysis of accounts (Stringhini, et al., 2015; Thomas, et al., 2013) or behavioral pattern detection by extracting Twitter networks of user interactions (Ratkiewicz, et al., 2010). Mehta, et al. (2007) found that while current detection algorithms are able to use certain characteristics of spam profiles to detect spam, the algorithms suffer from low precision and require large amounts of data for training.

Specific challenges of online news portals relate to their accessibility. As online portals provide a voice for anyone to participate, they may be subject to trolling or astroturfing, especially given current identity-masking software capabilities. As Shachaf and Hara (2010) have noted, online trolls (i.e., in Wikipedia) work in isolation and under hidden virtual identities. In the case of delfi.lt this definition might apply to all of the posters, since registration is not mandatory, nor is it part of delfi.lt user commenting practices.

Institutional perspectives (decentralization)

Online comments have so far been handled on a case-by-case basis by media practitioners. Many leave it up to the participants, as overworked and understaffed news organizations may themselves be susceptible to trolling and doxing by Russian surveillance. Some Scandinavian news portals have adopted practices of blocking user comments for a given time (Almgren and Olsson, 2015), even if such practices were not related to trolling or ideological trolling. While delaying comment release may be effective in reducing ‘trolling,’ one might argue that such practices discourage participation overall, given the relevance of new news items, compared to older ones, in rapidly changing news cycles.

Decentralizing discourse from media portals could be another option. For example, communities could manage comments, as is currently the case on delfi.lt (where users can flag inappropriate comments) or as takes place on Twitter (see Geiger, 2016). Indeed, some of the user comments on delfi.lt point to a Facebook group page, “Stop Kremlin Troll,” dedicated to helping identify and reveal the tactics of trolling and ideological trolling in online comments. The existence of such a Facebook page with information on how to detect ‘trolls’ showcases the grassroots, or at least oppositional, forces that have been activated to educate commenters.

Limitations and future studies

While this study has analyzed trolling as found in user comments, we do not overlook the limitations of the positivism attributed to the do-it-yourself paradigm of Web 2.0 and the false hopes related to the Habermasian public sphere expressed by many (Bakardjieva, 2008; Bennett, 2012; Deuze, 2006; Manovich, 2009). This study shows the opposite: calling out Russian trolls, or Russian troll rebuttal, creates more uncertainty than certainty.

This study presents questions for future research. First, who are the users who ‘provoke’ the defense mechanisms or the “Russian troll” rebuttal theme found in this study? Second, previous research has identified that on Twitter, during the U.S. presidential elections, a significant number of posts were generated by automated bots rather than individuals, as discussed by Bessi and Ferrara (2016). Thus, it is not clear whether users are defending themselves from real posters or from automated bots. Future studies should address questions in relation to the ‘other side’ of trolling, i.e., by focusing on content that provokes trolling: Internet trolling or “Russian trolls.”

Broader theoretical questions remain unanswered: how can we account for ideological trolling embedded in a given political context, rather than as the discursive practice of a given subculture? What does it mean to identify such behaviors? What does it mean to post or read comments that are seeded with mistrust or with a potential ideological flavor injected by a foreign government? Future research should address how to account for extended notions of trolling when ‘trolling’ becomes part of the expression of political ideology.

 

++++++++++

Conclusion

This study has continued to investigate what Phillips (2015) calls a transition phase in trolling ideology, with a focus on Lithuanian contexts, where users adopt the ‘Kremlin troll’ frame to refer to astroturfing. The Kremlin troll was defined through two oppositional ideologies: users who called out the “Kremlin trolls” and users who mounted defensive rebuttals to the accusations placed on “Kremlin trolls.” Even if the offensive and defensive tactics might seem rather similar to overall Internet troll tactics, as discussed by Herring, et al. (2002), the unique aspect of this sample was the emergence of ideological trolling, charged with accusations of astroturfing, i.e., of some commentators being paid by a foreign government.

We conclude that the “Kremlin troll” in this study is an example of an ideological device, rather than the subcultural phenomenon that is prevalent in English-language contexts. Political paid trolls’ online behaviors have been found to differ from regular astroturfing, such as in the Indian election case reported by Ippolito (2013), which makes it difficult to say who is who. As Fichman and Sanfilippo (2016) suggest, political trolls are more ambiguous than regular community members because they may resemble a political opposition that merely represents a contrasting ideology.

Results of this study show that users employed sarcasm and humor when referring to “Kremlin trolls,” which further reinforces uncertainty. Moreover, the perceivably sarcastic or humorous self-identification as a troll downplays the astroturfing accusations presented by the media and some users. This finding shows how trolling can ultimately be seen as an ongoing, context-sensitive tactic embedded in the fundamental components of personal identity and social construction, whose definition changes in each circumstance, as suggested by previous research (Bishop, 2014; Phillips, 2015; Sanfilippo, et al., 2017). Similarly, the use of calling out trolls may be related to Bergstrom’s (2011) claim that labeling someone as a troll can be used as a justification for punishing those who transgress community behaviors.

It remains uncertain whether some of the people posting ‘trolling’ comments are actual paid trolls or trolls that emerge from the news portal’s readership. Some news portal commentators call out trolls, others identify themselves as trolls, and some even call the hosting online news portal a troll. This uncertainty illustrates how people can no longer know who is an authentic commentator and who is a paid astroturfer, and which opinions are fake or financed by a foreign government.

Another theme that emerged in the results was the disappointment expressed by commentators about the “Kremlin trolling” phenomenon and about the lack of action by the news portals themselves, national security, or the community itself. Members of an online community may be pessimistic about a news portal’s ability to cover stories, or uncertain about their own interpretations of news stories and their use. Time and further research will show how users employ the term ‘troll,’ whether in political or apolitical modalities of discourse. Meanwhile, it is still too soon to speculate if and how ‘troll’ will be used in relation to commenting on, for example, American politics on delfi.lt or other foreign news outlets. It is also too soon to make ultimate judgements as to what constitutes trolling behavior in online contexts, especially when some ideologically and politically charged comments may represent the voices of grassroots activists while others express paid opinions (Phillips, 2015). End of article

 

About the authors

Asta Zelenkauskaite is an Assistant Professor of Communication at Drexel University. Her research intersects user-centric approaches to new media and information processing.
Direct comments to: az358 [at] drexel [dot] edu

Brandon Niezgoda is an Adjunct Professor and doctoral candidate in the Department of Communication and Culture at Drexel University. His primary research areas are neoliberalism, independent filmmaking, production of culture, medical humanities, and social network analysis.
E-mail: bcn23 [at] drexel [dot] edu

 

Notes

1. Shachaf and Hara, 2010, p. 357.

2. Renz, 2016, pp. 283–300.

3. Hardaker, 2010, p. 1.

4. The phrase Koloradu trolls requires an explanation. It is a reference to the Colorado potato beetle (Leptinotarsa decemlineata), which features a color pattern of black and orange stripes on its body. This color pattern is the same as that of the Ribbon of Saint George (Георгиевская лента, Georgiyevskaya lenta), a widely recognized military symbol in Russia, used by Russian citizens as a sign of public support for the Russian government. Further details can be found in Oushakine (2013) or UAWire (2016) at http://uawire.org/news/lithuanian-faction-st-george-ribbon-a-symbol-of-russian-aggression-and-imperialist-ambitions#.

5. Donetsk is a large, industrialized city in Ukraine known for its pro-Russian separatist movement.

6. The message refers to the fact that users can comment non-anonymously via their Facebook account log-in.

 

References

S. Almgren and T. Olsson, 2015. “‘Let’s get them involved’ ... to some extent: Analyzing online news participation,” Social Media + Society, volume 1, number 2.
doi: http://dx.doi.org/10.1177/2056305115621934, accessed 23 April 2017.

M. Bakardjieva, 2008. “Bulgarian online forums as carnival: Popular political forms and new media,” In: F. Sudweeks, H. Hrachovec, and C. Ess (editors). Proceedings Cultural Attitudes Towards Communication and Technology 2008 (Murdoch University, Australia), pp. 286–300, and at http://sammelpunkt.philo.at:8080/2373/1/bakardjieva_p.pdf, accessed 23 April 2017.

P. Baker, 2001. “Moral panic and alternative identity construction in Usenet,” Journal of Computer–Mediated Communication, volume 7, number 1.
doi: http://dx.doi.org/10.1111/j.1083-6101.2001.tb00136.x, accessed 23 April 2017.

K. Bergstrom, 2011. “‘Don’t feed the troll’: Shutting down debate about community expectations on Reddit.com,” First Monday, volume 16, number 8, at http://firstmonday.org/article/view/3498/3029, accessed 14 April 2014.
doi: http://dx.doi.org/10.5210/fm.v16i8.3498, accessed 23 April 2017.

D. Berkowitz and D. Schwartz, 2016. “Miley, CNN and The Onion: When fake news becomes realer than real,” Journalism Practice, volume 10, number 1, pp. 1–17.
doi: http://dx.doi.org/10.1080/17512786.2015.1006933, accessed 23 April 2017.

J. Bishop, 2014. “Representations of ‘trolls’ in mass media communication: A review of media-texts and moral panics relating to ‘Internet trolling’,” International Journal of Web Based Communities, volume 10, number 1, pp. 7–24.
doi: http://dx.doi.org/10.1504/IJWBC.2014.058384, accessed 23 April 2017.

M. Brandel, 2007. “Blog trolls and cyberstalkers: How to beat them,” Computerworld (28 May), pp. 32–33, and at http://www.computerworld.com/article/2552596/networking/blog-trolls-and-cyberstalkers--how-to-beat-them.html, accessed 23 April 2017.

Š. Černiauskas, 2013. „Trolių“ metai: interneto šmaikštuoliams 2013-ieji buvo kaip niekad derlingi [The year of the ‘trolls’: 2013 has been a fruitful year for Internet witty], at http://www.delfi.lt/news/daily/lithuania/troliu-metai-interneto-smaikstuoliams-2013-ieji-buvo-kaip-niekad-derlingi.d?id=63642420, accessed 23 April 2017.

A. Chen, 2015. “The agency,” New York Times Magazine (2 June), at http://www.nytimes.com/2015/06/07/magazine/the-agency.html, accessed 5 July 2015.

A. Cox, 2006. “Making mischief on the Web,” Time (16 December), at http://content.time.com/time/magazine/article/0,9171,1570701,00.html, accessed 23 April 2017.

L. Dahlberg, 2001. “Computer–mediated communication and the public sphere: A critical analysis,” Journal of Computer–Mediated Communication, volume 7, number 1.
doi: http://dx.doi.org/10.1111/j.1083-6101.2001.tb00137.x, accessed 23 April 2017.

J. Donath, 1999. “Identity and deception in the virtual community,” In: M. Smith and P. Kollock (editors). Communities in cyberspace. London: Routledge, pp. 29–59.

P. Fichman and M. Sanfilippo, 2016. Online trolling and its perpetrators: Under the cyberbridge. Lanham, Md.: Rowman & Littlefield.

R. Geiger, 2016. “Bot-based collective blocklists in Twitter: the counterpublic moderation of harassment in a networked public space,” Information, Communication & Society, volume 19, number 6, pp. 787–803.
doi: http://dx.doi.org/10.1080/1369118X.2016.1153700, accessed 23 April 2017.

B. Glaser and A. Strauss, 1967. The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine.

C. Hardaker, 2010. “Trolling in asynchronous computer-mediated communication: From user discussion to academic definitions,” Journal of Politeness Research, volume 6, number 2, pp. 215–242.
doi: https://doi.org/10.1515/jplr.2010.011, accessed 23 April 2017.

C. Harris, 2012. “Detecting deceptive opinion spam using human computation,” AAAI Technical Report, WS-12-08, at http://www.aaai.org/ocs/index.php/WS/AAAIW12/paper/download/5256/5607, accessed 23 April 2017.

A. Hern, 2015. “Twitter CEO: We suck at dealing with trolls and abuse,” Guardian (5 February), at http://www.theguardian.com/technology/2015/feb/05/twitter-ceo-we-suck-dealing-with-trolls-abuse, accessed 23 April 2017.

S. Herring, K. Job-Sluder, R. Scheckler, and S. Barab, 2002. “Searching for safety online: Managing ‘trolling’ in a feminist forum,” Information Society, volume 18, number 5, pp. 371–384.
doi: https://doi.org/10.1080/01972240290108186, accessed 23 April 2017.

N. Ippolito, 2013. “Fox News paid staffers to troll the hell out of the interwebs, according to new book,” PolicyMic (21 October), at https://mic.com/articles/69139/fox-news-paid-staffers-to-troll-the-hell-out-of-the-interwebs-according-to-new-book#.IdqX4OYJC, accessed 23 April 2017.

I. Khaldarova and M. Pantti, 2016. “Fake news: The narrative battle over the Ukrainian conflict,” Journalism Practice, volume 10, number 7, pp. 891–901.
doi: http://dx.doi.org/10.1080/17512786.2016.1163237, accessed 23 April 2017.

I. Kozachenko, 2015. “Bad news for Putin as support for war flags beyond Russia’s ‘troll farms’,” The Conversation (19 August), at http://theconversation.com/bad-news-for-putin-as-support-for-war-flags-beyond-russias-troll-farms-46235, accessed 30 March 2016.

M. Kroenig, 2015. “Facing reality: Getting NATO ready for a new Cold War,” Survival, volume 57, number 1, pp. 49–70.
doi: http://dx.doi.org/10.1080/00396338.2015.1008295, accessed 23 April 2017.

K. Lee, S. Webb, and H. Ge, 2014. “The dark side of micro-task marketplaces: Characterizing Fiverr and automatically detecting crowdturfing,” arXiv (3 June), at https://arxiv.org/abs/1406.0574, accessed 23 April 2017.

B. Mehta, T. Hofmann, and P. Fankhauser, 2007. “Lies and propaganda: Detecting spam users in collaborative filtering,” IUI ’07: Proceedings of the 12th International Conference on Intelligent User Interfaces, pp. 14–21.
doi: http://dx.doi.org/10.1145/1216295.1216307, accessed 23 April 2017.

J. Mindaugas, 2012. “Lietuva — tėvynė trolių ir varguolių [Lithuania – The land of the trolls and the hardworking]” (6 April), at http://www.delfi.lt/pilietis/voxpopuli/lietuva-tevyne-troliu-ir-varguoliu.d?id=57638265, accessed 23 April 2017.

R. Naraine, 2007. “The 10 biggest Web annoyances,” PCWorld (9 November), at http://www.pcworld.com/article/138872/article.html, accessed 23 April 2017.

S. Oushakine, 2013. “Remembering in public: On the affective management of history,” Ab Imperio, volume 2013, number 1, pp. 269–302.
doi: http://dx.doi.org/10.1353/imp.2013.0000, accessed 27 April 2017.

K. Pearce, 2015. “Democratizing kompromat: The affordances of social media for state-sponsored harassment,” Information, Communication & Society, volume 18, number 10, pp. 1,158–1,174.
doi: http://dx.doi.org/10.1080/1369118X.2015.1021705, accessed 23 April 2017.

W. Phillips, 2015. This is why we can’t have nice things: Mapping the relationship between online trolling and mainstream culture. Cambridge, Mass.: MIT Press.

J. Ratkiewicz, M. Conover, M. Meiss, B. Gonçalves, S. Patil, A. Flammini, and F. Menczer, 2010. “Detecting and tracking the spread of astroturf memes in microblog streams,” arXiv (16 November), at https://arxiv.org/abs/1011.3768, accessed 23 April 2017.

B. Renz, 2016. “Russia and ‘hybrid warfare’,” Contemporary Politics, volume 22, number 3, pp. 283–300.
doi: http://dx.doi.org/10.1080/13569775.2016.1201316, accessed 23 April 2017.

S. Rosenbloom, 2014. “Dealing with digital cruelty,” New York Times (23 August), at http://www.nytimes.com/2014/08/24/sunday-review/dealing-with-digital-cruelty.html, accessed 23 April 2017.

M. Sanfilippo, S. Yang, and P. Fichman, 2017. “Managing online trolling: From deviant to social and political trolls,” Proceedings of the 50th Hawaii International Conference on System Sciences, Critical and Ethical Studies of Digital and Social Media Minitrack; version at http://hdl.handle.net/10125/41373, accessed 23 April 2017.

P. Shachaf and N. Hara, 2010. “Beyond vandalism: Wikipedia trolls,” Journal of Information Science, volume 36, number 3, pp. 357–370.
doi: http://dx.doi.org/10.1177/0165551510365390, accessed 23 April 2017.

G. Stringhini, P. Mourlanne, G. Jacob, M. Egele, C. Kruegel, and G. Vigna, 2015. “EVILCOHORT: Detecting communities of malicious accounts on online services,” 24th USENIX Security Symposium, at https://www.usenix.org/node/190847, accessed 23 April 2017.

K. Thomas, D. McCoy, C. Grier, A. Kolcz, and V. Paxson, 2013. “Trafficking fraudulent accounts: The role of the underground market in Twitter spam and abuse,” 22nd USENIX Security Symposium, at https://www.usenix.org/conference/usenixsecurity13/technical-sessions/paper/thomas, accessed 23 April 2017.

UAWire, 2016. “Lithuanian faction: St. George Ribbon a symbol of ‘Russian aggression and imperialist ambitions’,” (30 January), at http://uawire.org/news/lithuanian-faction-st-george-ribbon-a-symbol-of-russian-aggression-and-imperialist-ambitions#, accessed 23 April 2017.

S. Walker, 2015. “Salutin’ Putin: inside a Russian troll house,” Guardian (2 April), at http://www.theguardian.com/world/2015/apr/02/putin-kremlin-inside-russian-troll-house, accessed 23 April 2017.

G. Wang, T. Wang, H. Zheng, and B. Zhao, 2014. “Man vs. machine: Practical adversarial detection of malicious crowdsourcing workers,” 23rd USENIX Security Symposium, at https://www.usenix.org/node/184406, accessed 23 April 2017.

W. Wei, K. Joseph, H. Liu, and K. Carley, 2015. “The fragility of Twitter social networks against suspended users,” ASONAM ’15: Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015, pp. 9–16.
doi: http://dx.doi.org/10.1145/2808797.2809316, accessed 23 April 2017.

C. Winter, 2015. The virtual ‘caliphate’: Understanding Islamic State’s propaganda strategy. London: Quilliam, at http://www.stratcomcoe.org/charlie-winter-virtual-caliphate-understanding-islamic-states-propaganda-strategy, accessed 23 April 2017.

E. Woodcock and N. Baum, 2015. “Marketing: Understanding the modern patient and consumer,” In: N. Baum, R. Bonds, T. Crawford, K. Kreder, K. Shaw, T. Stringer, and R. Thomas (editors). The complete business guide for a successful medical practice. Cham, Switzerland: Springer International, pp. 207–224.
doi: http://dx.doi.org/10.1007/978-3-319-11095-0_18, accessed 23 April 2017.

L. Ziegahn, 2001. “‘Talk’ about culture online: The potential for transformation,” Distance Education, volume 22, number 1, pp. 144–150.
doi: http://dx.doi.org/10.1080/0158791010220109, accessed 23 April 2017.

 


Editorial history

Received 24 March 2017; accepted 24 April 2017.


Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

“Stop Kremlin trolls:” Ideological trolling as calling out, rebuttal, and reactions on online news portal commenting
by Asta Zelenkauskaite and Brandon Niezgoda.
First Monday, Volume 22, Number 5 - 1 May 2017
http://firstmonday.org/ojs/index.php/fm/article/view/7795/6225
doi: http://dx.doi.org/10.5210/fm.v22i5.7795




