First Monday

Get lost, troll: How accusations of trolling in newspaper comment sections affect the debate by Magnus Knustad



Abstract
This qualitative study explores instances where someone is accused of being a troll or a bot in newspaper comment sections. Trolls have been known to create a hostile environment in comment sections, often motivated by attention seeking and amusement. In recent years, following the Brexit vote and the U.S. presidential election of 2016, trolls have also been accused of actively undermining the Western political climate by using social media to divide political opponents. Furthermore, technological development has made it possible for automated software, known as bots, to play a role in online debates. As social media users and participants in online comment sections become more digitally literate, awareness of trolls and bots will hopefully make people less susceptible to online manipulation. But this awareness could also cause commenters to discredit and delegitimize opposing arguments in comment sections by accusing others of being a troll or a bot, without considering the merits of the argument itself. If this is the case, it poses a challenge to creating a democratically valuable debate in comment sections. In this study, comments from three U.S. news sites were sampled and analyzed to investigate how accusations of trolling are made, and how debates are affected by such accusations. The results showed that right-wing commenters were more likely to be accused of trolling, and that these accusations seem to have been motivated by political differences. Accusers would either challenge the suspected troll, critique the effectiveness of the perceived trolling, make fun of the suspected troll, or simply warn other commenters about their presence. Finally, while debates often continued after an accusation of trolling had been made, the accuser and the accused rarely participated further. The results suggest that accusations of trolling do not have any major impact on the debate. It is, however, problematic that such accusations seem to be used as a rhetorical tool to discredit opposing arguments, which could lower the deliberative quality of debates in comment sections.

Contents

Introduction
Methodology
Results
Discussion
Conclusion

 


 

Introduction

Newspaper comment sections have been described as a staple of the online experience (Finley, 2015). With approximately 90 percent of news sites having some form of comment section (Stroud, et al., 2017), it has become possible for readers of almost any newspaper to share their views with a large audience and add their voices to public debates (Artime, 2016). Newspaper comment sections provide an arena for public debate and have been found to shape the opinions of readers and influence how journalists work (Toepfl and Piwoni, 2015). As with any digital platform with user-generated content, newspaper comment sections can be susceptible to trolling behavior. In recent years, the mainstream media have given trolls and bots much attention, including how trolling may be used as a method of political influence. As Internet users become more knowledgeable about these disruptive elements, they may come to expect to encounter trolls in comment sections. The availability heuristic, a psychological mechanism in which a person judges the likelihood of an event by how readily pertinent examples come to mind [1], could affect the likelihood of a commenter judging the author of a disagreeable comment to be a troll. Overreporting of a topic can lead individuals to form a biased assessment of risk [2]. Because of the increased mainstream reporting on trolling, bots, and social media being used for foreign political influence, individuals may form a biased assessment of the risk of encountering trolls or bots online, including in newspaper comment sections.

The increased focus on trolling could cause users of comment sections to react appropriately to divisive content and trolling behavior. However, it may also provide an opportunity for debaters to disregard arguments from people with opposing political views. Accusations of trolling could potentially be used to shut down opposing arguments, whether these are made by trolls or not. In some cases, a commenter may even be accused of being a bot. This qualitative study aims to explore accusations of trolling in the comment sections of three newspapers: Politico, Washington Post, and New York Times. Comment sections have the potential to be a democratically valuable forum for public debate, where individuals can openly discuss topics of common interest and share experiences and information relevant to news stories. It is therefore important to understand not only how debates in comment sections are affected by trolling, but also how participants react to the possibility of trolling taking place. At its core, an accusation of trolling represents a disbelief in a commenter’s intentions and credibility, and it is important to understand the motivations and effects of such accusations. To explore this topic, this study investigates how accusations of trolling in newspaper comment sections are made, and how these accusations are responded to.

Research on trolling in newspaper comment sections faces several methodological challenges. Firstly, comment sections are usually moderated by newspaper employees who may delete comments containing examples of trolling. In recent years, newspapers have begun taking editorial action against unwanted comments, such as increasing moderation and identifying commenters by requiring them to sign up for an account or to sign their comments using their Facebook identity (Gonçalves, 2015; Ihlebæk, et al., 2013; Sonderman, 2011; Stroud, et al., 2017).

Secondly, identifying comments that are written by trolls can be problematic. The term trolling can refer to a variety of online activities, some of which may look innocent at first glance. For example, trolls can share positive content to gain an online following (Linvill and Warren, 2019), or pretend to agree with the opposing side of an issue in order to voice their disagreements with that side in the form of “concerns” (Castile, 2016). Internet trolls can also be considered a form of social hackers who, according to Kerr and Lee (2019), use technical and soft skills, such as manipulating social interactions and dynamics, to manipulate their targets. For most people, however, the term trolling refers to uncivil or impolite online behavior. But such behavior could be confused with sincere but uncivil or impolite comments, which are commonly found in newspaper comment sections (Graham and Wright, 2015; Reagle, 2015; Rowe, 2015). While identifying comments written by trolls can be difficult, identifying accusations of trolling is less challenging. The current study investigates such accusations to better understand how accusations of trolling affect the debate in comment sections. The study has three goals: 1) to analyze comments that have been accused of being written by trolls or bots, 2) to analyze how such accusations are made, and 3) to investigate how such accusations are responded to. In this paper, I will first go through current research on the topics of trolling and bots, and the mechanisms by which increased mainstream attention to these topics could make commenters more likely to judge opposing arguments in comment sections as trolling behavior. I will then explain the methodology and results of the study, before discussing the results.

Trolls and bots

The online world provides us with an unprecedented amount of information. However, as Hardaker points out, that information can be dangerously wrong, and computer-mediated communication involves the possibility of deception [3]. Deception is at the core of trolling, which has been defined as “the practice of behaving in a deceptive, destructive, or disruptive manner in a social setting on the Internet with no apparent instrumental purpose” (Buckels, et al., 2014). Traditionally, trolls are jokesters who behave in an antagonistic way for their own amusement [4]. Their attempts to elicit reactions from their victims can be motivated by boredom, attention seeking, revenge, pleasure, and a desire to cause damage to a community (Shachaf and Hara, 2010). Correlations have also been found between trolling behavior and certain personality traits. Using a variety of personality tests, such as the Short Sadistic Impulse Scale, Varieties of Sadistic Tendencies Scale, Comprehensive Assessment of Sadistic Tendencies, Short Dark Triad Scale, and Big Five Inventory, researchers found that trolling correlates with sadism, psychopathy, and Machiavellianism (Buckels, et al., 2014). In a more recent study, Buckels, et al. (2019) found that trolls and sadists took pleasure in visual representations of people in physical or emotional pain while downplaying the magnitude of that pain, and that they reacted more positively to reading about harmful scenarios.

In addition to trolling, bots have become a well-known online phenomenon. The term is defined by Bastos and Mercea as “automatic posting protocols used to relay content in a programmatic fashion” [5]. Bots are essentially computer programs that can use the Internet to add content to social media platforms. They have a wide variety of uses, including user interaction and the automation of tedious tasks (Lebeuf, et al., 2018). Bots created for interaction with humans have been found to lack authenticity and social competence (Neururer, et al., 2018). Bots can, however, be used successfully to spread disinformation. One study found that bots were used to spread anti-vaccine messages on Twitter (Broniatowski, et al., 2018). The researchers found that the strategy used by trolls was to generate several tweets about the same topic to flood the discourse, and that the bots posted content at a higher rate than the average Twitter user. The bots were primarily used for spreading content, while the human trolls promoted discord by targeting both sides of the vaccine debate. Another study, on the U.K. Brexit referendum, found that bots on Twitter were effective at creating small- to medium-sized retweet cascades, that the content retweeted by bots comprised user-generated hyperpartisan news, and that clusters of bots in a botnet could replicate active users (Bastos and Mercea, 2019).

A much-discussed topic in recent years is the idea of foreign influence on Western politics. Organized cyber operations have been used to influence European politics, though the effect of such activities is described as limited (Karlsen, 2019). According to Stewart, et al. (2018), troll accounts on Twitter took advantage of the Black Lives Matter movement to create discord during the U.S. presidential election of 2016. The content produced by these accounts rarely crossed political divides, suggesting that filter bubbles and echo chambers keep disinformation within political camps. These types of findings help to fuel a general conception of divisive political influence through social media, sometimes perpetrated by foreign entities.

Accusations of trolling

While the history of trolling can be traced back to the 1980s, the concept did not receive much mainstream attention until 2010 [6]. In recent years, the topic of foreign influence on Western politics has received much attention from the mainstream media. Most people are aware of the existence of bots, if only because most Internet users will at some point have to prove their humanity by completing a captcha test — a test that bots have been known to pass (Sulleyman, 2017). In addition, bots have received much media attention in recent years, with one study about bots’ influence on the 2016 Brexit referendum in the U.K. being reported on in over 250 news articles [7]. Terms such as trolling and bots have become widely used in digital communities, such as comment sections, and there is much awareness of these disruptive elements among Internet users as anxiety about trust, facts, and democracy intensifies (Dimock, 2019).

Having knowledge and understanding of online phenomena is considered an important skill and a requirement for democratic participation, both by researchers and by public institutions such as the European Union (European Commission, 2016). In the early days of the Web, Wang (1996) argued that educating the ignorant would help against the negative effects of flaming. Howard Rheingold writes that “those who understand the fundamentals of digital participation, online collaboration, informational credibility testing, and network awareness will be able to exert more control over their own fates than those who lack this lore” [8]. Graham and Wright (2015) are among the researchers who expect user behavior to have evolved as people gain more experience with, for example, trolling. Kerr and Lee (2019) claim that a lack of technical literacy is one of the aspects of their targets that trolls take advantage of, meaning that increased technical literacy should make Internet users less susceptible to trolling.

As Internet users gain more knowledge about trolls and bots, accusations of trolling can be expected to increase. Accusations of trolling may function as a tool to delegitimize extremist points of view or actual trolling behavior in comment sections. However, they may also be used simply to discredit and delegitimize arguments one does not agree with. Having knowledge about disruptive elements such as trolls and bots may provide an opportunity for debaters to disregard opposing arguments by claiming they are made by people or bots with sinister intentions. In any online discussion, there will be disagreements. When faced with arguments that go against their preconceptions, a person may rationalize their beliefs by discrediting opposing arguments [9]. With knowledge of the existence of trolls and bots, a person can discredit and delegitimize an opposing argument by accusing its author of being a troll or a bot, without having to consider the merits of the argument itself. If, for example, a person who identifies as a liberal sees a comment that they find offensive because it is written in support of conservative ideals, that person may be tempted to think the comment was written by a troll simply because they are aware of the issue of trolling from mainstream media. This may pose a problem for creating a democratically valuable online debate, as accusations of trolling could become a form of exclusion that decreases the value of online political debates. It may also be problematic on a personal level for any real person making an argument, only to be met with accusations of being a troll or a foreign agent, or of not even being human. It could be uncomfortable for a person to have their arguments dismissed, and to be accused of being something that they are not.

There has been little research on how accusations of trolling are responded to, whether by the person being accused or by other commenters. This has left a gap in our understanding of online debates. Comment sections are the target of much research on how incivility and toxic disinhibition affect their deliberative value, but I would argue that if accusations of trolling are used as a rhetorical tool to devalue opposing arguments, this could also affect the deliberative value of comment sections. However, despite the lack of research into accusations of trolling, some research has been done on how trolls are responded to by others. Hardaker studied responses to trolling on Usenet and identified seven types: 1) Engaging by responding sincerely to the troll; 2) Ignoring the trolling attempt; 3) Exposing the troller to the rest of the group; 4) Challenging the troller directly or indirectly; 5) Critiquing the effectiveness, success, or quality of the troller; 6) Mocking or parodying the trolling attempt; and, 7) Reciprocating by trolling the troller [10]. Hardaker’s study focuses on creating a taxonomy of the different ways people respond to perceived trolling, which makes it relevant to the current study. Hardaker’s response types are one of the methods used in the current study to investigate accusations of trolling in comment sections.

 

++++++++++

Methodology

The three newspapers chosen for this study were Politico, Washington Post, and New York Times. These newspapers were chosen because they provide different venues for studying comment sections, with different levels of anonymity. Politico is a free-to-read newspaper that uses a Facebook plug-in as a comment section. This means that commenters on Politico must use their Facebook account when commenting. The Washington Post and New York Times do not use Facebook for their comment sections. Commenters on these news sites must create an account and choose a username, which can either be a pseudonym or their real name. The Washington Post and New York Times also have online subscription models that pose a barrier for some commenters, as a subscription is required to be able to read and comment on any article.

Constructed week sampling was used to create two constructed weeks from February of 2018 to February of 2019 for each newspaper being studied. This involved selecting two random Mondays, two random Tuesdays, etc., during the specified timeframe. This method of sampling is recommended for studying daily newspapers because it creates a randomly selected issue for each day of the week. Two constructed weeks have been found to be sufficient for representing a year’s content [11]. A total of 3,851 comments were collected from politically themed articles and stored in a database using this method. To ensure the anonymity of the commenters, names were continuously replaced with numeric identifiers.
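
For illustration, the sampling and anonymization steps could be implemented along the following lines. This is a minimal sketch, not the actual script used in the study; the function and variable names are my own assumptions:

    import random
    from datetime import date, timedelta

    def constructed_weeks(start, end, n_weeks=2, seed=None):
        """Draw n_weeks random dates for each weekday within [start, end]."""
        rng = random.Random(seed)
        # Group every date in the timeframe by weekday (0 = Monday ... 6 = Sunday).
        by_weekday = {wd: [] for wd in range(7)}
        d = start
        while d <= end:
            by_weekday[d.weekday()].append(d)
            d += timedelta(days=1)
        # Pick, e.g., two random Mondays, two random Tuesdays, and so on.
        sample = []
        for wd in range(7):
            sample.extend(rng.sample(by_weekday[wd], n_weeks))
        return sorted(sample)

    # Two constructed weeks per newspaper, February 2018 to February 2019.
    sampled_days = constructed_weeks(date(2018, 2, 1), date(2019, 2, 28))

    # Anonymization: each commenter name maps to a stable numeric identifier.
    ids = {}
    def anonymize(name):
        return ids.setdefault(name, len(ids) + 1)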

After the comments were sampled, search queries were devised to identify accusations of trolling. Through a combination of SQL queries and free-text search, comments containing any of the words “troll”, “bot”, or “Russian” were identified. While this was a thorough method for searching the comments, it does not guarantee that all accusations were found. Accusations containing misspellings or unfamiliar analogies may have been missed.
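
A sketch of what such a search could look like, assuming the sampled comments were stored in an SQLite table named comments with a body column; the schema and all names here are hypothetical, as the paper does not describe the actual database:

    import sqlite3

    conn = sqlite3.connect("comments.db")  # hypothetical database of sampled comments

    # Case-insensitive substring match on the three keywords.
    query = """
        SELECT id, body FROM comments
        WHERE LOWER(body) LIKE '%troll%'
           OR LOWER(body) LIKE '%bot%'
           OR LOWER(body) LIKE '%russian%'
    """
    candidates = conn.execute(query).fetchall()

A pattern such as '%bot%' also matches unrelated words like “about”, so candidate comments would still need to be read manually, which is consistent with the combination of queries and free search described above.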

To analyze the sampled comments, a descriptive approach was used for each identified case (n=24). First, the different commenters and their roles were established: accused commenter, accuser, and other commenters. Then the comments of the accused and the accuser were analyzed carefully for further details about how they communicated, and the seven response types identified by Hardaker (2015) were used to categorize the accusations of trolling. While these response types are not a crucial part of the current study, I would argue that incorporating existing taxonomies can serve a function by highlighting aspects of the data that I would not have considered otherwise. Hardaker’s taxonomy was used because it provides established categories for identifying different types of responses to perceived trolling. Finally, the general discussion was mapped out, with special emphasis on how the different commenters responded to accusations of trolling and how such accusations affected the discussion. After each case had been described, general trends were identified.
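
To make the coding scheme concrete, a case record of the kind summarized in Table 1 below could be represented as follows. This is only an illustrative sketch: the field names mirror the table columns, and the category numbers follow Hardaker (2015).

    from dataclasses import dataclass, field
    from enum import IntEnum

    class HardakerResponse(IntEnum):
        """Hardaker's (2015) seven types of responses to perceived trolling."""
        ENGAGE = 1       # responding sincerely to the troll
        IGNORE = 2       # ignoring the trolling attempt
        EXPOSE = 3       # exposing the troller to the rest of the group
        CHALLENGE = 4    # challenging the troller directly or indirectly
        CRITIQUE = 5     # critiquing the effectiveness, success, or quality
        MOCK = 6         # mocking or parodying the trolling attempt
        RECIPROCATE = 7  # trolling the troller

    @dataclass
    class Case:
        case_id: int
        n_accusations: int
        leaning: str                 # "Left" or "Right", as coded in Table 1
        discussion_continued: bool
        accused_replied: bool
        labels: list = field(default_factory=list)     # e.g., ["Russian troll", "Troll"]
        responses: list = field(default_factory=list)  # HardakerResponse values

    # Example: case 11 from Table 1 (two accusations, coded as types 6 and 4).
    case11 = Case(11, 2, "Left", True, False,
                  ["Russian troll", "Troll"],
                  [HardakerResponse.MOCK, HardakerResponse.CHALLENGE])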

This methodology was approved by the Norwegian Centre for Research Data (Norsk senter for forskningsdata), which has imposed constraints to protect the privacy of the commenters whose data has been sampled. Even anonymized datasets can contain personal information that can cause a person to be identifiable (Markham and Buchanan, 2012). Therefore, in the following presentation of the results of this study, no comments will be quoted. Paraphrasing and descriptions will instead be used to illustrate the findings.

 

++++++++++

Results

In total, 30 accusations of trolling were found in the studied data, written by 31 accusers, and directed at 24 accused commenters. The reason for the discrepancy between the number of accusations and accusers is that one commenter accused someone of being both a troll and a bot. Twenty-four (1.71 percent) of the comments from Politico contained some form of accusation, while only five (0.35 percent) from the Washington Post and one (0.09 percent) from the New York Times contained accusations. In other words, Politico had far more accusations of trolling in its comment sections than the other two news sites. This could be because Politico is the only Web site of the three that uses a Facebook plug-in for its comment sections. However, the observed difference could also be explained by demographic differences between the commenters on the different news sites. It is also worth noting that both the Washington Post and the New York Times have several barriers to commenting that Politico does not. Both papers require users to create a dedicated account on their Web sites to be able to comment. In addition, they have subscription plans that limit the activity of non-paying readers. This may create a barrier for trolls, which may inadvertently reduce the number of accusations of trolling.

The accusations of trolling showed great variation in length, argumentative and rhetorical style, and temperament. Some were short — sometimes a single word accusing someone of being a troll or a bot. Others were longer and argumentative. At times, an accuser seemed agitated by the perceived trolling, while other accusers seemed to find amusement in it. Some comments were directed at the person being accused of trolling, while others were directed at other commenters.

 

Table 1: Overview of the results, where each row represents a case of someone being accused of trolling or of being a bot.
Note: *Political leaning in this context refers to whether the accused commenter had expressed views in favor of a political side in American politics and may not reflect the right-to-left political spectrum of other countries and regions.
Case | Number of accusations | Political leaning of accused* | Continued discussion after accusation | Accused replies or continues to participate in discussion | Words used about the accused commenter | Type of response (Hardaker)
1 | 1 | Left | Yes | No | Troll/Bot | 3
2 | 1 | Left | Yes | No | Bot | 3
3 | 1 | Right | Yes | Yes | Russian troll | 3
4 | 1 | Left | Yes | No | Bot | 3
5 | 1 | Left | Yes | No | Troll | 6
6 | 1 | Right | No | No | Troll | 5
7 | 1 | Right | Yes | No | Troll | 3
8 | 1 | Right | Yes | Yes | Russian troll | 4
9 | 1 | Right | Yes | No | Russian troll | 3
10 | 1 | Right | Yes | Yes | Troll | 3
11 | 2 | Left | Yes | No | Russian troll/Troll | 6, 4
12 | 1 | Right | Yes | Yes | Troll | 4
13 | 1 | Right | Yes | No | Troll | 5
14 | 1 | Right | Yes | No | Russian troll | 3
15 | 5 | Right | Yes | Yes | 3x Troll/2x Bot | 5, 5, 3, 3, 3
16 | 1 | Right | Yes | No | Troll | 6
17 | 1 | Right | Yes | No | Troll | 4
18 | 1 | Right | Yes | Yes | Troll | 3
19 | 1 | Right | No | No | Russian troll | 5
20 | 2 | Right | Yes | No | Russian troll/Troll | 3, 6
21 | 1 | Right | No | No | Troll | 3
22 | 1 | Right | No | No | Troll | 3
23 | 1 | Right | No | No | Russian troll | 5
24 | 1 | Right | No | No | Troll | 4
Total | 30 | R: 19, L: 5 | Yes: 18, No: 6 | Yes: 6, No: 18 | T: 18, RT: 8, B: 5 |

 

Trolls, Russian trolls, and bots

As can be seen in Table 1, 18 of the accusations made were accusations of trolling. In addition to this, there were eight accusations of someone being specifically a Russian troll, and five accusations of someone being a bot. As illustrated by the word cloud in Figure 1, the most common accusation was when a commenter used the word troll. Sometimes this was the only word found in a comment, but mostly it was used within a sentence. It was typical for the accuser to address the accusation to other commenters. A typical example is when an accuser writes that other commenters should “ignore this troll”, or “he is obviously a troll”. Other times, the accuser directed the comment at the accused. Examples of such accusations are “Get lost, troll” or “Do you really believe that people believe your crap, troll?” On two occasions, the word troll was used in combination with a hashtag: #faketrumptroll and #purgethetrolls.

 

 
Figure 1: Word cloud of the most used words in comments accusing someone of trolling.

 

Accusations of someone being a Russian troll followed a similar pattern as described above, but with some added rhetoric. These accusations were sometimes used in combination with other derogatory rhetoric. One commenter was accused of being a “Russian teenage troll” that ate too many potato chips. Another was called a sock puppet of the Russian government, and yet another was called a whore in addition to being accused of being a Russian troll.

Accusations of someone being a bot were made in a slightly different way. Notably, none of these accusations were directed at the accused commenters. Instead, they were seemingly written to inform other commenters that the accused was believed to be a bot, along the lines of “Do not engage with this commenter because it’s a bot”. Following the logic of the person making the accusation, this makes sense: there would be no point in calling out a bot by engaging with it as if it were a real human being.

Types of accusations

The types of accusations were investigated using Hardaker’s (2015) response types. Most replies made to commenters who were accused of trolling would be categorized as sincere engagement, as most commenters argued with the accused troll without making any accusations. The accusations of trolling themselves, however, fell within four of Hardaker’s response types: exposing the troller to the rest of the group, challenging the troller, critiquing the troller, and mocking or parodying the trolling attempt (Table 2).

 

Table 2: Results of coding using Hardaker’s (2015) types of responses to perceived trolling.
Response type | Count
(3) Exposing the troller to the rest of the group | 15
(4) Challenging the troller directly or indirectly | 5
(5) Critiquing the effectiveness, success, or quality of the troller | 6
(6) Mocking or parodying the trolling attempt | 4

 

As can be seen in Table 2, half of the accusations of trolling fell within category 3 of Hardaker’s types of responses to perceived trolling. It seems the most common way to accuse someone of trolling is to inform the rest of the commenters about the perceived troll. In one case, the accuser wrote that the accused was “probably” just a bot or a troll, and then went on to explain how one can tell if a commenter is not being sincere. The accuser argued that a troll’s goal is to cause division and anger in the comment sections. In another case, the accuser wrote that the accused was using a fake Facebook account, arguing that this indicated they must be a Russian troll.

Another observed type of response to perceived trolling was when accusers challenged the accused commenter. In one case, the accuser told a commenter, whom he identified as a Russian troll, to “get lost”. In another case, the accused was called a “coward” and a “zero” because he was trolling. There were also several instances of the accuser saying that the accused commenter “sounds like a troll”.

Several accusers used critique to call out perceived trolling. One such case involved the accuser calling a commenter a Russian troll because he had misunderstood the difference between American 800 and 900 phone numbers. Telephone numbers starting with the prefix 800 are usually toll-free in the U.S., while 900 numbers are premium-rate numbers, because additional services are provided (U.S. Federal Communications Commission, 2019). The accuser argued that any real American should know the difference between the two, and this was sufficient for the accuser to believe that the accused was not a real American. In another such case, an accuser thought that the accused commenter’s English skills were not good enough for him or her to be a native English speaker, and concluded that the commenter must therefore be a Russian troll. The remaining accusations in this category simply contained phrases such as “low-effort trolling” or “you trolls are becoming pathetic”.

As mocking can be used for criticism, the final observed type of response, mocking or parodying the trolling attempt, is similar to the previous category. But in these cases, the accuser seemed to find humorous enjoyment in the perceived attempts at trolling. One such commenter wrote “hahaha” before asking if the troll expected people to believe him. In another case, the accuser told the accused that he found the trolling hilarious, before asking the commenter to go back to troll school.

Vulgar, divisive, or conspiratorial comments

Of the 24 comments that were met with accusations of trolling, about half (n=13) were divisive, conspiratorial, or contained vulgarity. These were comments written by commenters who showed an attitude of non-cooperation and divisiveness. The remaining comments were more argumentative or informative but had a clear political leaning.

Vulgar comments were easily identified because they contained some form of vulgar language, usually derogatory curse words directed at other commenters or political figures. Divisive comments were those that displayed hostility and non-cooperativeness. An example of this was when a commenter accused of trolling wrote a very hostile comment about Canada. In another case, the accused commenter wrote that Trump supporters were psychopaths.

Conspiratorial comments were written by commenters sharing conspiracy theories, such as when one commenter accused of trolling wrote about collusion between the FBI and the Democratic Party. Another commenter accused of trolling wrote that President Obama had given Iran nuclear weapons, and even specified an exact number of weapons that had been given.

Political differences

Commenters accusing others of trolling rarely explained why they made accusations. Five of the accusers in this study specifically explained that the accused person’s Facebook account seemed fake. In addition, as noted above, two accusations were based on poor English skills or not understanding a particular part of American culture. Most accusations, however, contained no such explanation. In combination with the fact that half of the accused commenters did not write obviously divisive or vulgar comments, it seems that many accusations of trolling were made because of political disagreements. Most accusations of trolling were made towards commenters expressing politically right-wing views, i.e., commenters who showed support for Republicans and/or criticized Democrats. Twenty-five accusations were made by left-wing commenters, directed towards 19 right-wing commenters. In comparison, only five accusations of trolling were made by right-wing commenters.

How accusations of trolling affect the debate

Accusations of trolling were rarely replied to by the accused person or other commenters. The general trend seems to be that such accusations are ignored by other commenters, as the discussion tends to continue as before without any further comments being made by the accused or the accuser. This suggests that accusations of trolling do not have much effect on the discussion. Only seven times did a person being accused of trolling continue the discussion. Of these, only one addressed the actual accusation of trolling. This commenter, who had been accused of being a Russian troll, tagged the accuser in a response where he wrote “Nice try”.

 

++++++++++

Discussion

The current study had three goals: to analyze comments that were accused of being written by trolls or bots, to analyze how such accusations are made, and to investigate how these accusations are responded to. While the scope of this study is limited, it does show a pattern that allows for the following conclusions to be made about the data:

  1. Most accusations of trolling are made by left-wing commenters and directed towards commenters expressing right-wing views. This suggests that there is a political divide between those accusing and those being accused of trolling. There may be several explanations for this finding. Firstly, it may be that right-wing commenters more often behave in a way that elicits accusations of trolling. However, as I will discuss in more detail, this may not be true in all cases, because only half of the accused commenters wrote divisive or vulgar comments. Secondly, it may be that the mainstream attention given to trolling influences commenters’ expectations about who could be a troll. The possibility of foreign influence on the 2016 U.S. presidential election has been given much mainstream attention, and much of this coverage has focused on trolling and fake news possibly helping the right-wing candidate to win. This could make left-wing commenters more suspicious about the intentions of right-wing commenters and more likely to accuse them of trolling.

  2. It is common for accusations of trolling to be motivated by political differences. Half of the accused commenters wrote comments that were clearly divisive, conspiratorial, or vulgar. While this may seem like trolling behavior, it is worth noting that many similar comments were written by commenters who were not accused of trolling. The other half of the accused commenters wrote argumentative or informative comments expressing their opinions in a way that might be expected in a comment section. In addition, many accusers also argued against the views of the accused commenters, and only a few of them specifically made claims of fake profiles when explaining the reason for their accusations. It seems therefore that most accusations of trolling were made because of a political disagreement. This finding is particularly troubling, because it suggests that people with a certain political viewpoint are at higher risk of having their arguments dismissed with accusations of trolling. When arguments in favor of a right-wing opinion are dismissed without being challenged by opposing argumentation, the deliberative value of a given debate suffers.

  3. Most of the commenters accusing someone of trolling will either challenge the accused troll’s arguments, mock or critique the troll, or warn other commenters about the presence of a troll. This conclusion was made using Hardaker’s response types to perceived trolling [12]. Four of her seven categories were identified in the current study: 1) Exposing the troller to the rest of the group; 2) Challenging the troller directly or indirectly; 3) Critiquing the effectiveness, success, or quality of the troller; and, 4) Mocking or parodying the trolling attempt. It is difficult to say why only four of the seven response types were identified in this study. It could be that there was too little data for the remaining three categories to appear, that there were coding differences between the studies, that there were differences between the platforms being studied, or that Hardaker created too many and too narrow categories. It should also be noted that some of Hardaker’s response types can overlap or blend together. An example of this is the response type Critiquing the effectiveness, success, or quality of the troll. Such critique can often be expressed by mocking or parodying the trolling attempt — which is a different response type.

  4. Accusations of trolling are rarely responded to by the accused person or other commenters. In only a few cases did the accused person continue the discussion, and in only one of them did the accused person confront the accusation of trolling. It would be tempting to suggest that the lack of further commenting from people accused of trolling indicates that the accusation discouraged them from further participation. However, there are several other possibilities: perhaps they never intended to write more than the one comment, perhaps they never even saw the accusation, or perhaps they were indeed trolls. What is certain, however, is that other commenters mostly ignored accusations of trolling. Even when the accuser specifically encouraged them to ignore the perceived troll, the discussion tended to continue as before. This suggests that accusing someone of being a troll does not discourage other commenters from engaging with the troll, and that such accusations have little effect on the debate. This would mean that false accusations of trolling are mostly ignored, but also that any legitimate accusations will not discourage others from engaging with the troll.

  5. Accusations of trolling were more common on Politico. Politico had by far the most accusations of trolling in its comment section. Politico is the only one of the three news sites that uses a Facebook plug-in as a comment section. This means that anyone with a Facebook account can easily comment without having to create a separate account, as opposed to the Washington Post and the New York Times, which use their own comment systems that require the creation of a separate account. While this is only speculation, it could be that the lower barrier to commenting on Politico leads to more actual trolling and more accusations of trolling. Furthermore, fake Facebook accounts have been discussed in mainstream media (Kottasová, 2017; Shane and Goel, 2017; Weise, 2017), which could lead to distrust in commenters using Facebook for identification. It is worth noting, however, that this study has only investigated three news sites and cannot make definitive conclusions about how the type of comment section being used affects the frequency of accusations of trolling.

Limitations of the study

The results of this study do not reveal a complete picture of the topic of trolling accusations in newspaper comment sections. The relatively few cases do not allow for broad conclusions to be made. A more quantitative study, using content analysis to categorize and quantify different types of accusations and responses, could shed further light on this topic. Another problem with the current study is that it has been difficult to validate the identities of people accused of having fake Facebook accounts. This is because of the technical and ethical limitations of the study, which required the data to be anonymized in such a way that further investigations into the commenters themselves were impossible. Finally, it is worth mentioning that data from comment sections may be incomplete. Some comments that could have shed more light on the subject may have been deleted, either by the commenters themselves or by moderators.

 

++++++++++

Conclusion

This study has explored a topic of research that has received little previous attention. As discussed in the literature review, while trolling has been a topic of research, little attention has been given to accusations of trolling. I have theorized that the increased mainstream attention to topics like trolling, foreign political influence, and bots can lead to individuals becoming more aware of these concepts. This in turn could lead to them identifying certain behaviors as trolling, whether or not they are caused by actual trolls. Newspaper comment sections, where strangers engage in political debates, are an arena where this can happen. Political disagreements may lead to accusations of trolling, and such accusations could be used as a rhetorical tool to dismiss opposing arguments. If this is the case, it constitutes a challenge to the deliberative value of comment sections. By using real-world examples of comments from Politico, Washington Post, and New York Times, this study has analyzed accusations of trolling and how such accusations affect the debate.

This study has uncovered trends that have led to several conclusions about how accusations of trolling affect the debate in newspaper comment sections. Accusations of trolling often targeted right-wing commenters, were made because of political disagreements, were rarely responded to by the accused, and were mostly ignored by other commenters as the debates continued. If these conclusions were confirmed by further research, they would further illuminate a topic of public interest: the democratic value of comment sections. If comment sections are to serve as a forum for public debate, inclusion and openness should be valued. The activities of trolls, real or imaginary, and how they are responded to, can affect how people communicate in comment sections, the trust between commenters, and the inclusion of all those who want to participate.

 

About the author

Magnus Knustad is a Ph.D. candidate in digital culture at the University of Bergen (Universitetet i Bergen), researching comment sections on news articles.
E-mail: magnus [dot] knustad [at] uib [dot] no

 

Notes

1. Gilovich, et al., 2016, p. 137.

2. Gilovich, et al., 2016, p. 139.

3. Hardaker, 2010, p. 223.

4. Hardaker, 2015, p. 202.

5. Bastos and Mercea, 2018, p. 2.

6. Hardaker, 2015, p. 202.

7. Bastos and Mercea, 2018, p. 2.

8. Rheingold, 2012, p. 2.

9. Gilovich, et al., 2016, p. 239.

10. Hardaker, 2015, p. 223.

11. Riffe, et al., 2014, pp. 85–86.

12. Hardaker, 2015, p. 223.

 

References

M. Artime, 2016. “Angry and alone: Demographic characteristics of those who post to online comment sections,” Social Sciences, volume 5, number 4.
doi: https://doi.org/10.3390/socsci5040068, accessed 17 July 2020.

M.T. Bastos and D. Mercea, 2019. “The Brexit botnet and user-generated hyperpartisan news,” Social Science Computer Review, volume 37, number 1, pp. 38–54.
doi: https://doi.org/10.1177/0894439317734157, accessed 17 July 2020.

M. Bastos and D. Mercea, 2018. “The public accountability of social platforms: Lessons from a study on bots and trolls in the Brexit campaign,” Philosophical Transactions of the Royal Society A, volume 376, number 2128 (13 September).
doi: https://doi.org/10.1098/rsta.2018.0003, accessed 17 July 2020.

D.A. Broniatowski, A.M. Jamison, S. Qi, L. Alkulaib, T. Chen, A. Benton, S.C. Quinn, and M. Dredze, 2018. “Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate,” American Journal of Public Health, volume 108, number 10, pp. 1,378–1,384.
doi: https://doi.org/10.2105/AJPH.2018.304567, accessed 17 July 2020.

E.E. Buckels, P.D. Trapnell, and D.L. Paulhus, 2014. “Trolls just want to have fun,” Personality and Individual Differences, volume 67, pp. 97–102.
doi: https://doi.org/10.1016/j.paid.2014.01.016, accessed 17 July 2020.

E.E. Buckels, P.D. Trapnell, T. Andjelovic, and D.L. Paulhus, 2019. “Internet trolling and everyday sadism: Parallel effects on pain perception and moral judgment,” Journal of Personality, volume 87, number 2, pp. 328–340.
doi: https://doi.org/10.1111/jopy.12393, accessed 17 July 2020.

E. Castile, 2016. “Watch out for this kind of troll,” Bustle (26 February), at https://www.bustle.com/articles/144447-what-is-concern-trolling-watch-out-for-this-subtle-form-of-shaming, accessed 17 July 2020.

M. Dimock, 2019. “An update on our research into trust, facts and democracy,” Pew Research Center (5 June), at https://www.pewresearch.org/2019/06/05/an-update-on-our-research-into-trust-facts-and-democracy/, accessed 17 July 2020.

European Commission, 2016. “Digital Skills at the core of the new Skills Agenda for Europe” (10 June), at https://ec.europa.eu/digital-single-market/en/news/digital-skills-core-new-skills-agenda-europe, accessed 17 July 2020.

K. Finley, 2015. “A brief history of the end of the comments,” Wired (8 October), at https://www.wired.com/2015/10/brief-history-of-the-demise-of-the-comments-timeline/, accessed 17 July 2020.

T. Gilovich, D. Keltner, S. Chen, and R.E. Nisbett, 2016. Social psychology. Fourth edition. New York: W.W. Norton.

J. Gonçalves, 2015. “A peaceful pyramid? Hierarchy and anonymity in newspaper comment sections,” Observatorio, volume 9, number 4, pp. 1–13, and at http://www.scielo.mec.pt/scielo.php?lng=en, accessed 17 July 2020.

T. Graham and S. Wright, 2015. “A tale of two stories from ‘Below the Line’: Comment fields at the Guardian,” International Journal of Press/Politics, volume 20, number 3, pp. 317–338.
doi: https://doi.org/10.1177/1940161215581926, accessed 17 July 2020.

C. Hardaker, 2015. “‘I refuse to respond to this obvious troll’: An overview of responses to (perceived) trolling,” Corpora, volume 10, number 2, pp. 201–229.
doi: https://doi.org/10.3366/cor.2015.0074, accessed 17 July 2020.

C. Hardaker, 2010. “Trolling in asynchronous computer-mediated communication: From user discussions to academic definitions,” Journal of Politeness Research, volume 6, number 2.
doi: https://doi.org/10.1515/jplr.2010.011, accessed 17 July 2020.

K.A. Ihlebæk, A.S. Løvlie, and H. Mainsah, 2013. “Mer åpenhet, mer kontroll: Håndteringen av nettdebatten etter 22,” Norsk Medietidsskrift, volume 20, number 3, pp. 223–240, and at https://www.idunn.no/nmt/2013/03/mer_aapenhet_mer_kontroll_-_haandteringen_av_nettdebatten_e, accessed 17 July 2020.

G.H. Karlsen, 2019. “Divide and rule: Ten lessons about Russian political influence activities in Europe,” Palgrave Communications, volume 5, article number 19.
doi: https://doi.org/10.1057/s41599-019-0227-8, accessed 17 July 2020.

E. Kerr and C.A.L. Lee, 2019. “Trolls maintained: Baiting technological infrastructures of informational justice,” Information, Communication & Society (28 May).
doi: https://doi.org/10.1080/1369118X.2019.1623903, accessed 17 July 2020.

I. Kottasová, 2017. “Facebook targets 30,000 fake accounts in France,” CNN (21 April), at https://money.cnn.com/2017/04/14/media/facebook-fake-news-france-election/, accessed 17 July 2020.

C. Lebeuf, M.-A. Storey, and A. Zagalsky, 2018. “Software bots,” IEEE Software, volume 35, number 1, pp. 18–23.
doi: https://doi.org/10.1109/MS.2017.4541027, accessed 17 July 2020.

D. Linvill and P. Warren, 2019. “That uplifting tweet you just shared? A Russian troll sent it,” Rolling Stone (25 November), at https://www.rollingstone.com/politics/politics-features/russia-troll-2020-election-interference-twitter-916482/, accessed 17 July 2020.

A. Markham and E. Buchanan, 2012. “Ethical decision-making and Internet research: Recommendations from the AoIR Ethics Working Committee,” version 2.0, at https://aoir.org/reports/ethics2.pdf, accessed 17 July 2020.

M. Neururer, S. Schlögl, L. Brinkschulte, and A. Groth, 2018. “Perceptions on authenticity in chat bots,” Multimodal Technologies and Interaction, volume 2, number 3.
doi: https://doi.org/10.3390/mti2030060, accessed 17 July 2020.

J.M. Reagle, 2015. Reading the comments: Likers, haters, and manipulators at the bottom of the Web. Cambridge, Mass.: MIT Press.

H. Rheingold, 2012. Net smart: How to thrive online. Cambridge, Mass.: MIT Press.

D. Riffe, S. Lacy, and F. Fico, 2014. Analyzing media messages: Using quantitative content analysis in research. Third edition. New York: Routledge.

I. Rowe, 2015. “Civility 2.0: A comparative analysis of incivility in online political discussion,” Information, Communication & Society, volume 18, number 2, pp. 121–138.
doi: https://doi.org/10.1080/1369118X.2014.940365, accessed 17 July 2020.

P. Shachaf and N. Hara, 2010. “Beyond vandalism: Wikipedia trolls,” Journal of Information Science, volume 36, number 3, pp. 357–370.
doi: https://doi.org/10.1177/0165551510365390, accessed 17 July 2020.

S. Shane and V. Goel, 2017. “Fake Russian Facebook accounts bought $100,000 in political ads,” New York Times (6 September), at https://www.nytimes.com/2017/09/06/technology/facebook-russian-political-ads.html, accessed 17 July 2020.

J. Sonderman, 2011. “News sites using Facebook Comments see higher quality discussion, more referrals,” Poynter (18 August), at https://www.poynter.org/reporting-editing/2011/news-sites-using-facebook-comments-see-higher-quality-discussion-more-referrals/, accessed 17 July 2020.

L.G. Stewart, A. Arif, and K. Starbird, 2018. “Examining trolls and polarization with a retweet network,” Proceedings of WSDM Workshop on Misinformation and Misbehavior Mining on the Web (MIS2), at https://faculty.washington.edu/kstarbi/examining-trolls-polarization.pdf, accessed 17 July 2020.

N.J. Stroud, A. Muddiman, and J.M. Scacco, 2017. “Like, recommend, or respect? Altering political behavior in news comment sections,” New Media & Society, volume 19, number 11, pp. 1,727–1,743.
doi: https://doi.org/10.1177/1461444816642420, accessed 17 July 2020.

A. Sulleyman, 2017. “Bots ‘break’ captcha, making the most annoying thing on the Internet pointless,” Independent (31 October), at https://www.independent.co.uk/life-style/gadgets-and-tech/news/captcha-puzzles-recaptcha-solve-problems-vicarious-bots-artificial-intelligence-a8029401.html, accessed 17 July 2020.

F. Toepfl and E. Piwoni, 2015. “Public spheres in interaction: Comment sections of news Websites as counterpublic spheres,” Journal of Communication, volume 65, number 3, pp. 465–488.
doi: https://doi.org/10.1111/jcom.12156, accessed 17 July 2020.

U.S. Federal Communications Commission, 2019. “Pay-per-call information services” (31 December), at https://www.fcc.gov/consumers/guides/faqs-900-number-pay-call-services-and-fees, accessed 17 July 2020.

H. Wang, 1996. “Flaming: More than a necessary evil for academic mailing lists,” Electronic Journal of Communication, volume 6, number 1, at http://www.cios.org/EJCPUBLIC/006/1/00612.HTML, accessed 17 July 2020.

E. Weise, 2017. “Russian fake accounts showed posts to 126 million Facebook users,” USA Today (1 November), at https://eu.usatoday.com/story/tech/2017/10/30/russian-fake-accounts-showed-posts-126-million-facebook-users/815342001/, accessed 17 July 2020.

 


Editorial history

Received 9 September 2019; revised 7 June 2020; revised 9 June 2020; accepted 28 June 2020.


Creative Commons license
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Get lost, troll: How accusations of trolling in newspaper comment sections affect the debate
by Magnus Knustad.
First Monday, Volume 25, Number 8 - 3 August 2020
https://firstmonday.org/ojs/index.php/fm/article/download/10270/9576
doi: http://dx.doi.org/10.5210/fm.v25i8.10270