
Listening to lies and legitimacy online: A proposal for digital rhetorical listening
by L. Corinne Jones



Abstract
As people scream past each other in an increasingly polarized public sphere, fake news emerges as a problem for reception on the Internet. While scholars have posited rhetorical listening as a strategy to bridge these differences in off-line spaces, it has not been fully explored online. Online spaces are becoming increasingly salient and important to theorize, though, since polarized groups often communicate (and miscommunicate) on the Internet. Using the fake news that circulated in the wake of the shooting at Marjory Stoneman Douglas High School in Parkland, Florida as a case study, I demonstrate some of the complications for rhetorical listening that arise through the algorithms, interfaces, and performances that perpetuate the spread of fake news. As such, I call for more robust digital listening practices and theories that account for the complications of the Internet. I conclude that individuals, platforms, and institutions can all actively promote digital rhetorical listening practices. However, we also need to think about motivations other than ignorance for spreading fake news.

Contents

Introduction
Background
Defining rhetorical listening
Methodology
Case studies
Conclusions and future directions

 


 

Introduction

There is little debate that the United States is becoming an increasingly polarized society. The Pew Research Center has documented political polarization along ideological lines since 2014 (Pew Research Center, 2016, 2014), and these divisions have only deepened after the 2016 election. As this schism grows more vitriolic, people on either side of this divide increasingly talk (or scream) past one another, with neither side listening or weighing the content of what the other has to say.

One effect of this polarization is fake news, which fabricates and misrepresents information to support one side of an ideological debate. Though scholars have defined fake news in different ways, I draw from Tandoc, et al.’s (2018) definitions of fabrication, photo manipulation, and propaganda. Briefly, they defined fabrication as articles intended to deceive audiences by creating a veneer of credibility through publishing in mass media genres [1]. Photo manipulation refers to altering images to create false and misleading narratives [2], and propaganda refers to news that is based in factual events but invokes ideological biases meant to persuade, rather than inform [3]. In all of these manifestations, the intentions of the authors are deceptive [4].

Polarization and the attendant disinformation that perpetuates it call for robust and reflective receptive practices. As one receptive tactic, scholars have theorized “rhetorical listening” in off-line spaces; however, the polarization of today’s political landscape often plays out in digital spaces, and therefore “listening” also needs to be understood as a digital practice. Digital rhetors listen to performances of identities and arguments through algorithms and interfaces in digital spaces, but the circulation of fake news challenges these listening practices, as rhetors must choose to what they listen and how they respond. Using fake news that circulated in the wake of the shooting at Marjory Stoneman Douglas High School in Parkland, Florida, I demonstrate some of the complications that arise for emerging digital listening practices, as digital listeners listened (or did not listen) to performative arguments and identities through algorithms and interfaces.

I first outline my case study of the shooting at Marjory Stoneman Douglas High School and give a brief background of rhetorical listening. I then explore how three public figures engaged in non-rhetorical listening by perpetuating and spreading fake news. Specifically, I argue that rhetorical listening online may be complicated by algorithms, interfaces, and identity performances. The aim of this paper is to begin to theorize digital rhetorical listening practices and to outline, through a brief case study, ways that digital contexts complicate existing listening practices. I argue that YouTube’s algorithms and interface promote a valuation of popularity over content. Furthermore, I argue that ambiguous meanings online complicate listening and that digital photo manipulation and its algorithmically-driven circulation on platforms like Twitter can perpetuate stereotypes about identities. These photo manipulations enable people to set up digitally Photoshopped Person arguments, in which debaters knock down performative arguments that did not exist in the first place. Finally, I contend that individual users, platforms, and institutions can take steps to create online environments more conducive to listening, but that we also need to think more critically about the impetus behind the spread of fake news online.

 

++++++++++

Background

On 14 February 2018, a 19-year-old armed with an AR-15 rifle killed 17 people (Wamsley and Gonzalez, 2018) and wounded at least 17 others (Fleshler and Valys, 2018) at Marjory Stoneman Douglas High School in Parkland, in Broward County, Florida. Despite a history of “red flags” (Rose and Booker, 2018), the former student of the school had purchased the assault-style weapon legally (Allen, 2018).

After the massacre, students used social media platforms to start the March For Our Lives movement. A week after the shooting, on 21 February, students met with lawmakers in Tallahassee with a diverse list of demands (Allen, 2018), and on 14 March students from 3,130 schools across the nation held a 17-minute walkout in commemoration of the 17 victims to call for legislative action (Domonoske, 2018). Activists held more marches across the country on 24 March. Because the student population was diverse, demands and proposed solutions differed (Domonoske, 2018; Raphelson and Bowman, 2018; Brady, 2018), and some pointed out the marginalization and silencing of minority victims of gun violence (Garcia-Navarro, 2018). Generally speaking, though, these marches were organized to end gun violence in schools and communities.

In response to the March For Our Lives movement, then-Florida Governor Rick Scott signed controversial legislation into law on 9 March, tightening gun restrictions but allowing school personnel to be armed (McNulty, et al., 2018; Booker, 2018). As van Dijck (2013) points out, social media co-evolves with social, cultural, and political processes. In this case, Scott’s listening and response to the students and their digitally-mediated movement point to these overlaps; Scott was also negotiating his political relationships with the conservative National Rifle Association (NRA) and with President Trump’s insistence that teachers should be armed. However, other individuals failed to rhetorically listen to the activists and chose instead to listen only to the fabricated, manipulated, and propagandistic fake news that circulated after the event.

According to an investigative report by the Washington Post, 47 minutes after the shooting, right-wing extremists began conspiring to undermine the media coverage of the event on an anonymous chat board on the Web site 8chan (Timberg and Harwell, 2018). They sowed false stories claiming that the student activists were “crisis actors,” or trained actors who played false parts to advance a specific political agenda (Timberg and Harwell, 2018). Their disinformation campaign spread to conservative Web sites such as Gateway Pundit, but also to more mainstream social media sites including Facebook, Twitter, and YouTube (Timberg and Harwell, 2018).

These fabricators targeted several students personally, connecting their attacks to the FBI and the investigation into Donald Trump’s 2016 presidential election campaign (Timberg and Harwell, 2018). They also altered the shooter’s image in Photoshop to make him look “non-white” in order to promote racist ideologies (Timberg and Harwell, 2018).

People also created and circulated altered images of student activist Emma González after the March For Our Lives rally in Washington, D.C. González helped organize the rally, where she delivered a powerful speech that garnered national attention (Reilly, 2018). Just as the 8chan users altered images of the shooter to promote racist ideologies, they also latched onto González’s non-heteronormative identities as a point of attack. González self-identifies as a bisexual Cuban woman (Morales, 2018). Far-right U.S. Second Amendment (gun rights) activists altered an image of González that was originally shared by Teen Vogue. The original image featured González tearing a shooting target, but the altered image made it look like she was tearing the U.S. Constitution. Once the image circulated, social media users attacked González along racist, sexist, and homophobic lines (Reilly, 2018).

As these bigoted and false accusations spread across social media sites, people reported and took down many of the fake news articles and images. However, users shared a single false article more than 111,000 times on Facebook (Arkin and Popken, 2018). Additionally, on 21 February, a conspiracy video attacking activist David Hogg as a crisis actor was the top trending clip on YouTube (Arkin and Popken, 2018).

Public figures who were embroiled in existing power struggles in political, social, and cultural domains also listened to these messages. In this paper, I examine the failures of Jerome Corsi, Alex Jones, and Adam Baldwin to rhetorically listen to the students. Though I maintain that they had the agency to act otherwise, I argue that they failed to listen rhetorically through the algorithms, interfaces, and performances that complicate rhetorical listening in digital spaces.

 

++++++++++

Defining rhetorical listening

Krista Ratcliffe drew from feminist and critical race studies to develop a theory of “rhetorical listening,” which she defined as a “trope for interpretive invention” and “code of cross-cultural conduct” [5]. In her conclusion, Ratcliffe highlighted some important features of rhetorical listening on which I will draw for this paper.

First, rhetorical listening implies a desire to genuinely understand. However, “understanding” is more than Burkean identification. Broadly speaking, Kenneth Burke sought to bridge differences and saw identification as a “consubstantiation” in which one person identifies themselves with another [6]. Ratcliffe criticized “identification” for eliding important differences between lived experiences [7], and claimed that understanding requires that people listen with intent to understand a speaker’s motivations and the cultural logics that underlie both the claim of their argument and the rhetoric they employ to make that argument [8].

Secondly, rhetorical listening is dialogic. Ratcliffe claimed that learning requires listening to people and ideas with which we do not agree. Likewise, Candace Spigelman claimed that listening uncritically to narratives is ill-advised [9]. For both Spigelman and Ratcliffe, rhetorical listening is not understanding and then passively accepting what a speaker says; instead, it creates a space for dialogue.

As with any productive dialogue though, speakers must be self-reflexively aware of their own epistemological positions, pointing to a third key feature of rhetorical listening: self-reflexivity. Listeners must ask themselves about their own self-interested motivations and why they hear what they hear [10]. Similarly, Julie Jung extended Ratcliffe’s call for self-reflection and connected self-reflexive listening with revision. For Jung, self-reflection requires listeners to engage in a process of continually acknowledging their own “partial worldviews” to revise their own understandings [11]. As such, listening is a reflective revisionary practice that acknowledges the limits of epistemology.

Finally, listening is active and implies responsibility. Rather than passive, Ratcliffe posited rhetorical listening as “interpretive invention” [12], implying that listening is both receptive and creative. Listening is not just a single action, but an ongoing process in which listeners choose to take a stance of openness. Since rhetorical listening is an active process in which listeners have agency, Ratcliffe and Jung both claimed that rhetorical listening implies responsibility [13].

In addition to these characteristics, Jung drew from Ratcliffe to offer a useful outline of what rhetorical listening is not. She claimed that rhetorical listening is not a matter of “easy” responses, which prove to a speaker that we have heard them but not necessarily that we have listened to them. These “easy” responses are self-interested and include:

  1. Validation (“I hear you”);
  2. Appropriation (employing a text for one’s own ends);
  3. Burkean identification (smoothing over differences); and
  4. Agreement (only confirming one’s own view of reality) [14].

These “easy” responses are important because they point to some distinctions between rhetorical listening and the ways that people feign listening and cast off responsibility. However, I acknowledge that rather than a binary differentiation between listening and not listening, there are inevitably gray areas between the two.

While listening as a rhetorical art has been theorized and examined in off-line spaces, as people increasingly create content, communicate, and consume information in digital spaces that coalesce with existing power dynamics [15], we must rethink rhetorical listening for these evolving rhetorical ecologies. Specifically, digital rhetorical listening must account for interfaces, algorithms, and digital performances. These digital features intersect to complicate the listening practices that people developed in off-line spaces.

 

++++++++++

Methodology

Considering the deepening social, political, and economic divides in the United States and across the globe, it is important for citizens, scholars, and researchers to think about how people listen to each other in the digital ecosystems in which they are communicating. This is especially important to consider as fake news clearly undermines listening practices. As such, broadly speaking, my goal is to explore how fake news in digital ecosystems complicates rhetorical listening practices and how we can begin to formulate rhetorical listening practices for the Internet.

Specifically, I examine how the Internet complicates rhetorical listening using a rhetorical case study of the disinformation spread in the wake of the shooting at Marjory Stoneman Douglas High School. As such, my research question is: how do these specific instances of fake news show emerging complications for rhetorical listening in online contexts? Furthermore, how might our practices shift to accommodate these complications?

As James Porter and Heidi McKee (2009) contended, Internet research is too complex and contextual for any one approach to be appropriate for all circumstances. Additionally, Phillips and Milner (2017) claimed that because the Internet is an ambivalent, but not indifferent, space [16], researchers should engage ambiguity and refrain from ascribing intentionality, since what looks like one behavior may actually be another [17]. Rather, they asserted that researchers should consider each act individually and on its own terms [18].

As such, I use a qualitative research method to allow for flexible and inductive analysis [19] that can adapt to these complexities. Case studies are appropriate because they are useful for specific questions like mine [20], and I chose to conduct case studies to limit the scope of my analysis within this complex space. Though their findings are difficult to generalize, case studies allow a depth not attainable in large quantitative studies, and I use this affordance to more fully explore the complexities of online rhetorical listening in practice, rather than focus on a large, general scale. Even so, these specific instances can point to larger emerging paradigmatic issues.

I conduct rhetorical analyses of these case studies, examining interfaces, algorithmic results, and a manipulated photograph. I analyze how these examples demonstrate Jung’s categories of non-rhetorical listening in order to show how these features complicate rhetorical listening online.

I chose these two instances of fake news for how widely they spread and the amount of media attention they received. I also chose them for their differences: the first story circulated right after the event, while the second image circulated after the March For Our Lives, when audiences already knew about the previous fake news about the students. Finally, the modes of falsification differ: the first makes false claims and uses video clips taken out of context, while the second makes false claims supported by an altered image.

I examine these stories as they were spread by Alex Jones, Jerome Corsi, and Adam Baldwin because these individuals shared the fake news widely and because I can access this information through publicly available channels, rather than through semi-private social media profiles. In alignment with McKee and Porter (2009), I view the Internet as a place where people communicate, rather than as a public text-based space [21]. As such, I derive my analysis from clearly public information in news sources and in profile information (follower counts, “Related Channels” feeds, and video descriptions posted by the video creators rather than by everyday users). I do not quote individual everyday users, to respect their privacy, as not all online users view their online communication as public, nor can I guarantee that these users know how to manage their privacy settings [22]. I use follower counts from the publicly available profiles of high-profile figures who clearly use these platforms for broadcasting.

Additionally, in alignment with Navar-Gill and Stanfill (2018), though I recognize that readers can go online to find the Web sites that peddled fake news, I am making an ethical choice to omit the links in this article.

One obvious limitation of this study is that it is difficult to say why so many users listened to these stories and passed them on. Inevitably, the ways in which these factors complicated digital rhetorical listening differed from person to person. As rhetorical analyses, these case studies reflect my interpretation as a white heteronormative woman in the United States, though I make my analysis and reasoning explicit. Additionally, asynchronous and synchronous digital rhetorical listening could differ, though the platforms on which I focus are mostly asynchronous.

Due to limitations in scope, my paper does not adequately address the fact that the media predominantly listened to the white victims of gun violence, while voices of color and their experiences were largely left out of these conversations (Green, 2018). As my case study focuses on only a few individuals and stories, I am unable to fully explore these silences here. However, this would prove to be a fruitful area of study for further inquiry into digital rhetorical listening.

Finally, an important ethical issue that arose as I conducted my research centered on revenue-generating clicks. As I elaborate below, clicks generate revenue for YouTubers, and as I had to watch YouTube videos about these conspiracy theories, I generated revenue for these users. This will be an important ethical issue for future Internet researchers to consider.

 

++++++++++

Case studies

Alex Jones, Jerome Corsi, and Adam Baldwin all espoused conspiracy theories about the student activists from Parkland. Table 1 demonstrates their spheres of influence on YouTube and Twitter.

 

 
Table 1: Follower counts for Alex Jones and Jerome Corsi on YouTube and for Adam Baldwin on Twitter, as well as total YouTube views for Jones and Corsi and total tweets for Baldwin, as of 6 May 2018 (@AdamBaldwin, n.d.).

 

Alex Jones and Jerome Corsi

Though many sources shared the false content that circulated in the wake of the Parkland shooting, I focus here on two prominent YouTubers: Alex Jones and Jerome Corsi. Alex Jones’ channel is associated with the popular conspiracy Web site and television show InfoWars. His YouTube channel circulated the false claim that the student activists were crisis actors (Murphy, 2018). Notably, YouTube, Apple, and Facebook all removed Alex Jones’ InfoWars content for hate speech in August 2018 (Chappell and Tsioulcas, 2018). Likewise, Jerome Corsi falsely accused the survivors of being crisis actors on his YouTube channel (Martin, 2018).

As of 7 April 2018, InfoWars’ The Alex Jones Channel had 2.3 million subscribers, which had increased from 2.2 million on 15 March 2018 (Jones, n.d.). Jerome Corsi had 45,000 subscribers, which was up from the 39,000 subscribers he had on 15 March 2018 (Corsi, n.d.). In both of these cases, I argue YouTube’s algorithms and interfaces worked in conjunction with off-line forces to legitimize and support their non-rhetorical listening, especially by valuing popularity.

Algorithms and ecologies

Jenny Edbauer (2005) and Collin Brooke (2009) problematized the digital environments in which users listen by positing digital spaces as postmodern ecologies with unstable subjects and ongoing circulation. These digital ecologies are mediated by non-human elements including algorithms and bots.

Tarleton Gillespie (2014) contended that algorithms affect the ways in which people access information, claiming that they are a “key logic governing the flows of information on which we depend” [23] and that we problematically come to rely on algorithmic logic as the only way of knowing [24]. Schou and Farkas (2016) extended this argument, claiming that algorithms are mediators of information and shape epistemologies by determining what is represented and what is representable to the user [25]. Likewise, Safiya Noble (2018) argued that search engine algorithms perpetuate racist and sexist representations and prejudices, thereby reinforcing and re-inscribing off-line power dynamics and demonstrating the co-evolution of social media and other forces (van Dijck, 2013).

Furthermore, Bucher (2012) claimed that algorithms not only affect epistemologies but also incentivize behavior by using the threat of invisibility as a disciplinary tactic. Bucher (2014) argued that algorithms coerce people into participating online. This participation generates data about users that companies then commodify for a profit. Likewise, van Dijck (2013) explored YouTube’s algorithm and claimed that YouTube does not allow for fair competition among users, but tweaks algorithms to benefit some users and not others [26], and van Dijck and Poell (2013) contended that algorithms condition popularity rankings [27].

In tandem with algorithms, bots — computer code that mimics human behavior [28] — further complicate online listening and popularity. People can buy social bots as followers, making accounts appear more popular than they are. Bots can affect public opinion and make politicians appear more popular by acting as “friends” and followers, using hashtags, and recirculating fake news (Laquintano and Vee, 2017). Just as Bucher claimed that social media incentivizes behaviors, Laquintano and Vee claimed that social media incentivizes bots through its emphasis on popularity [29].
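To make this mechanism concrete, the sketch below illustrates, in schematic form, the behavior such a bot automates: follow a target account and mechanically boost any post matching a hashtag. This is a minimal illustration of the concept, not a working implementation against any real platform; the PlatformClient class and all of its methods are hypothetical stand-ins invented for the example.

```python
import random
import time

class PlatformClient:
    """Hypothetical stand-in for a social platform's API client."""
    def search(self, hashtag):
        # A real bot would query the platform; here we return canned posts.
        return [{"id": "post-1"}, {"id": "post-2"}]
    def like(self, post_id):
        print(f"liked {post_id}")      # inflates the like count
    def repost(self, post_id):
        print(f"reposted {post_id}")   # recirculates the content
    def follow(self, user_id):
        print(f"followed {user_id}")   # adds one more apparent "believer"

def run_amplification_bot(client, hashtag, target_user):
    """Mimic a human supporter: follow the target and boost matching posts."""
    client.follow(target_user)
    for post in client.search(hashtag):
        client.like(post["id"])
        client.repost(post["id"])
        # Irregular delays make the automated activity look more human.
        time.sleep(random.uniform(0.1, 0.5))

run_amplification_bot(PlatformClient(), "#example", "@some_account")
```

Run at scale across thousands of such accounts, this handful of calls is what manufactures the appearance of popularity that Laquintano and Vee describe.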

This emphasis on popularity is important. By van Dijck and Poell’s (2013) formulation, it is a key feature of social media logic, which extends mass media logic. Like mass media logic, social media logic values popularity, though this popularity is based on how many times something has been shared or “liked,” rather than on the veracity or quality of the information, building a “like-economy” [30], a point on which Laquintano and Vee (2017) further elaborated. Likewise, Rini (2017) argued that the popularity of a news article lends it unwarranted credibility, because people are less inclined to think that something is false if hundreds of people testify to its veracity, especially if the testifiers share our beliefs and values.

Algorithms and bots therefore play a role in how people listen to information online, as they capitalize on social media logic’s valuation of popularity and present inflated and false information about a user’s popularity. In the case of the disinformation about the Parkland shooting, YouTube’s “trending” algorithm picked up a conspiracy video claiming that David Hogg and the other activists were “crisis actors” (Martin, 2018). This “trending” feature exemplifies the value of popularity (van Dijck and Poell, 2013), but also implicates algorithms that may artificially amplify certain content (Laquintano and Vee, 2017).
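To see why a “trending” feature can launder disinformation, consider a deliberately simplified trending score. This toy formula is my own invention for illustration (the engagement weights and time decay loosely echo publicly documented ranking heuristics, not YouTube’s proprietary, black-boxed algorithm). The structural point is that none of the inputs measures veracity.

```python
import math
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    views: int
    likes: int
    shares: int
    hours_old: float

def trending_score(v: Video) -> float:
    """Toy popularity score: engagement discounted by age.
    Nothing here measures accuracy or credibility, so a widely
    shared fabrication outranks a little-seen correction."""
    engagement = v.views + 5 * v.likes + 10 * v.shares   # invented weights
    return engagement / math.pow(v.hours_old + 2, 1.5)   # invented decay

videos = [
    Video("conspiracy clip", views=500_000, likes=40_000, shares=20_000, hours_old=6),
    Video("careful fact-check", views=8_000, likes=900, shares=200, hours_old=6),
]
for v in sorted(videos, key=trending_score, reverse=True):
    print(f"{trending_score(v):>10.1f}  {v.title}")
```

Under any score of this shape, a heavily shared conspiracy video will always outrank a sparsely viewed correction, however accurate the latter is.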

Besides amplifying single videos, algorithms also linked videos to one another. As Jonathan Albright (2018) illuminated, this disinformation lived and spread in a digital ecosystem that connected videos to other conspiracy videos. According to Albright, by 25 February, less than two weeks after the shooting, YouTube’s Application Programming Interface (API), the interface through which programs can request data from YouTube’s servers, revealed a network of 8,842 conspiracy-related videos surfaced through YouTube’s “Up Next” recommendations. Albright claimed that these videos have become their own genre and explored how YouTube monetizes the resulting clicks by connecting all of the videos together (Albright, 2018).
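For readers curious how such a network could be assembled, the sketch below shows one plausible approach: start from a few seed videos and repeatedly ask the public YouTube Data API (v3) for related videos, recording each recommendation as an edge in a graph. This is my reconstruction under stated assumptions, not Albright’s published method; the relatedToVideoId parameter used here existed at the time of his study but has since been retired by YouTube, and YOUR_API_KEY is a placeholder.

```python
import requests                      # third-party HTTP library
from collections import deque

API_URL = "https://www.googleapis.com/youtube/v3/search"
API_KEY = "YOUR_API_KEY"             # placeholder; a real key is required

def related_videos(video_id, max_results=25):
    """Ask the API which videos it considers related to a given video."""
    params = {
        "part": "snippet",
        "relatedToVideoId": video_id,  # since retired by YouTube
        "type": "video",
        "maxResults": max_results,
        "key": API_KEY,
    }
    resp = requests.get(API_URL, params=params)
    resp.raise_for_status()
    return [item["id"]["videoId"] for item in resp.json().get("items", [])]

def crawl_recommendation_graph(seed_ids, max_nodes=1000):
    """Breadth-first walk outward from seed videos, recording edges."""
    edges, seen, queue = [], set(seed_ids), deque(seed_ids)
    while queue and len(seen) < max_nodes:
        source = queue.popleft()
        for target in related_videos(source):
            edges.append((source, target))
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return edges  # exportable to network-analysis tools for visualization
```

A crawl of this kind yields exactly the sort of video-to-video network Albright mapped, in which conspiracy content links onward to more conspiracy content.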

As of 31 December 2018, algorithms still linked together conspiracy videos that perpetuated the same viewpoints. The algorithmically generated “Related Channels” section on the right side of Jerome Corsi’s YouTube homepage suggests channels of other well-known conspiracy theorists (see Figure 1). The first channel in the lineup belongs to Michael ‘Lionel’ Lebron, a prominent supporter of President Trump and promoter of conspiracy theories about the “deep state” and the QAnon movement, which claims that popular Democrats and media personalities are part of a secret pedophile cult (Drury, 2018; Feldscher, 2018).

Fox News’ YouTube channel appears next. As of 2014, Fox News was consistently ranked as a prominent conservative news source (Mitchell, et al., 2014), thereby aligning with Corsi’s views. Following Fox News is the X22Report, a channel run by a man named Dave out of New York that posts daily reports about an impending economic collapse (X22.report.com, n.d.). In his “Welcome to the X22 Report” video, the video to which YouTube links users who click on the X22Report’s channel from Corsi’s page, Dave claims that the central bank is controlling the world and seeks to create a one-world government [31]. Other popular video titles include “U.S. is Now In the Final Stages of Having Its Society Closed Down — Episode 601” [32]. These videos suggest that, like Corsi’s channel, the X22Report also peddles conspiracy theories.

The next related channel is H.A. Goodman’s. As demonstrated by videos like “TRUMP DOJ MUST INDICT CLINTON BEFORE NEW YORK SOUTHERN DISTRICT INDICTS TRUMP” (Goodman, 2018), which advocates indicting Hillary Clinton (President Trump’s opponent in the 2016 election), Goodman also aligns with the political right. He falsely suggests that Trump’s presidency is in imminent danger and that the only course of action is to indict Clinton.

 

 
Figure 1: A screenshot taken on 31 December 2018 of Jerome Corsi’s YouTube homepage, showing the “Related Channels” feature on the right side of the page (Corsi, n.d.).

 

In the case of Corsi’s “Related Channels” column on YouTube, algorithms continue to link similar political views together and to determine what is knowable to users. As noted by Gillespie (2014), these algorithmically-driven linkages are politically important because they make decisions about what to include, index, and make knowable to users [33]. In the case of Corsi’s “Related Channels,” what is made knowable to users is a set of similar viewpoints perpetuating conspiracy theories. Likewise, as Gillespie claimed, this “Related Channels” page and Albright’s network of videos try to predict YouTube viewers’ future behavior by suggesting the next thing that they could watch [34]. Finally, as Gillespie illuminated, they attempt to determine relevance for users [35].

Since the YouTube algorithms on Corsi’s page perpetuate conspiracy theories that support the political agendas of the NRA and gun rights advocates, they demonstrate how algorithms coalesce with other systems; in particular, they coalesce with and perpetuate political propaganda in ways impossible off-line. In their seminal book on the topic, Jowett and O’Donnell defined propaganda as a form of ideologically driven persuasion that seeks to benefit the propagandist, regardless of the effect on those to whom the propaganda is targeted [36]. By their definition, propaganda tries to manipulate perceptions of reality through language and images [37] with the pre-established goal of affecting beliefs, attitudes, and behaviors [38]. Likewise, Tandoc, et al. (2018) define propaganda as having some basis in reality but including “bias that promotes a particular side or perspective” [39]. As the videos and channels in my analysis all demonstrate similar ideological biases (right-wing conspiracy theories), I claim that YouTube’s algorithms link propaganda videos together so that they build on each other in ways impossible for mass media propaganda. While off-line propaganda may try to determine what is knowable to the user, it cannot select the next thing that a viewer sees, nor does it try to determine relevance for the user.

Some critics might claim that the algorithmically-driven linkages between propagandistic videos on Corsi’s page demonstrate “filter bubbles” (Pariser, 2011) and echo chambers, which limit what people see and, by extension, to what they can listen. However, this overlooks user agency. Though scholars like Jeremy Johnson (2018) claimed that agency is distributed differently among humans and non-human ‘actants’ online, Johnson still maintained that people retain agency in interaction. If people did not retain some agency, then Alex Jones’ 2,354,932 followers and Jerome Corsi’s 52,744 followers would simply be at the mercy of YouTube’s algorithms, with little say over to whom and to what they listen. However, as Ratcliffe noted, rhetorical listening is an active process in which listeners choose to take a stance of openness [40]. As such, digital rhetorical listening would require listeners to maintain this agency, rather than just listen to algorithmically-determined feeds.

Indeed, scholars have recently challenged the idea of algorithmically-generated echo chambers as overly deterministic. Studies have shown that people with political interest and diverse media diets do see media that challenges their existing viewpoints across platforms (Dubois and Blank, 2018). In other words, users still have agency in their rhetorical listening and in their engagement with echo chambers; the content to which they listen is not solely determined by algorithms. Users do not have to buy into the social media logic that popular videos are more credible, nor do they have to watch the “Related Channels.”

Interfaces

Algorithms are often invisible to users, but they generate the interfaces with which users interact. An interface is broadly defined as the point of interaction between two subjects, but for the purposes of this paper I focus specifically on the graphical user interface (GUI), the point of contact between software and human users. Cynthia and Richard Selfe argued that interfaces present an “interested vision of reality, at least partly constructed from the perspective of, and for the benefit of, dominant forces in our culture” [41]. Specifically, interfaces perpetuate a white, middle-class culture of professionalism that commodifies information, thereby perpetuating capitalist values (Selfe and Selfe, 1994).

Lori Emerson (2014) likewise claimed that interfaces not only perpetuate value-frames and power dynamics [42], but also become invisible to users as they are naturalized by ideology. Comparing interfaces to translators, she claimed that an interface presents only “a predetermined set of interactions designed to appear as though it is a full range of interactions from which the user must choose” [43]. Likewise, van Dijck (2013) claimed that interfaces steer users towards certain content [44].

These authors point to the epistemological limits of interfaces and to how interfaces affect human actions. Schou and Farkas (2016) claimed that these epistemological limits are important because they have implications for what people can know and how they know it, though the authors contend that users are not technologically determined either [45]. They claimed that interfaces can moderate knowledge and perpetuate hierarchical power dis-symmetry [46] by offering different epistemological opportunities to different users through different interfaces [47].

If interfaces affect users’ epistemologies and reflect specific logics and views of reality, then I claim that, in this case, YouTube’s interface perpetuates the valuation of popularity that van Dijck and Poell (2013) identify in social media logic.

On YouTube’s interface, each channel’s follower count is displayed in bright red to draw users’ attention to the number (see Figure 2). The interface also shows how many times specific videos have been viewed (see Figure 3). Aligning with van Dijck and Poell’s (2013) formulation, then, YouTube’s interface highlights how popular these people and videos are, implicitly asserting that these numbers are important and can vouch for the content of the videos. Since the interface highlights popularity in bright red, it rhetorically promotes this value and naturalizes it as a legitimate way to weigh information. If this is the case, highlighting follower counts and video popularity could have contributed to legitimizing the false claims that the Parkland students were “crisis actors.”

However, as van Dijck and Poell (2013) claim, popularity in social media logic differs from mass media logic in that it is both measured and created through algorithmic conditioning [48]. Popularity is determined both by users and by the algorithms and companies that promote content for their own economic interests and ends [49]. Since algorithms are “black-boxed,” there is no way to tell whether algorithms helped to generate Jones’ and Corsi’s popularity, but the possibility raises significant questions.

 

 
Figure 2: A screenshot taken on 6 May 2018 of the Alex Jones Channel on YouTube, showing the number of followers highlighted in red (Jones, n.d.).

 

 

 
Figure 3: A screenshot taken on 6 May 2018 of Jerome Corsi’s YouTube homepage, showing the number of views displayed for each video (Corsi, n.d.).

 

YouTube’s interface also omits important information. Though YouTube has since taken steps to ban users who violate its policies against hate speech, including Alex Jones (Chappell and Tsioulcas, 2018), there is nothing on YouTube’s homepage interface about the veracity of information (such as the flags Facebook has recently implemented in its interface).

YouTube’s interface also fails to show the vested interests of Jones and Corsi. Beyond the ideological motive of peddling propaganda, van Dijck and Poell’s (2013) outline of social media logic points to another possible reason that Jones and Corsi may have sought popularity through YouTube’s algorithms. As defined by van Dijck and Poell (2013), datafication refers to the ways that both social and mass media collect data about people to sell to advertising companies, who then use the information to target customers [50]. In social media logic, however, datafication systems are built into the underlying algorithms, rather than relying on polls, as is the case for mass media [51]. Since YouTube monetizes content through ad revenue (Albright, 2018), popularity and clicks become important because they are connected to datafication and profits.

As shown above, both Jones and Corsi have large followings on YouTube. Even after their conspiracy videos were debunked, Jones and Corsi gained followers; people viewed not only their videos, but also any monetized ad content attached to those videos. With such large followings, the number of clicks and ad views was significant. Though ads do not always generate substantial revenue for YouTubers, Jones and Corsi likely profited from listening to and sharing false content about the Parkland activists within a YouTube algorithmic ecosystem and interface that peddled popularity over truth. This points to how fake news is embroiled in capitalist economic systems as well.

YouTube’s interface does not display how much Jones and Corsi profited from their increased popularity, but their financial goals are apparent in the comments that they (not everyday users) post with their videos. While Phillips and Milner (2017) argue that researchers should be cautious in ascribing intentionality, Corsi makes his intentions clear. In his video “Current Events LiveStream — Tuesday, December 11, 2018, guest host DC Hammer,” Corsi links users to a GoFundMe page, a DonorBox page, and a PayPal account. He additionally links users to sites where they can purchase his books (Corsi, 2018; see Figure 4). This demonstrates Corsi’s financial interest in YouTube. Though financial interests are not inherently negative, they do show that Corsi had a stake in promoting videos claiming that the Parkland activists were “crisis actors,” if those claims generated more clicks and ad revenue and allowed him to promote his products.

 

 
Figure 4: A screenshot taken on 31 December 2018 of Jerome Corsi’s comment on his own video, “Current Events LiveStream — Tuesday, December 11, 2018, guest host DC Hammer.” The image shows links to GoFundMe, DonorBox, and PayPal, as well as to sites where users can purchase his books (Corsi, 2018).

 

As shown in the examples above, YouTube’s interface does not show users how much money Jones and Corsi made from their YouTube videos, thereby obscuring their vested interests. Without their profits made explicit, the rhetorical situation and the economic interests of the rhetors are not apparent to YouTube users. This limits the information that YouTube users can weigh in deciding to what they will listen.

The absence of anything on YouTube’s interface promoting active interrogation of claims, together with the lack of transparency about Jones’ and Corsi’s profits and the implicit valuation of popularity, builds into the interface, and into users’ experiences, the norm that popularity matters more than credibility. The interface, and by extension its designers, promotes popularity, not content, as what matters, and this lends credibility to these false videos. The design falls into the argumentum ad populum fallacy, in which people believe that something is true because of the number of other people who believe it. Like the designers, users still have the ability to act otherwise, but the design promotes popularity as a source of credibility.

In this case, algorithms and interfaces work together to promote the argumentum ad populum fallacy and to obscure Jones’ and Corsi’s economic interests. While not entirely different from mass media logic, which also values popularity, YouTube’s interface perpetuates popularity as a legitimizing source of credibility while also conditioning that popularity and perpetuating similar viewpoints through algorithmically generated links in the “Up Next” and “Related Channels” feeds.

Here, algorithms complicate rhetorical listening practices. As algorithms link biased and similar propagandistic videos together, they do not promote genuine understanding, and they complicate dialogue. Though they do not trap users in echo chambers, the constant feed of information perpetuating a single viewpoint also changes how users can self-reflect.

YouTube’s interface also complicates listening practices because it promotes a valuation of popularity rather than a genuine effort to understand different, and potentially unpopular, positions. Furthermore, if interfaces present users with interested versions of reality that highlight the popularity of a page and mask the profits that creators derive from this popularity, then it becomes difficult to engage in meaningful dialogue.

As such, Jones’ and Corsi’s non-rhetorical listening and their circulation of disinformation demonstrate “appropriation.” Rather than engage in dialogic rhetorical listening that critically weighed the disinformation to determine its credibility, they employed the disinformation and fake news for their own ends: to gain followers, clicks, and, by extension, money. YouTube’s interface and algorithms worked together to support their self-interested and non-rhetorical listening.

In addition, their listening demonstrates “agreement.” Since both Jones and Corsi had espoused conspiracy theories and hate speech in the past, the fake news about crisis actors confirmed their already existing views of reality. They therefore listened to the disinformation through the non-rhetorical listening strategy of “agreement.”

Adam Baldwin

After the “crisis actors” theory was debunked, fake news continued to spread about the student activists. Actor Adam Baldwin, who has a large Twitter following owing to his roles in successful films (IMDB, n.d.), retweeted an altered image of Emma González to 250,000 followers with the hashtag “#Vorwärts!”, German for “forward” and a slogan that some claimed referenced a Hitler Youth marching song (Horton, 2018). While some conservative Twitter users later claimed that the image was circulated as “satire” (Horton, 2018), this remains unproven.

Notably, Baldwin was involved in the Gamergate controversy in 2014, in which a misogynistic gaming community attacked Zoë Quinn, Brianna Wu, and Anita Sarkeesian. The controversy exemplified the toxic and violent hegemonic masculinity within the gaming community (Chess and Shaw, 2015). Baldwin’s participation in this incident suggests that he previously held prejudiced views about women and minority groups.

Performance and identification

Another complication for rhetorical listening arises in the ways that people perform and develop identities in online spaces. In Judith Butler’s concept of performativity, people do not have inherent “selves” or stable ontological positions; rather they continually (and in Butler’s formulation, compulsively) enact their identities through their actions [52].

One of the ways in which people may choose to perform these identities is through language [53]. Language itself becomes a performative act, and as online writing “remediates” or reorganizes existing writing practices for new cultural atmospheres [54], some linguists and media scholars claim that users shape their co-evolving online and off-line identities online through linguistic and extra-linguistic means (Barton and Lee, 2013; boyd, 2007). However, as social media is multimodal, identity performance is not limited to just language, but extends to pictures and popular culture references as well (Williams, 2008).

As such, another way in which social media users can listen (or not listen) online is to these identity performances. Since González self-identified as a queer woman, her marginalized identifications became central to the attacks on her (Reilly, 2018). Additionally, because the manipulated image showed González performing an act that she did not actually perform, her case points to how digital technologies complicate listening to performances.

Though Baldwin’s retweet was defended as satire, Tandoc, et al. (2018) define satire as something that mocks news programs through obvious exaggeration of real events [55]. Similarly, parody mocks mainstream news with clearly false stories [56]. In their formulation, audiences know that the story is false or exaggerated [57]. For the image to be classified as satire or parody, then, its apocryphal nature would have to be apparent to the audience; since nothing in the tweet itself indicates that the image is false, the tweet is neither satire nor parody, despite the claim.

One might argue that Baldwin’s tweet constituted propaganda, as it appears to be an ideologically motivated act meant to benefit Baldwin. However, in alignment with Phillips and Milner (2017), because the intention of Baldwin’s tweet is unclear, I am also cautious about classifying the retweet as propaganda. As Rini (2017) pointed out, the intentions of retweets are vague.

Since Baldwin’s retweet carried a slogan associated with Nazism, and considering the Nazis’ history of murdering millions of people on the basis of race and sexuality, the tweet could be read as a threat to González’s identity. If, as his previous participation in the Gamergate scandal suggests, Baldwin held sexist and racist ideologies, then his tweet attacking a queer Cuban woman, together with the possibly Nazi slogan attached to it, could suggest a threat to González.

However, in alignment with Phillips and Milner’s (2017) caution about ascribing intentionality to online communication, the tweet could also be read as calling González a Nazi. If popular cultural logics and historical narratives set up the United States in opposition to Nazism, then a picture showing González destroying a symbol of the United States with a possibly Nazi slogan could suggest that González is the Nazi and is a threat to the United States. However, there is no evidence to support this association between González and Nazism. In this case, even if Baldwin was not threatening González as a Nazi, calling her a Nazi would still be spreading false information.

Here, the Internet’s ambiguity complicates both ascertaining meaning and listening. Is Baldwin identifying himself as a Nazi and saying that the Nazis should move “forward,” or is he ascribing the slogan to González and identifying her as a Nazi?

This retweet speaks to the ways that online spaces complicate how users listen to performances in at least two ways. First, if Baldwin continued to espouse racist and sexist ideologies after the Gamergate scandal (and there is no indication that he changed his views), then this altered image perpetuates the Nazi trope that non-heteronormative individuals are a threat. In this case, digital technologies would allow users to perpetuate stereotypes about identities that conform to bigoted ideologies. Second, if Baldwin is calling González a Nazi, then he is ascribing an identity and a performance to her that there is no evidence to support.

While there is no way to tell whether Baldwin meant the retweet in either of these ways, he could have easily spread either piece of disinformation off-line. On Twitter, however, he spread it to 250,000 followers with the click of a button. These followers could have interpreted the retweet with either of these meanings, or with a myriad of others. However, as “follower” has come to mean “believer,” rather than someone with whom a Twitter user converses [58], the people to whom Baldwin tweeted this message were likely to already sympathize with his conservative ideology, since they decided to “follow” him despite his documented history of racist and sexist behavior. Unlike mass media, which broadcasts to a wide audience with different beliefs, some of whom could have challenged and questioned whatever Baldwin was asserting, Baldwin’s tweet reached a targeted group of people who were probably already “believers,” further reifying whatever beliefs they held about González’s identifications.

In addition to satire and parody, Tandoc, et al. (2018) also define photo manipulation, in which people alter images to create fake narratives [59]. The altered image clearly aligns with this definition: it creates a false narrative out of González’s real performance, even though, as noted above, the exact narrative the image conveys is ambiguous.

In the original image, González is performing an argument by tearing a target as a statement about gun control. The image’s creators, however, used digital tools to falsify this performance, altering the narrative to make it look like her performance was sending a different (though ambiguous) message. Once they had falsified her performance, attackers tied it to her ethnic and gender identity to support racist and homophobic beliefs. The image creators altered the original picture to create for González a false performance that conformed to bigoted ideas about her identities.

Whatever his intention and meaning, with his retweet Baldwin listened to this digitally falsified narrative and performance, rather than to González’s actual performance. Whether as “satire,” as a calculated attempt to spread fake news, or out of ignorance, Baldwin took the image out of context and retweeted it with what appeared to be a Nazi slogan. He did not rhetorically listen to the message that González meant to convey or to her reasons for her argument, nor did he engage with her actual performative rhetoric; rather, he listened to a falsified performance.

Rather than rely on her actual argument, Baldwin and the image creators used Photoshop to set González up as a straw man. In rhetoric, the straw man fallacy refers to an argument that misrepresents the opposing view to make it easy to knock down. The image misrepresents González’s performative argument about gun control and sets her up instead against the U.S. Constitution. Set up against the Constitution, her argument is easier to attack.

This is particularly disturbing because the falsification goes further than a straw man fallacy. It not only misrepresents González’s performative argument, but creates a new falsified argument altogether. Rather than developing beliefs stemming from actual evidence and performances (i.e., González tearing a target), these Photoshop users create performances and evidence (i.e., González tearing the U.S. Constitution). Instead of a straw man, the opponent becomes a digitally Photoshopped Other. They create a Photoshopped Person argument, in which they try to knock down an argument that never existed in the first place. With his tweet, Baldwin used falsified evidence and a Photoshopped Person argument for unclear ends.

We see here how listening to performances differs in digital spaces. First, as Baldwin’s retweet makes apparent, the ambiguities of the Internet make the message unclear. Additionally, it is easy to re-affirm existing beliefs and listen only to stereotypes to which one already subscribes, rather than seek genuine understanding of an individual’s experience, identity, and argument. Finally, dialogue becomes more complicated with digital technologies that falsify arguments and allow people to circulate Photoshopped Person arguments to “believers.”

Baldwin’s ambiguous intentions and meanings make it difficult to classify his listening. If Baldwin held Nazi beliefs about minority groups (unconfirmed) and had espoused sexism in the past (confirmed), his view of reality would have framed González as an enemy other. As such, the false image of González tearing the U.S. Constitution would have confirmed his already existing viewpoint. In this case, he would have demonstrated the non-rhetorical listening tactic of “agreement.”

 

++++++++++

Conclusions and future directions

Despite these complications that arise for rhetorical listening in digital spaces and on social media in particular, rhetorical listening is not impossible in digital spaces. As Ratcliffe and Jung both claimed, rhetorical listening is inherently active; listeners have agency. Likewise, I contend that digital rhetorical listeners can also actively choose how they engage with the circulation of fake news online. In all of the complications noted above, users still have agency to choose to what and how they listen. Part of making this choice would involve acknowledging and learning about the algorithms and interfaces that drive our digital environments and create and hinder epistemological possibilities. As agents, digital rhetors have the responsibility to learn about the rhetorical ecologies in which they are listening.

In the case of Jones and Corsi on YouTube, individual users do not have to watch the “Up Next” video or click on the “Related Channels.” Rather, they can actively seek out alternative channels. Additionally, individual users can reflectively recognize that popularity on YouTube does not make the information presented in a video credible.

On a platform level, YouTube could choose to rework its algorithms to not link together related videos or channels. Alternatively, instead of a “Related Channels” column, it could have a column for “Alternative Channels.” YouTube could also change its interface to be more transparent about the vested interests of content creators and include how much ad revenue a particular creator or video has made.

On an institutional level, administrators and educators could work to develop pedagogies to teach students to rhetorically listen online. Stuart Selber (2004) claimed that professionally responsible English departments should teach students to effectively use, critique, and make arguments in digital spaces through functional, critical, and rhetorical literacy. Likewise, educators at all grade levels, not just universities and post-secondary education, should focus on teaching students to rhetorically listen online by thinking about their own positions and identities, as well as the identities of others in algorithmically-driven digital ecologies with value-laden interfaces. Educators need to explore pedagogies that promote self-reflexive listening online. This is especially important as social media is increasingly becoming a popular source for news (Shearer, 2018).

I want to be clear: the disinformation campaigns against the Parkland student activists were inexcusable under any circumstances. However, beyond thinking about why people listen to and believe fake news, we also need to think about other possible reasons that people create and share fake news. As rhetorical listeners, we need to think about the intention of these messages and ask ourselves why people would choose to listen to and share Jones, Corsi, and Baldwin’s disinformation and hate speech. As Rini (2017) contended, retweets and shares, especially without comment, are ambiguous; people can share something to express agreement or because they think it is outlandishly funny. This aligns with McKee and Porter’s (2009) and Phillips and Milner’s (2017) approaches to Internet research, which embrace ambiguity. A person may share a story as propaganda, but fake news stories may also offer creators a way to contribute to a collective storytelling process that reworks existing folkloric narratives (Phillips and Milner, 2017). While it is impossible to say for certain why someone would choose to engage in the spread of hateful and false disinformation, this suggests that social media users might see sharing fake news as a way to contribute their voices to emerging narratives and to avoid the threat of invisibility (Bucher, 2012).

If this is the case, then this suggests that fake news creators and circulators may themselves feel unheard. They may spread fake news not because they believe the content, but as Chaput (2010) points out, they may instead be driven by affect, and the desire to raise their own voices. As social, cultural, and political divisions off-line deepen, resulting in fear and anger towards those with different views (Pew Research Center, 2016), people feel that their experiences are invalidated and unheard by the other side.

If people share fake news because they believe the content, then teaching people digital rhetorical listening and digital news literacy will help alleviate the spread of fake news. However, if they are spreading fake news because they also feel unheard, then the problem becomes more difficult to tackle. In this case, rhetorical listening needs to address not just the veracity of the content, but also the larger economic, social, cultural, affective, and political divides in which social media participates. Feeling unheard is never an excuse for spreading disinformation or for attacking activists. However, we also need to think about why people would spread these attacks, beyond assuming that they are ignorant, and address those issues as well. End of article

 

About the author

L. Corinne Jones completed her M.A. in English with an emphasis in composition at the University of Missouri-St. Louis. She is now completing her Ph.D. coursework, specializing in rhetoric and composition, in the Texts and Technology program at the University of Central Florida.
E-mail: l [dot] corinne [dot] jones [at] gmail [dot] com

 

Acknowledgements

I would like to thank my incredibly supportive professors at both the University of Missouri-St. Louis and the University of Central Florida, as well as Dr. Anastasia Salter and my peers in the Texts and Technology program for their insightful feedback. I would also like to acknowledge and thank the students of Marjory Stoneman Douglas High School for their brave activism, and the reviewers of the initial draft of this paper for their helpful comments and feedback.

 

Notes

1. Tandoc, et al., 2018, p. 144.

2. Tandoc, et al., 2018, p. 145.

3. Tandoc, et al., 2018, p. 147.

4. Tandoc, et al., 2018, p. 148.

5. Ratcliffe, 1999, p. 203.

6. Burke, 1950, p. 21.

7. Ratcliffe, 1999, p. 205.

8. Ibid.

9. Spigelman, 2004, p. 93.

10. Ratcliffe, 1999, p. 198.

11. Jung, 2005, p. 28.

12. Ratcliffe, 1999, p. 203.

13. Ratcliffe, 1999, p. 208; Jung, 2005.

14. Jung, 2005, p. 17.

15. Van Dijck, 2013; Schou and Farkas, 2016, p. 39.

16. Phillips and Milner, 2017, p. 10.

17. Phillips and Milner, 2017, pp. 202–203.

18. Phillips and Milner, 2017, p. 14.

19. Maxwell, 2013, p. 2.

20. Maxwell, 2013, p. 78.

21. McKee and Porter, 2009, pp. 81–83.

22. McKee and Porter, 2009, p. 89.

23. Gillespie, 2014, p. 4.

24. Gillespie, 2014, p. 168.

25. Schou and Farkas, 2016, pp. 40–41.

26. Van Dijck, 2013, p. 116.

27. Van Dijck and Poell, 2013, p. 7.

28. Laquintano and Vee, 2017, p. 46.

29. Laquintano and Vee, 2017, p. 53.

30. Van Dijck and Poell, 2013, p. 7.

31. X22Report, 2015a, “Welcome,” 1:37–1:55.

32. X22Report, 2015b, “U.S. is now”.

33. Gillespie, 2014, p. 172.

34. Gillespie, 2014, pp. 172–174.

35. Gillespie, 2014, p. 175.

36. Jowett and O’Donnell, 1999, p. 1.

37. Jowett and O’Donnell, 1999, p. 6.

38. Jowett and O’Donnell, 1999, p. 9.

39. Tandoc, et al., 2018, p. 147.

40. Ratcliffe, 1999, p. 203.

41. Selfe and Selfe, 1994, p. 486.

42. Emerson, 2014, p. 51.

43. Emerson, 2014, pp. 83–84.

44. Van Dijck, 2013, pp. 31–32.

45. Schou and Farkas, 2016, p. 38.

46. Schou and Farkas, 2016, p. 46.

47. Schou and Farkas, 2016, p. 45.

48. Van Dijck and Poell, 2013, p. 7.

49. Ibid.

50. Van Dijck and Poell, 2013, pp. 9–10.

51. Van Dijck and Poell, 2013, p. 10.

52. Butler, 2006, p. 185.

53. Wolfram and Schilling, 2015, p. 40; Barton and Lee, 2013.

54. Bolter, 2001, p. 23.

55. Tandoc, et al., 2018, pp. 141–142.

56. Tandoc, et al., 2018, p. 142.

57. Tandoc, et al., 2018, p. 148.

58. Van Dijck, 2013, p. 13.

59. Tandoc, et al., 2018, pp. 144–145.

 

References

@AdamBaldwin, n.d. “Adam Baldwin,” at https://twitter.com/AdamBaldwin, accessed 6 May 2018.

J. Albright, 2018. “Untrue-Tube: Monetizing misery and disinformation,” Medium (25 February), at https://medium.com/@d1gi/untrue-tube-monetizing-misery-and-disinformation-388c4786cc3d, accessed 8 April 2018.

G. Allen, 2018. “Students from Marjory Stoneman Douglas High School rallying for gun control in Tallahassee,” NPR.org (21 February), at https://www.npr.org/2018/02/21/587731782/students-from-marjory-stoneman-douglas-high-school-rallying-for-gun-control-in-t, accessed 8 April 2018.

D. Arkin and B. Popken, 2018. “How the Internet’s conspiracy theorists turned Parkland students into ‘crisis actors’,” NBCnews.com (21 February), at https://www.nbcnews.com/news/us-news/how-internet-s-conspiracy-theorists-turned-parkland-students-crisis-actors-n849921, accessed 8 April 2018.

D. Barton and C. Lee, 2013. Language online: Investigating digital texts and practices. Milton Park, Abingdon, Oxon.: Routledge.

J. Bolter, 2001. Writing space: Computers, hypertext, and the remediation of print. Second edition. Mahwah, N.J.: Lawrence Erlbaum Associates.

B. Booker, 2018. “Florida gov. Rick Scott signs bill that tightens gun restrictions in the state,” NPR.org (9 March), at https://www.npr.org/2018/03/09/592423999/florida-gov-rick-scott-signs-bill-that-tightens-gun-restrictions-in-the-state, accessed 8 April 2018.

d. boyd, 2007. “Why youth (heart) social network sites: The role of networked publics in teenage social life,” In: D. Buckingham (editor). Youth, identity, and digital media. Cambridge, Mass.: MIT Press, pp. 119–142, and at http://www.danah.org/papers/WhyYouthHeart.pdf, accessed 11 November 2017.

J. Brady, 2018. “Students to walk out for gun control,” NPR.org (14 March), at https://www.npr.org/2018/03/14/593398916/students-to-walk-out-for-gun-control, accessed 8 April 2018.

C. Brooke, 2009. Lingua fracta: Toward a rhetoric of new media. Cresskill, N.J.: Hampton Press.

T. Bucher, 2012. “Want to be on top? Algorithmic power and the threat of invisibility on Facebook,” New Media & Society, volume 14, number 7, pp. 1,164–1,180.
doi: https://doi.org/10.1177/1461444812440159, accessed 10 January 2018.

K. Burke, 1950. A rhetoric of motives. New York: Prentice-Hall.

J. Butler, 2006. Gender trouble: Feminism and the subversion of identity. New York: Routledge.

B. Chappell and A. Tsioulcas, 2018. “YouTube, Apple, and Facebook ban InfoWars which decries mega-purge,” NPR.org (6 August), at https://www.npr.org/2018/08/06/636030043/youtube-apple-and-facebook-ban-infowars-which-decries-mega-purge, accessed 24 December 2018.

C. Chaput, 2010. “Rhetorical circulation in late capitalism: Neoliberalism and the overdetermination of affective energy,” Philosophy & Rhetoric, volume 43, number 1, pp. 1–25.
doi: https://doi.org/10.5325/philrhet.43.1.0001, accessed 7 January 2018.

S. Chess and A. Shaw, 2015. “A conspiracy of fishes, or, how we learned to stop worrying about #GamerGate and embrace hegemonic masculinity,” Journal of Broadcasting & Electronic Media, volume 59, number 1, pp. 208–220.
doi: https://doi.org/10.1080/08838151.2014.999917, accessed 25 April 2018.

J. Corsi, 2018. “Current Events LiveStream — Tuesday, December 11, 2018, guest host DC Hammer,” YouTube.com, at https://www.youtube.com/watch?v=D0QW26ntuS8, accessed 31 December 2018.

J. Corsi, n.d. “Jerome Corsi,” YouTube.com, at https://www.youtube.com/channel/UC3QohndFXgxn8uoGViHrghA, accessed 8 April 2018.

C. Drury, 2018. “Trump meets ‘paedophile cult’ QAnon conspiracy theorist at White House,” Independent.co.uk (25 August), at https://www.independent.co.uk/news/world/americas/qanon-trump-white-house-meeting-michael-lionel-lebron-conspiracy-theory-paedophile-ring-a8507766.html, accessed 26 December 2018.

C. Domonoske, 2018. “Across the country, students walk out to protest gun violence,” NPR.org (14 March), at https://www.npr.org/sections/thetwo-way/2018/03/14/593433911/across-the-country-students-walk-out-to-protest-gun-violence, accessed 8 April 2018.

E. Dubois and G. Blank, 2018. “The echo chamber is overstated: The moderating effect of political interest and diverse media,” Information, Communication & Society, volume 21, number 5, pp. 729–745.
doi: https://doi.org/10.1080/1369118X.2018.1428656, accessed 21 December 2018.

J. Edbauer, 2005. “Unframing models of public distribution: From rhetorical situation to rhetorical ecologies,” Rhetoric Society Quarterly, volume 35, number 4, pp. 5–24.
doi: https://doi.org/10.1080/02773940509391320, accessed 4 February 2018.

L. Emerson, 2014. Reading writing interfaces: From the digital to the bookbound. Minneapolis: University of Minnesota Press.

K. Feldscher, 2018. “QAnon-believing ‘conspiracy analyst’ meets with Trump in the White House,” CNN.com (25 August), at https://www.cnn.com/2018/08/25/politics/donald-trump-qanon-white-house/index.html, accessed 26 December 2018.

D. Fleshler and P. Valys, 2018. “Named for the first time: All 17 who survived Nikolas Cruz’s bullets,” South Florida SunSentinel (7 March), at http://www.sun-sentinel.com/local/broward/parkland/florida-school-shooting/fl-florida-school-shooting-wounded-list-20180307-story.html, accessed 8 April 2018.

L. Garcia-Navarro, 2018. “Voices from the ‘March For Our Lives’ rally in Washington,” NPR.org (25 March), at https://www.npr.org/2018/03/25/596805295/voices-from-the-march-for-our-lives-rally-in-washington, accessed 8 April 2018.

T. Gillespie, 2014. “The relevance of algorithms,” In: T. Gillespie, P. Boczkowski, and K. Foot (editors). Media technologies: Essays on communication, materiality, and society. Cambridge, Mass.: MIT Press.
doi: https://doi.org/10.7551/mitpress/9780262525374.001.0001, accessed 21 December 2018.

H. Goodman, 2018. “TRUMP DOJ MUST INDICT CLINTON BEFORE NEW YORK SOUTHERN DISTRICT INDICTS TRUMP,” YouTube.com (10 December), at https://www.youtube.com/watch?v=pq3-jtJ_9Ko, accessed 31 December 2018.

N. Green, 2018. “Black Parkland students feel they’re not being heard in gun violence discussion,” NPR.org (9 April), at https://www.npr.org/2018/04/09/600938150/black-parkland-students-feel-theyre-not-being-heard-in-gun-violence-discussion, accessed 11 April 2018.

A. Horton, 2018. “A fake photo of Emma González went viral on the far right, where Parkland teens are villains,” Washington Post (26 March), at https://www.washingtonpost.com/news/the-intersect/wp/2018/03/25/a-fake-photo-of-emma-gonzalez-went-viral-on-the-far-right-where-parkland-teens-are-villains/, accessed 8 April 2018.

IMDb, n.d. “Adam Baldwin,” at http://www.imdb.com/name/nm0000284/, accessed 5 May 2018.

J. Johnson, 2017. “Ethics, agency, and power: Toward an algorithmic rhetoric,” In: A. Hess and A. Davisson (editors). Theorizing digital rhetoric. New York: Routledge, pp. 196–208.

A. Jones, n.d. “Alex Jones Channel,” YouTube.com, at https://www.youtube.com/channel/UCvsye7V9psc-APX6wV1twLg, accessed 8 April 2018.

G. Jowett and V. O’Donnell, 1999. Propaganda and persuasion. Third edition. London: Sage.

J. Jung, 2005. Revisionary rhetoric, feminist pedagogy, and multigenre texts. Carbondale: Southern Illinois University Press.

T. Laquintano and A. Vee, 2017. “How automated writing systems affect the circulation of political information online,” Literacy in Composition Studies, volume 5, number 2, pp. 43–62.
doi: http://dx.doi.org/10.21623%2F1.5.2.4, accessed 15 May 2018.

R. Martin, 2018. “YouTube restores account for far-right activist Jerome Corsi,” NPR.org (5 March), at https://www.npr.org/2018/03/05/590803674/youtube-restores-account-for-far-right-activist-jerome-corsi, accessed 11 April 2018.

J. Maxwell, 2013. Qualitative research design: An interactive approach. Third edition. Thousand Oaks, Calif.: Sage.

H. McKee and J. Porter, 2009. The ethics of Internet research: A rhetorical, case-based process. New York: Peter Lang.

E. McNulty, T. Brown, and B. Campbell, 2018. “Florida governor signs package of new gun restrictions,” NPR.org (9 March), at https://www.npr.org/sections/thetwo-way/2018/03/09/592393010/florida-gov-rick-scott-signs-gun-package, accessed 8 April 2018.

A. Mitchell, K. Matsa, J. Gottfried, and J. Kiley, 2014. “Political polarization & media habits,” Pew Research Center (21 October), at http://www.journalism.org/2014/10/21/political-polarization-media-habits/, accessed 26 December 2018.

E. Morales, 2018. “Emma González: La nueva cara of Florida Latinx,” Washington Post (1 March), at https://www.washingtonpost.com/news/post-nation/wp/2018/03/01/emma-gonzalez-la-nueva-cara-of-florida-latinx/, accessed 5 May 2018.

P. Murphy, 2018. “InfoWars’ main YouTube channel is two strikes away from being banned,” CNN.com (24 February), at https://www.cnn.com/2018/02/23/us/infowars-youtube-videos-trnd/, accessed 8 April 2018.

A. Navar-Gill and M. Stanfill, 2018. “‘We shouldn’t have to trend to make you listen’: Queer fan hashtag campaigns as production interventions,” Journal of Film and Video, volume 70, numbers 3–4, pp. 85–100.
doi: http://dx.doi.org/10.5406/jfilmvideo.70.3-4.0085, accessed 2 February 2019.

S. Noble, 2018. Algorithms of oppression: How search engines reinforce racism. New York: New York University Press.

E. Pariser, 2011. The filter bubble: What the Internet is hiding from you. New York: Penguin Press.

Pew Research Center, 2016. “Partisanship and political animosity in 2016” (22 June), at http://www.people-press.org/2016/06/22/partisanship-and-political-animosity-in-2016/, accessed 23 December 2018.

Pew Research Center, 2014. “Political polarization” (12 June), at http://www.pewresearch.org/packages/political-polarization/, accessed 10 April 2018.

W. Phillips and R. Milner, 2017. The ambivalent Internet: Mischief, oddity, and antagonism online. Cambridge: Polity Press.

S. Raphelson and E. Bowman, 2018. “Hundreds of thousands march for gun control across the U.S.,” NPR.org (24 March), at https://www.npr.org/sections/thetwo-way/2018/03/24/596679790/hundreds-of-thousands-march-for-gun-control-across-the-u-s, accessed 8 April 2018.

K. Ratcliffe, 1999. “Rhetorical listening: A trope for interpretive invention and a ‘code of cross-cultural conduct’,” College Composition and Communication, volume 51, number 2, pp. 195–224.
doi: http://dx.doi.org/10.2307/359039, accessed 25 January 2018.

K. Reilly, 2018. “No, Parkland student Emma González did not rip up the U.S. Constitution,” Time (26 March), at http://time.com/5215433/emma-gonzalez-march-for-our-lives-fake-photo/, accessed 8 April 2018.

R. Rini, 2017. “Fake news and partisan epistemology,” Kennedy Institute of Ethics Journal, volume 27, number 2 supplement, pp. E-43–E-64.
doi: http://dx.doi.org/10.1353/ken.2017.0025, accessed 31 December 2018.

J. Rose and B. Booker, 2018. “Parkland shooting suspect: A story of red flags, ignored,” NPR.org (1 March), at https://www.npr.org/2018/02/28/589502906/a-clearer-picture-of-parkland-shooting-suspect-comes-into-focus, accessed 8 April 2018.

J. Schou and J. Farkas, 2016. “Algorithms, interfaces, and the circulation of information: Interrogating the epistemological challenges of Facebook,” KOME, volume 4, number 1, pp. 36–49.
doi: https://doi.org/10.17646/KOME.2016.13, accessed 31 December 2018.

S. Selber, 2004. Multiliteracies for a digital age. Carbondale: Southern Illinois University Press.

C. Selfe and R. Selfe, 1994. “The politics of the interface: Power and its exercise in electronic contact zones,” College Composition and Communication, volume 45, number 4, pp. 480–504.
doi: https://doi.org/10.2307/358761, accessed 3 September 2018.

E. Shearer, 2018. “Social media outpaces print newspapers in the U.S. as a news source,” Pew Research Center (10 December), at http://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/, accessed 31 December 2018.

C. Spigelman, 2004. Personally speaking: Experience as evidence in academic discourse. Carbondale: Southern Illinois University Press.

E. Tandoc, Jr., Z. Lim, and R. Ling, 2018. “Defining ‘fake news’: A typology of scholarly definitions,” Digital Journalism, volume 6, number 2, pp. 137–153.
doi: https://doi.org/10.1080/21670811.2017.1360143, accessed 21 December 2018.

C. Timberg and D. Harwell, 2018. “We studied thousands of anonymous posts about the Parkland attack — and found a conspiracy in the making,” Washington Post (27 February), at https://www.washingtonpost.com/business/economy/we-studied-thousands-of-anonymous-posts-about-the-parkland-attack---and-found-a-conspiracy-in-the-making/2018/02/27/04a856be-1b20-11e8-b2d9-08e748f892c0_story.html, accessed 8 April 2018.

J. van Dijck, 2013. The culture of connectivity: A critical history of social media. Oxford: Oxford University Press.

J. van Dijck and T. Poell, 2013. “Understanding social media logic,” Media and Communication, volume 1, number 1, pp. 2–14.
doi: https://doi.org/10.17645/mac.v1i1.70, accessed 21 December 2018.

B. Williams, 2008. “‘What South Park character are you?’: Popular culture, literacy, and online performances of identity,” Computers and Composition, volume 25, number 1, pp. 24–39.
doi: https://doi.org/10.1016/j.compcom.2007.09.005, accessed 13 May 2018.

W. Wolfram and N. Schilling, 2015. American English: Dialects and variation. Third edition. Malden, Mass.: Wiley-Blackwell.

X22Report, 2015a. “Welcome to the X22Report,” YouTube.com (2 January), at https://www.youtube.com/user/X22Report, accessed 31 December 2018.

X22Report, 2015b. “U.S. is now in the final stages of having its society closed down — Episode 601,” YouTube.com (25 February), at https://www.youtube.com/watch?v=wzK3RCFoCn8, accessed 31 December 2018.

 


Editorial history

Received 8 May 2018; revised 5 January 2019; accepted 12 January 2019; revised 8 February 2019.


Copyright © 2019, L. Corinne Jones. All Rights Reserved.

Listening to lies and legitimacy online: A proposal for digital rhetorical listening
by L. Corinne Jones.
First Monday, Volume 24, Number 3 - 4 March 2019
https://firstmonday.org/ojs/index.php/fm/article/download/9170/7723
doi: http://dx.doi.org/10.5210/fm.v24i3.9170