First Monday

Alt-right pipeline: Individual journeys to extremism online by Luke Munn



Abstract
The rise of the alt-right as a potent and sometimes violent political force has been well documented. Yet the journey of an individual towards upholding these ideologies is less well understood. Alt-righters are not instantly converted, but rather incrementally nudged along a particular medial pathway. Drawing on video testimonies, chat logs, and other studies, this paper explores the interaction between this alt-right “pipeline” and the psyche of a user. It suggests three overlapping cognitive phases that occur within this journey: normalization, acclimation, and dehumanization. Finally, the article examines the individual who has reached the end of this journey, an extremist who nevertheless remains largely unregistered within traditional terrorist classifications.

Contents

Introduction
The alt-right pipeline
Three cognitive phases
Collapse of the extremist/troll boundary

 


 

Introduction

Shortly before entering two mosques and killing 50 worshippers in Christchurch, New Zealand, the shooter posted one last time on the message board site 8chan: “Well lads, it’s time to stop shitposting and time to make a real life effort post.” Such an eruption of violence was shocking and certainly should be condemned. Yet the manifesto of the shooter suggests that this abominable act was the “logical” conclusion of a long journey in which racist ideas were constructed, formalized, and amplified. To frame the shooting as a “real life effort post” emerging from the “shitposting” of the preceding years is already to betray a fundamental cognitive shift linked with a medial environment.

What is the nature of this environment, and how did it transform the individual’s psychological or cognitive sphere? As the alt-right continues its rise, such a question is not just relevant to this case, but to comprehending the masses that have been drawn to it and the increase of right-wing violence these conditions produce. The everyday medial environment that surrounds and influences us is the Internet. Quietly but profoundly, this environment reengineers “whole realities that the user is then enabled to inhabit” and thus requires a form of re-ontologization [1]. This environment influences the way we “conceive and shape our sense of self” and the “way we understand and rationally organise our experience of the world” [2]. And this environment has the capacity to be transformative, to trigger “deep, structural shifts in the basic premises of our thought” [3]. But if the figures above stress the abilities of this environment to enhance human life, recent events have revealed more pathological influences anticipated by critical media theory (Rushkoff, 2011) or earlier strains of technopessimism (Ellul, 1964). The toxic Web can produce toxic selves.

Thus, while much research has analyzed the rise of the alt-right as a broad political movement (Neiwert, 2017; Nagle, 2017; Pollard, 2018), this article argues that the incursion of the alt-right is not only a set of meta-political maneuverings, but also a process that occurs at the micro-level of the individual. Alt-right radicalization is a slow colonization of the self, a steady infiltration of heart and mind. Wendy Chun has asserted that media exerts force over a “creepier, slower, more unnerving time,” effectively “disappearing from consciousness” [4]. Such an influence is subtle or even subliminal. It recalls Foucault’s earlier thoughts on power as something that “reaches into the very grain of individuals, touches their bodies and inserts itself into their actions and attitudes, their discourses, learning processes and everyday lives” [5]. Rather than a knee-jerk reaction against racism, then, the aim is to understand online radicalization from the inside out. Without this understanding, any programme striving to impede the alt-right’s growth risks mounting an ineffective intervention.

The alt-right describes radicalization as “taking the red pill,” a term now widely understood within mainstream journalism, academic research, and indeed broader Internet discourse (Read, 2019; Tait, 2017). For the uninitiated, the language of the red pill is drawn from the 1999 film The Matrix. The protagonist, Neo, an ordinary office worker, is presented with two pills. “You take the blue pill and the story ends. You wake in your bed and believe whatever you want to believe,” Morpheus says. “You take the red pill and you stay in Wonderland, and I show you how deep the rabbit hole goes” [6]. In the rhetoric of the alt-right, the red pill means awakening to the true nature of reality. What this entails exactly is always left productively undefined. The ideology to “wake up to” is a toxic melange of conspiracy theories, racism, and anti-Semitism: the mainstream media is propaganda; leftists and SJWs (social justice warriors) are imposing their agenda; the Jewish people are a powerful cabal that poses an existential threat to white nationalism. While such ideologies should certainly be condemned, understanding online radicalization necessitates that we understand their “logic” for alt-right initiates.

The origins of the red pill imply a decisive moment of conversion, a single event that radically transforms the subject forever. Before taking the red pill, the world is taken for granted, accepted as the normal nature of things. After taking the red pill, the world is revealed to be an all-encompassing lie, a fake, a forgery. In an instant, the individual’s eyes are opened and they finally see the world around them as it really is. Yet if the alt-right has certainly adopted the red pill concept, such a transformative rupture never really occurs. Instead, individuals speak about being red pilled on this issue or that issue. “Everyone makes their journey differently,” explains one alt-righter, “I definitely received some redpills from more extreme sources at the beginning, but the little ones along the way really helped” (Ronny TX, 2017). Rather than a single dose that irrevocably alters a belief, there are many red pills, taken over a longer period of time.

Individuals thus refer to a gradual process of becoming alt-right, a journey with many red-pilling waypoints. In this sense, this article extends scholarship that foregrounds radicalisation as a process. A recent study on “transitional journeys into extremism,” for example, describes transitions as “psychological processes involved in adaptation,” with the “reconstruction of a valued identity” being one key part of this process (Sieckelinck, et al., 2019). Similarly, earlier work by John Horgan [7] highlights the psychosocial journey leading an individual towards terrorism; rather than a dramatic conversion, he frames it as “a series of incremental steps (each of which if taken in isolation would rapidly diminish in overall significance).” Instead of attempting to identify the definitive profile of a radical, Horgan focuses on the pathway to radicalisation, one with “a sense of gradual progression” [8].

However, this scholarship fails to adequately encompass the role of the networked environment as a new psychosocial force in this radicalisation process. The networked environment — understood here as a broad ecology of platforms, sites, and services, along with their attendant communities — is barely mentioned. Yet, in a contemporary Western context at least, this networked milieu of social media, online gaming, message boards, and live streams is precisely the environment where time is spent and redpilling takes place. Each day, with each meme or message, the self becomes assimilated into an alt-right ideology. From Anders Breivik to Lane Davis, the Web was a formative sphere for recent far-right attackers. “From where did you receive/research/develop your beliefs?” reads a line in the Christchurch shooter’s FAQ-style manifesto. His answer was straightforward: “The Internet, of course.”

 

++++++++++

The alt-right pipeline

To demonstrate this process, I turn to video testimonies and comments from YouTube. Here, users testify that their worldviews or beliefs were changed as they were exposed to media content over a period of time. In a long confessional titled “My Descent Into the Alt-Right Pipeline,” a user called Faraday Speaks (2019) described his radicalization as a set of gradual moves. Lonely and depressed, he began watching self-help videos on YouTube in order to deal with some of his issues. “Then one day,” he recalls, “a small thumbnail appeared on my sidebar” in the page area where related videos are automatically recommended by the platform. The video was from Stefan Molyneux, the prolific anti-SJW podcaster and show host, and dealt tangentially with self-help. “This was about 2014 and his ideas were much milder then,” explains Faraday Speaks (2019). Indeed, as host of his own show, Molyneux regularly features guests who speak on a variety of topics, acting as a nexus for alt-right ideologies. Each guest provides a social connection to another key individual.

“From there,” Faraday Speaks continues, “I got introduced to people like [Steven] Crowder and [Ben] Shapiro” and basic “conservative principles” such as small government, banning abortion and same-sex marriage, tighter immigration policies, and so on. “And from there I got introduced to people like Lauren Southern and Gavin McInnes”: “By the end of it,” he finishes, “I’m listening to Jared Taylor talking about racial differences.” While these political positions are imprecise, Faraday Speaks’ pipeline indicates a general movement from centrist to far-right. Beginning with the broad humour of Steven Crowder, it moves to more overt white-rights advocates like Lauren Southern, and then to openly white supremacist figures such as Jared Taylor. There is a smooth, step-wise quality to this process, an incremental progression. Each figure is more strident than the last, each video more radical than the one preceding it.

In a report for the Data & Society Research Institute, Rebecca Lewis (2018) identifies exactly this web of YouTube connections, labelling it the “Alternative Influence Network.” Though disparate in their beliefs and styles, this assortment of Internet celebrities, scholars, comedians and pundits shares a collective disdain for progressive politics. United by this red thread, and comprising 65 political influencers across 81 channels, this universe of content is vast enough for Lewis to describe it as a “fully functioning media system” [9]. Lewis stresses how guest appearances and collaborative productions are a primary means of audience building on YouTube. These links, as Lewis argues [10], make “it easy for audience members to be incrementally exposed to, and come to trust, ever more extremist political positions.” Indeed this pattern is precisely what Faraday Speaks describes. After months of watching the show, Molyneux became an “authority figure” for him; he assumed that guests who appeared on the show had been “vetted” and “so then I would set them up as authority figures” (Faraday Speaks, 2019). Hosting a guest not only provides them with a platform for their ideas, but transfers to them a degree of trust.

YouTube augments such word-of-mouth suggestions with its recommendation system. In theory, a user can choose any video from millions of options. In practice, users are steered towards a very small set of related videos. In a paper on the high-level workings of the recommendation system, YouTube engineers explain that it comprises two stages. In the first stage, “the enormous YouTube corpus is winnowed down to hundreds of videos” that are termed candidates [11]. These candidates are then ranked by a second neural network, and the highest-ranked videos are presented to the user. In this way, the engineers can be “certain that the small number of videos appearing on the device are personalized and engaging for the user” [12]. Such recommendation engines strive to eliminate “pain points” or frictions within the user experience. On a platform, one thing should lead to another, automatically and effortlessly. Rather than requiring a manual search, for instance, users are recommended videos in the post-roll space, or related videos in the sidebar to the right. Consume content, and similar content will slide into place surrounding it, or even auto-play. Through engineering and design, these mechanisms privilege particular content. If platforms hold content on everything, they nevertheless steer users toward specific things.

Based on hundreds of signals, users are presented with content that is attractive by design: hooking into their interests, goals, and beliefs. However recommendation engines are not static entities, based on some fixed notion of our true self, but rather highly dynamic and updated in real-time. Your profile incorporates your history, but also whatever you just watched. As YouTube’s engineers explain, it must be “responsive enough to model newly uploaded content as well as the latest actions taken by the user” [13]. As content is consumed, a person’s cognitive sphere is shaped and new interests emerge.
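
This funnel structure can be made concrete with a brief sketch. The Python below is a minimal illustration and not YouTube’s actual system: the taste vectors, weights, and video names are hypothetical stand-ins, and the real models are deep neural networks trained on vast interaction logs rather than simple dot products. What the sketch preserves is the shape of the pipeline described above: a coarse pass winnows an enormous corpus to a few hundred candidates, a finer pass ranks them, and whatever was just watched is weighted so that the output stays “responsive” to the latest action.

```python
# A minimal sketch of a two-stage recommendation funnel of the kind Covington,
# et al. (2016) describe. Illustrative only: the vectors, weights, and names
# below are hypothetical, not YouTube's actual models or data.

import random
from typing import Dict, List


def dot(a: List[float], b: List[float]) -> float:
    return sum(x * y for x, y in zip(a, b))


def generate_candidates(user: List[float],
                        corpus: Dict[str, List[float]],
                        n: int = 200) -> List[str]:
    """Stage 1: winnow the 'enormous corpus' down to a few hundred rough matches."""
    return sorted(corpus, key=lambda vid: dot(user, corpus[vid]), reverse=True)[:n]


def rank(user: List[float],
         last_watched: str,
         candidates: List[str],
         corpus: Dict[str, List[float]],
         k: int = 10) -> List[str]:
    """Stage 2: a finer score orders the candidates; only the top few ever reach
    the sidebar or autoplay slot. Weighting the last watched video keeps the
    system responsive to whatever was just consumed."""
    def score(vid: str) -> float:
        return dot(user, corpus[vid]) + 0.5 * dot(corpus[last_watched], corpus[vid])
    return sorted(candidates, key=score, reverse=True)[:k]


# Toy corpus: each video reduced to a three-dimensional taste vector.
corpus = {f"video_{i}": [random.random() for _ in range(3)] for i in range(10_000)}
user_profile = [0.9, 0.1, 0.2]          # hypothetical long-term profile
just_watched = "video_42"               # the most recent action

shortlist = generate_candidates(user_profile, corpus)
print(rank(user_profile, just_watched, shortlist, corpus))   # the ~10 videos actually shown
```

Even in this toy form, the design choice is legible: only a handful of items ever reach the user, and the most recent view tilts which ones they are.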

Recommendation engines, then, are never “merely” technical but psychical in that they intersect with the mind. Recommendations are “the computational exploitation of a natural human desire: to look ‘behind the curtain,’ to dig deeper into something that engages us,” observes Zeynep Tufekci (2018): “As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.” Reacting to our dynamic desires, recommendation engines keep pace by serving up more intense content, which in turn feeds those desires and constructs new ones. As a New York Times investigation found: “the platform’s recommendation system consistently directed people toward extremist videos on the riots — then on to far-right videos on other subjects” (Naughton, 2018). In this sense, YouTube is not just a platform, but a pathway or pipeline.

Gradually, a user’s belief system is recalibrated. A progressive medial process accompanies an incremental psychological shift. In response to a video describing this process, commenters offered their own experiences. “I went really deep into that rabbit hole,” admitted one user: the atheist channels she was watching started making anti-feminist videos, “so I watched them, and then I found myself following more and more ‘skeptical’ creators, many of whom proclaimed to be some type of leftist or liberal” (Three Arrows, 2018). She began identifying with alt-right values, despising the type of LGBT person who might be open about their identity or talk about being discriminated against. “I never realized how much I’d been radicalized. It just all seemed like the common sense logic they said it was. I’d make half joking remarks about Jews and think little of saying slurs” (Three Arrows, 2018). Of note here is the mention of common sense and logic, terms pointing to gradual cognitive changes that eventually became “self-evident.” Another user tells a similar story. “I fell down the same hole when I was like 12–13,” she wrote; she started watching atheist videos in order to feel better about her lack of religious belief, but in those channels atheism and anti-SJW sentiment were always being “lumped together”; over time, these connections became ingrained in her own beliefs: “I became indoctrinated in that thought process because I was trying to follow one thing, and was constantly bombarded by the other” (Three Arrows, 2018). Both users, then, testify to a medial power that reshaped their thought processes incrementally. Yet rather than being deterministic, their stories demonstrate the intersection between a curious subject and the alt-right content made available via a platform. While technical affordances underpin this experience, a user is never coerced, but rather finds something compelling.

The content comprising the alt-right pipeline is incredibly varied. From Islamophobia to reproduction rates, immigration debates to feminist “takedowns,” the ideas are wide ranging. Far from being a single thread, there are any number of issues to dive into and identify with. Yet as Lewis noted, the system is ideologically cohesive, and, due to YouTube’s features, it is technically cohesive. In these technical environments, there is no sign indicating the switch from one ideology to another, no distinctive jolt when transitioning to the next waypoint in this process. The next video auto-plays. The next comment is shown. The next site is recommended. Based on the rules of recommendations, each piece of content must be familiar, suggested by a user’s previous history, but also novel, something not yet consumed. Thus, while these paths may fork and diverge, they ultimately shunt a user further along this pipeline. “You’ll take one piece of that, and that will take you to the next step and the next step and the next step,” Faraday Speaks (2019) explains, “and this is how you become radicalized.” Content, coordinated by algorithmic recommendations, turns radicalization into a stepwise process. Instead of a major “leap” into white supremacism, there are hundreds or thousands of micro-nudges over time.

While YouTube provides a prominent single example, this pipeline can bridge multiple sites and platforms. Journalist Robert Evans collected chat logs, forum posts, and social media comments in order to understand how 75 alt-right activists had been red pilled. From the “free speech” platform of Gab to gaming app Discord, what emerges from this material is a diverse ecosystem of online environments that each contribute to an individual’s radicalisation in their own distinct way. As Evans (2018) states: “We see a steady spiral, from arguments in comment sections to far-right YouTube personalities to ‘the_donald’ subreddit to 4chan’s /pol/ board and eventually to fascist Discord servers.” Rather than a sharp distinction between bright social media and the cesspit of the dark Web, such sites come together to form the “dark social Web” (Munn, 2019). This dense spectrum of right to far-right spaces offers a set of waypoints, allowing users to transition to more politically extreme environments without a sense of cognitive dissonance.

 

++++++++++

Three cognitive phases

Each individual’s journey along the alt-right pipeline is unique. Yet we might identify three phases shared by many users: normalization, acclimation, and dehumanization. Of course, these aspects of online hate and radicalization have appeared in other, earlier research (Oboler, 2008; Dalgaard-Nielsen, 2008; Smith, 2011; Oboler, 2014; Garpvall, 2017). Rather than claiming these are new concepts, the contribution here is a synthesis applied more explicitly to alt-right individuals. While these phases might be loosely mapped to the start, middle and end of an individual’s journey, they should not be seen as mutually exclusive or strictly linear. They may overlap or occur in more cyclical formations. Yet they begin to sketch out some of the key cognitive operations necessary to progress further along a far-right vector.

The first cognitive phase is normalization, a phase in which humour and irony play key roles. By themselves, concepts like the supremacy of the white race or the solution to the “Jewish Question” are too blunt, too forthright. While red-pillers pride themselves on having uncovered the harsh race-based reality, for those at the beginning of this journey, such overtly racist beliefs are repulsive. This ideology needs to be repackaged in the visual vernacular of the Web: animated GIFs, dumb memes, and clever references. The idea tumbler of the Internet provides the perfect environment for image or language play, for absurd juxtapositions and insider jokes. Impish and jocular, such practices trivialize and thus normalize racism and xenophobia. As one study asked (Schwarzenegger and Wagner, 2018): “can it be hate if it is fun?”

Memes, often under the guise of “edgy” humour, thus form a key medium for normalization. Far-right memes leverage technical functionality to increase their visibility and spread virally (Marwick and Lewis, 2017). Such memes are posted, adapted and reposted, being seen again and again. The incessant repetition of a meme produces familiarity. The first time a racial or misogynistic slur is encountered, it is shocking. The second time, the visceral disgust has been tempered. The third time, it is abhorrent but expected. And so on. As Fang and Woodhouse (2017) found, “when users post so many genocide and rape jokes, they become so detached from reality that they become susceptible to the messages of bonafide hate groups, a transformation referred to in forums as ‘irony poisoning.’” Shock cannot be sustained.

Because memes are often ridiculous or over the top, reposting them seems harmless. Sharing or even creating new versions becomes a way of reinforcing membership in the community. Yet the ideologies embedded within this content slowly edge their way into the psyche, normalizing fascist beliefs and transforming the individual, albeit at a subliminal level. “I saw ppl negging Jews so I joined in as a meme first off,” writes one alt-right initiate, “then all of a sudden it stopped being a meme” (FucknOathMate, 2017). In late 2018, one popular YouTuber recommended the “E;R” channel, which features Nazi propaganda behind a thin veneer of humour. When the channel creator was asked if he redpilled others, he responded: “Pretend to joke about it until the punchline /really/ lands” (Romano, 2018).

Even when packaged as memes, such racist or sexist joking is nevertheless fraught and must be coupled with irony. Irony provides plausible deniability, a key benefit for alt-right initiates within a contested and highly controversial space. Intentions are shrouded online. The distinction between seriousness and satire becomes vague and uncertain. “The unindoctrinated should not be able to tell if we are joking or not,” states the Daily Stormer writing guide (Anglin, 2017). This allows racist, sexist, or xenophobic statements to be made, but also enables a hasty retreat when the speaker comes under fire. “Irony has a strategic function,” asserts researcher Alice Marwick, “It allows people to disclaim a real commitment to far-right ideas while still espousing them” (Wilson, 2017).

For example, YouTube star PewDiePie is regarded by some as a gateway to the alt-right (Ward, 2019). With 90 million subscribers, he is both highly influential and no stranger to controversy. He has hired men to carry a “death to all Jews” sign, has used the n-word in one stream, and has called a female streamer a “crybaby and an idiot” for demanding equal pay (Romano, 2018). These actions have led to criticism and contracts being terminated. But the streamer is also affable and funny, emanating a carefree attitude, the perfect conduit for ironic racism. That meme was produced tongue-in-cheek. That content was shared to show how ridiculous racists are. Quit being overly sensitive. “Far from a harmless joke,” concludes one journalist, “I’ve come to understand that ‘ironic racism’ is integral to the alt-right’s indoctrination strategy” (Di Placido, 2019).

The second cognitive phase is acclimation. While acclimation overlaps with normalization, the term foregrounds how a user becomes successively conditioned to a series of environments. The common journalistic phrase of becoming “radicalized online” (e.g., Schager, 2017) tends to suggest a monolithic zone of dangerous content. Yet the flexibility of digital platforms enables a rich variety of spaces that are technically linked and incrementally extreme. As one user admitted, anti-feminist YouTuber Sargon of Akkad was an “‘easier step’ away from liberal views than outright Nazism. Once he’d taken that step and gotten used to Sargon’s rhetoric, it was easier for him to get used to the more extreme atmosphere of /pol/” (Evans, 2018). Like climbers or divers, users seem to pause at key stages of their alt-right journey, slowly becoming accustomed to the ideologies of each new atmosphere.

Acclimation occurs through psychological habituation rather than conscious, rational affirmation. After all, the sheer volume of right-wing content and the velocity at which it is posted ensures that each claim and counterclaim can never be individually assessed. As Franco Berardi notes, while the online sphere is unlimited, the mind is limited — if cybertime is infinite, our cognitive time is highly constrained [14]. Any ability to hold up each “fact” to scrutiny — to rationally process each trope and ethically weigh each stereotype — quickly becomes overwhelmed. Inundated in this way, the mind responds by becoming desensitized. Content and claims wash over the user. Racism moves from being a noticeable glitch to an environmental default, an accepted background to any online activity (Nakamura, 2013). Empathy dries up and the user becomes psychologically numbed. The mind adjusts to this new normal.

Acclimation to one stage establishes a new cognitive baseline for what is acceptable. Having grown accustomed to this environment, the user pushes on to the next stage, with more incendiary imagery and more provocative discourse. “It was people like Steven Crowder, Paul Joseph Watson, Milo Yiannopoulos, Black Pigeon Speaks, etc. that got me to where I am now,” explains one user: “They redpilled me a little bit, then I moved further and further right until I could no longer stand them” (Ronny TX, 2017). As a user progresses further along this pipeline, merely conservative pundits like Steven Crowder now appear too centrist. While their views once seemed controversial, now they are no longer outspoken enough. Milo Yiannopoulos, a key figure in the early alt-right movement, is now derided by some as “alt-lite,” a watered-down version that champions freedom of speech, for example, but stops short of embracing the so-called race realism of the alt-right (Anti-Defamation League, 2019). For those who have been remade in the alt-right image, these less strident figures may hold some of the same goals, such as a revitalization of “Western civilization” and traditional Christian values, but will make little headway until they swallow some of the more difficult pills, such as white supremacy or the “Jewish question.” The acclimated user thus proceeds through each successive stage, searching for less adulterated rhetoric and more uncompromising figures.

The third cognitive phase is dehumanization. Laid bare, the alt-right vision includes abhorrent actions directed at certain segments of the population, from deportation to genocide. Yet humans have rights, claims to life and liberty. Dehumanization thus becomes an important “psychological prerequisite” of violence, as in earlier fascist incarnations like National Socialism (Steizinger, 2018). Individuals on an alt-right journey must shift the “other” into another ethical category altogether. The Christchurch shooter’s manifesto, for example, refers to hordes of faceless enemies who invade white homelands: “Any invader you kill, of any age, is one less enemy your children will have to face.” His language clumps individuals into an anonymous army of attackers. In a similar vein, YouTube videos from prominent alt-right figures speak about encountering “a transgender.” Their tone and wording suggest coming into contact with a disgusting or ridiculous form of life, a lesser species.

Such dehumanizing rhetoric joins broader tropes in the alt-right community like the NPC, or non-playable character. In a videogame, an NPC is any character who cannot be controlled and who is not the protagonist. NPCs only exist to further the narrative arc of the protagonist. Game developers will often reuse the same sprites or geometry, meaning NPCs are carbon copies of each other. And their movement is typically tightly scripted, allowing players to discover their patterns and defeat them. For the alt-right, the NPC perfectly epitomizes the broader populace who have not yet been red pilled — generic clones with no agency who do what they are programmed to do.
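
For readers unfamiliar with the trope’s gaming roots, the toy sketch below (hypothetical, not drawn from any particular game or engine) shows what “tightly scripted” means in practice: the character simply cycles through fixed waypoints, which is precisely what lets players learn and exploit its pattern.

```python
# A toy illustration of "tightly scripted" NPC behaviour: a patrol that cycles
# through fixed waypoints forever. The route and names here are hypothetical.

from itertools import cycle
from typing import Iterator, Tuple

WAYPOINTS = [(0, 0), (4, 0), (4, 3), (0, 3)]   # a fixed rectangular patrol route


def npc_patrol(ticks: int) -> Iterator[Tuple[int, int]]:
    """Yield the NPC's position at each tick; because the pattern simply
    repeats, a player can learn it and exploit it."""
    route = cycle(WAYPOINTS)
    for _ in range(ticks):
        yield next(route)


print(list(npc_patrol(8)))   # the same four points, twice over
```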

Thus, whether demeaned as a lesser lifeform or dismissed as a scripted automaton, alt-right tropes work to dehumanize. Dehumanization ensures that the alt-right initiate is never confronted with a human — a peer with a name, a claim, and a story. Instead, the “other” of the feminist or the Marxist, the SJW or the Jew is an indistinct assailant, an averaged figure that emerges from the constellation of media comprising an individual’s online environment. Thousands of sound bites, quotes and ad hominem attacks are lifted from their original context and assembled together in the mind of the user over an extended time period. The composited result is an archetype which is monstrous but vague, able to justify the equally intense and imprecise hate levelled at it. Berardi [15] emphasizes that: “In these online echo chambers, real people are displaced by the phobic ghosts of otherness, and the possibility of tolerant, democratic debate is finally obliterated.”

Dehumanizers come to understand that their targets lack the special essence required to be human. Their human form is deceptive, because they are merely “humanoid or quasi-human beings — as human in appearance only” [16]. Because of their race, religion, gender or lifestyle, they have forfeited their humanity, and thus forfeited their rights. This cognitive operation shifts the enemy out of the category of ethical consideration altogether. As Zygmunt Bauman [17] observes: “Dehumanized objects cannot possibly possess a ‘cause’, much less a ‘just’ one; they have no ‘interests’ to be considered, indeed no claim to subjectivity.” Dehumanizing rhetoric transforms rights-bearing subjects into apolitical objects. It clears the way for its targets to be mistreated, as in rape threats, or managed by others, as in the deportation schemes of racial utopias. Through this cognitive shift, alt-right individuals can support such activities while retaining their moral superiority.

 

++++++++++

Collapse of the extremist/troll boundary

By the time an individual has proceeded far down the alt-right pipeline, his beliefs have been thoroughly transformed. The fascist creep has completed its gradual colonization. However, our language to describe this person seems highly constrained. He is not labelled a terrorist, because he hasn’t yet committed an act of physical violence in the “real world.” He is not termed a white supremacist, because he has not formally joined any official hate group. As a recent governmental report on radicalization suggested: “radical individuals may hold hateful or anti-social ideas that many others might find offensive or disturbing. Nevertheless, if their ideas do not extend to using violence or advocating the use of violence, they should not be considered violent extremists” [18]. Such a person with extreme beliefs remains nebulous and unnamed, slipping between terminology established long before the Internet.

Yet what recent events reveal is that these conventional criteria for categorising an individual no longer apply. Referring to an alt-right activist who began lashing out in acts of verbal and physical violence, one writer noted that “the extremist/troll boundary had started to collapse” (Bernstein, 2018). Whether the doxxing and death threats of GamerGate or the trolling of the Trump era, an individual with such radical views can certainly threaten and oppress others in “real” ways (Lees, 2016). When coordinated and combined with a community of other like-minded individuals, this aggression is only amplified. If such violence is textual or verbal rather than physical, its ability to impinge on the lives of others — many of whom are already marginalised in particular ways — is no less substantive.

Moreover, these individuals can lash out in acts of corporeal, physical violence. After counting violent incidents in the last decade and categorizing them according to ideology, a Washington Post article declared that “violence by white supremacists and other far-right attackers has been on the rise since Barack Obama’s presidency — and has surged since President Trump took office” (Lowery, et al., 2018). In July of 2017, Lane Davis, who produced hundreds of videos and articles for far-right publications, accused his parents of being “leftist pedophiles” before stabbing his father in the chest and neck (Bernstein, 2018). In October of 2018, a man walked past dozens of white shoppers before shooting two African-American customers in the back of the head at a grocery store in Kentucky; prior to being arrested, he confided to a bystander that “whites don’t shoot whites” (Zraick and Stevens, 2018). And in the same month, another man unloaded a semiautomatic rifle on Jewish worshippers at a Pittsburgh synagogue; just prior to the attack, he posted on social media that he couldn’t “sit by and watch my people get slaughtered. Screw your optics, I’m going in.” With 11 killed and six wounded, it remains the deadliest anti-Semitic attack in United States history (Gormly, et al., 2018).

None of these men were registered members of an official hate group. None of them were formally affiliated with a terrorist organisation. While some had a history of domestic violence, these prior assaults were not terror in the sense of an ideologically motivated attack. In other words, all the way up to their attacks, they would qualify as extremists, not terrorists. Indeed, while more recent terrorism scholarship has acknowledged the function of the Web in ISIS recruitment, for example, its role is downplayed as a mere precursor to the “real” commitment of meeting other members face to face. “The Internet provided the first steps into the movement before they actually had any off-line contacts to movement members,” asserts Daniel Koehler [19]. Yet the spate of recent alt-right attacks demonstrates that acts of terror require neither a cohesive “movement” nor any formal “members.”

Of course, alt-right individuals are relational in the sense of connecting to extensive social infrastructures and online communities. But they do not belong to an organisation or even a cell. Indeed, these young and often unemployed men intentionally withdraw from those around them, embracing their new social isolation as “volcels” (voluntary celibates) or “autistic loners.” The Facebook page of James Fields, the Charlottesville car attacker, was adorned with jokey images of “weaponized autism” (Fang and Woodhouse, 2017). And a recent comment on 8chan’s notorious /pol/ board mirrored such sentiments. In response to a Congressional finding that designated white supremacist groups as an emergent terrorist threat, one user countered: “autistic lone wolves (don’t) unite. Just shoot more people lmao.” While this antisocial stance certainly provides a foundation for trollish behaviour, the leaderless, autonomous individual is also a logical next phase for terrorism in an environment of increasing mass surveillance (Michael, 2012). Neither hierarchical like the IRA or the PLO in the 1970s and 1980s, nor networked like al-Qaeda in the 1990s and 2000s, these attackers are truly lone wolves.

The recent litany of alt-right violence thus appears highly unpredictable. Because the racism and xenophobia of these alt-right attackers were developed, nurtured, and sustained entirely within the technical environments of the Web, they never trigger any of the conventional red flags. These conditions enable “individuals to act anonymously online while keeping their beliefs from public view,” writes security analyst Chris Hawkins (2019). However toxic their hate speech is, these individuals remain within the algorithmic designation of the general population. As normal subjects, they are not selected for special surveillance or scrutiny and their activities are never anticipated by data-driven security schemes. Hawkins (2019) observes that “the absence of an organised structure or parent group ... makes far-right extremism more difficult for security services” to detect and disrupt. Attacks do not conform to any cohesive pattern or overarching plan, but instead appear random, isolated, entirely unpredictable. Unregistered in the “real world,” extremism simmers online until it erupts — suddenly and unexpectedly.

At the same time, however, there is an inevitability to such attacks. Build up enough anger in enough people, and eventually it will spill out into an act of terror. The conditions that fuel outrage, that incite violence, that perpetuate racist stereotypes, will sooner or later land upon an individual who takes them to their logical extreme. A 2011 Daily Kos article (G2geek, 2011) anticipated exactly this form of violence: “Stochastic terrorism is the use of mass communications to stir up random lone wolves to carry out violent or terrorist acts that are statistically predictable but individually unpredictable.” [20] In the wake of far-right attacks, and the Christchurch shooting in particular, the term has been reinvigorated. Stochastic terrorism captures the rising frequency of alt-right violence alongside its “scattershot” quality. Each attacker chooses his own target, his own location, his own motive. Yet these attacks, while not directly ordered by the broader alt-right movement, dovetail smoothly into its broader vector of racialized hate. As Joshua Clover (2019) asserts, stochastic terrorism:

suggests that when some dramatic and singular act happens, it will look like an individual killer obeying their own disturbance, their own nature or character or conscious beliefs — but their actions will nonetheless be an effect of the larger structure, an assured product of the mesh that holds us all.

To conclude, then, this article has investigated the intersection between those “conscious beliefs” and the “larger structure” — between the cognitive sphere of an alt-right initiate and the technological sphere of the alt-right pipeline. This exploration begins to reveal how medial environments are able to intensify hatred within an individual over time. Such conditioning is not deterministic, prescribed to a docile subject, but rather augments the decision-making capabilities of an individual, providing affordances that both induce and fulfill their desires. With this disclaimer in mind, the result should nevertheless foreground the transformative power of technical environments to reshape the psyche.

Racists do not arrive ready-made, but emerge from a dynamic environment that will “continuously change from one moment to the other” in order to foster particular patterns of thought [21]. Individuals with alt-right beliefs are ordinary people who — exposed to an environment over time — have arrived at ideas they regard as common sense, self-evident. In the end, it might be this banality, this normalcy that is most threatening about the alt-right. Despite their rhetoric of persecution, these white men are not a marginalized group. Indeed, a recent report suggested that 11 million Americans hold a strong sense of white identity, a belief in the importance of white solidarity, and a sense of white victimization, making them alt-right allies in belief if not in practice (Hawley, 2018). Situated within this broader segment of the population, the alt-right individual becomes a normal radical, a mainstream extremist.

 

About the author

Based in Aotearoa New Zealand, Luke Munn uses both practice-based and theoretical approaches to explore the intersections of digital cultures, investigating how technical environments shape the political and social capacities of the everyday. He is currently completing a Ph.D. at Western Sydney University on algorithmic power.
E-mail: luke [dot] munn [at] gmail [dot] com

 

Acknowledgments

Thanks to First Monday’s anonymous reviewers for numerous references and helpful comments. Thanks to Geert Lovink for prompting me to write initially on this topic. Thanks also to Liam Magee for a discussion on the alt-right, which provided a historical perspective on terrorism and the lone wolf.

To avoid directing traffic to this material, quotes from anonymous 8chan users and the manifesto of an alt-right attacker are intentionally not linked.

 

Notes

1. Floridi, 2016, p. 97.

2. Hernández-Ramírez, 2017, p. 54.

3. Gualeni, 2015, p. 73.

4. Chun, 2017, p. x.

5. Foucault, 1980, p. 39.

6. Wachowski and Wachowski, 1998, p. 29.

7. Horgan, 2005, p. 82.

8. Horgan, 2008, p. 89.

9. Lewis, 2018, p. 4.

10. Lewis, 2018, p. 36.

11. Covington, et al., 2016, p. 192.

12. Ibid.

13. Covington, et al., 2016, p. 192.

14. Berardi, 2008, p. 39.

15. Berardi, 2015, p. 96.

16. Smith, 2011, p. 431.

17. Bauman, 2000, p. 96.

18. Angus, 2016, p. 2. Along with setting a “hard threshold” for extremists, the report is disturbingly blinkered in its focus on Muslim extremists. It outlines steps to document all known prayer groups and other responses to the threat of “jihadists” while far-right strains such as nationalism, populism, and white supremacism are barely mentioned.

19. Koehler, 2014, p. 121, emphasis mine.

20. The first mention of stochastic terrorism actually appears to be in a 2002 article, where the author compares terrorist risk with earthquake prediction: “No seismologist is capable of predicting the time, place, and magnitude of the next major earthquake in California, but it is possible for a seismic risk analyst to evaluate the annual exceedance probability of loss to a California property portfolio.” Gordon Woo, “Quantitative terrorism risk assessment,” Journal of Risk Finance, volume 4, number 1 (2002), p. 7. However that article focused on al-Qaeda cells as a primary unit and largely ignored the internet, while the Daily Kos article is widely referenced and brings together “lone wolves” with “the use of mass communications.”
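
The statistical core of the term can be illustrated with a toy simulation (all figures hypothetical, not a model of any real population): give a very large population a tiny, identical probability of acting, and the total number of incidents stays stable from run to run, even though which individuals act is effectively random.

```python
# A toy numerical illustration of "statistically predictable but individually
# unpredictable". All figures are hypothetical.

import random

POPULATION = 1_000_000   # hypothetical audience size
P_ACT = 5e-6             # hypothetical per-person yearly probability of acting


def simulate_year(seed: int) -> int:
    rng = random.Random(seed)
    return sum(rng.random() < P_ACT for _ in range(POPULATION))


# Expected count is POPULATION * P_ACT = 5; observed totals cluster around it,
# while the identity of those who act changes from run to run.
print([simulate_year(s) for s in range(5)])
```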

21. Deleuze, 1992, p. 4.

 

References

A. Anglin, 2017. “Writing guide,” Daily Stormer, at https://www.documentcloud.org/documents/4325810-Writers.html, accessed 22 May 2019.

C. Angus, 2016. “Radicalisation and violent extremism: Causes and responses,” New South Wales Parliamentary Research Service, e-brief, 1/2016, at https://www.parliament.nsw.gov.au/researchpapers/Documents/radicalisation-and-violent-extremism-causes-and-/Radicalisation%20eBrief.pdf, accessed 22 May 2019.

Anti-Defamation League, 2019. “From alt right to alt lite: Naming the hate,” at https://www.adl.org/resources/backgrounders/from-alt-right-to-alt-lite-naming-the-hate, accessed 25 April 2019.

Z. Bauman, 2000. Modernity and the Holocaust. Ithaca, N.Y.: Cornell University Press.

F. Berardi, 2015. Heroes: Mass murder and suicide. London: Verso.

F. Berardi, 2008. “(T)error and poetry,” Radical Philosophy, number 149, pp. 39–45, at https://www.radicalphilosophyarchive.com/article/terror-and-poetry, accessed 22 May 2019.

J. Bernstein, 2018. “Alt-right troll to father killer: The unraveling of Lane Davis,” BuzzFeed News (18 July), at https://www.buzzfeednews.com/article/josephbernstein/lane-davis-ralph-retort-seattle4truth-alt-right, accessed 24 April 2019.

W.H.K. Chun, 2017. Updating to remain the same: Habitual new media. Cambridge, Mass.: MIT Press.

J. Clover, 2019. “Four notes on stochastic terrorism,” Popula (3 April), at https://popula.com/2019/04/03/four-notes-on-stochastic-terrorism/, accessed 24 April 2019.

P. Covington, J. Adams, and E. Sargin, 2016. “Deep neural networks for YouTube recommendations,” RecSys ’16: Proceedings of the Tenth ACM Conference on Recommender Systems, pp. 191–198.
doi: https://doi.org/10.1145/2959100.2959190, accessed 22 May 2019.

A. Dalgaard-Nielsen, 2008. “Studying violent radicalization in Europe I: The potential contribution of socio-psychological and psychological approaches,” Danish Institute for International Studies, Working Paper, number 2008/2, at http://pure.diis.dk/ws/files/56375/WP08_2_Studying_Violent_Radicalization_in_Europe_I_The_Potential_Contribution_of_Social_Movement_Theory.pdf, accessed 22 May 2019.

G. Deleuze, 1992. “Postscript on the societies of control,” October, volume 59, pp. 3–7.

D. Di Placido, 2019. “What I don’t understand about PewDiePie,” Forbes (31 March), at https://www.forbes.com/sites/danidiplacido/2019/03/31/what-i-dont-understand-about-pewdiepie/, accessed 22 May 2019.

J. Ellul, 1964. The technological society. Translated by J. Wilkinson. New York: Vintage Books.

R. Evans, 2018. “From memes to infowars: How 75 fascist activists were ‘red-pilled’,” Bellingcat (11 October), at https://www.bellingcat.com/news/americas/2018/10/11/memes-infowars-75-fascist-activists-red-pilled/, accessed 24 April 2019.

L. Fang and L. Woodhouse, 2017. “How white nationalism became normal online,” Intercept (25 August), at https://theintercept.com/2017/08/25/video-how-white-nationalism-became-normal-online/, accessed 24 April 2019.

Faraday Speaks, 2019. “My descent into the alt-right pipeline” (21 March), at https://www.youtube.com/watch?v=sfLa64_zLrU, accessed 22 May 2019.

L. Floridi, 2016. The fourth revolution: How the infosphere is reshaping human reality. Oxford: Oxford University Press.

M. Foucault, 1980. “Prison talk,” In: C. Gordon (editor). Power/knowledge: Selected interviews and other writings, 1972–1977. New York: Pantheon, pp. 37–54.

FucknOathMate, 2017. “Message from FucknOathMate in Vibrant Diversity #general,” Unicorn Riot: Discord Leaks (29 January), at https://discordleaks.unicornriot.ninja/discord/view/754755?q=red+pilled#msg, accessed 26 April 2019.

J. Garpvall, 2017. “‘I’m tired of being sh-t on for being white’: Collective identity construction in the Alt-Right movement,” Master’s thesis, Swedish Defence University, at http://urn.kb.se/resolve?urn=urn:nbn:se:fhs:diva-6830, accessed 22 May 2019.

G2geek, 2011. “Stochastic terrorism: Triggering the shooters,” Daily Kos (10 January), at https://www.dailykos.com/story/2011/1/10/934890/-, accessed 26 April 2019.

K. Gormly, A. Selk, J. Achenbach, M. Berman, and A. Horton, 2018. “Pittsburgh shooting: Gunman kills 11, facing federal hate crime investigation,” Washington Post (27 October), at https://www.washingtonpost.com/nation/2018/10/27/pittsburgh-police-responding-active-shooting-squirrel-hill-area/, accessed 22 May 2019.

S. Gualeni, 2015. Virtual worlds as philosophical tools: How to philosophize with a digital hammer. New York: Palgrave Macmillan.

C. Hawkins, 2019. “The threat from far-right extremism: Rising, less predictable and hard to determine,” IHS Markit (22 March), at https://ihsmarkit.com/research-analysis/the-threat-from-far-right-extremism-rising.html, accessed 22 May 2019.

G. Hawley, 2018. “The demography of the alt-right,” Institute for Family Studies (9 August), at https://ifstudies.org/blog/the-demography-of-the-alt-right, accessed 22 May 2019.

R. Hernández-Ramírez, 2017. “Technology and self-modification: Understanding technologies of the self after Foucault,” Journal of Science and Technology of the Arts, volume 9, number 3, at http://artes.ucp.pt/citarj/article/view/423, accessed 22 May 2019.
doi: https://doi.org/10.7559/citarj.v9i3.423, accessed 22 May 2019.

J. Horgan, 2008. “From profiles to pathways and roots to routes: Perspectives from psychology on radicalization into terrorism,” Annals of the American Academy of Political and Social Science, volume 618, number 1, pp. 80–94.
doi: https://doi.org/10.1177/0002716208317539, accessed 22 May 2019.

J. Horgan, 2005. The psychology of terrorism. London: Routledge.

D. Koehler, 2014. “The radical online: Individual radicalization processes and the role of the Internet,” Journal for Deradicalization, number 1, at http://journals.sfu.ca/jd/index.php/jd/article/view/8, accessed 22 May 2019.

M. Lees, 2016. “What Gamergate should have taught us about the ‘alt-right’,” Guardian (1 December), at https://www.theguardian.com/technology/2016/dec/01/gamergate-alt-right-hate-trump, accessed 22 May 2019.

R. Lewis, 2018. “Alternative influence: Broadcasting the reactionary right on YouTube,” Data & Society Research Institute, at https://datasociety.net/wp-content/uploads/2018/09/DS_Alternative_Influence.pdf, accessed 22 May 2019.

W. Lowery, K. Kindy, and A. Ba Tran, 2018. “In the United States, right-wing violence is on the rise,” Washington Post (25 November), at https://www.washingtonpost.com/national/in-the-united-states-right-wing-violence-is-on-the-rise/2018/11/25/61f7f24a-deb4-11e8-85df-7a6b4d25cfbb_story.html, accessed 22 May 2019.

A. Marwick and R. Lewis, 2017. “Media manipulation and disinformation online,” Data & Society Research Institute, at https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf, accessed 22 May 2019.

G. Michael, 2012. “Leaderless resistance: The new face of terrorism,” Defence Studies, volume 12, number 2, pp. 257–282.
doi: https://doi.org/10.1080/14702436.2012.699724, accessed 22 May 2019.

L. Munn, 2019. “Algorithmic hate: Brenton Tarrant and the dark social Web,” Institute of Network Cultures (19 March), at http://networkcultures.org/blog/2019/03/19/luke-munn-algorithmic-hate-brenton-tarrant-and-the-dark-social-web/, accessed 26 April 2019.

A. Nagle, 2017. Kill all normies: The online culture wars from Tumblr and 4chan to the alt-right and Trump. Lanham: John Hunt Publishing.

L. Nakamura, 2013. “Glitch racism: Networks as actors within vernacular Internet theory,” Culture Digitally (10 December), at http://culturedigitally.org/2013/12/glitch-racism-networks-as-actors-within-vernacular-internet-theory/, accessed 30 April 2019.

J. Naughton, 2018. “However extreme your views, you’re never hardcore enough for YouTube,” Guardian (23 September), at https://www.theguardian.com/commentisfree/2018/sep/23/how-youtube-takes-you-to-extremes-when-it-comes-to-major-news-events, accessed 22 May 2019.

D. Neiwert, 2017. Alt-America: The rise of the radical right in the age of Trump. London: Verso.

A. Oboler, 2014. “The antisemitic meme of the Jew,” Online Hate Prevention Institute (6 February), at http://ohpi.org.au/the-antisemitic-meme-of-the-jew/, accessed 22 May 2019.

A. Oboler, 2008. “Online antisemitism 2.0: ‘Social antisemitism’ on the ‘social Web’,” Jerusalem Center for Public Affairs, number 67 (1 April), at http://jcpa.org/article/online-antisemitism-2-0-social-antisemitism-on-the-social-web/, accessed 22 May 2019.

T. Pollard, 2018. “Alt-right transgressions in the age of Trump,” Perspectives on Global Development and Technology, volume 17, numbers 1–2, pp. 76–88.
doi: https://doi.org/10.1163/15691497-12341467, accessed 22 May 2019.

M. Read, 2019. “How The Matrix’s red pill became the Internet’s delusional drug of choice,” Vulture (8 February), at https://www.vulture.com/2019/02/the-matrix-red-pill-internet-delusional-drug.html, accessed 22 May 2019.

A. Romano, 2018. “YouTube’s most popular user amplified anti-Semitic rhetoric. Again,” Vox (13 December), at https://www.vox.com/2018/12/13/18136253/pewdiepie-vs-tseries-links-to-white-supremacist-alt-right-redpill, accessed 18 March 2019.

Ronny TX, 2017. “Message from Ronny TX in Southern Front #general,” Unicorn Riot: Discord Leaks (23 July), at https://discordleaks.unicornriot.ninja/discord/view/197992?q=redpilled#msg, accessed 25 April 2019.

D. Rushkoff, 2011. Program or be programmed: Ten commands for a digital age. Berkeley, Calif.: Soft Skull Press.

N. Schager, 2017. “How Islamic terrorists are being radicalized online,” Daily Beast (25 March), at https://www.thedailybeast.com/articles/2017/03/25/how-islamic-terrorists-are-being-radicalized-online, accessed 25 April 2019.

C. Schwarzenegger and A. Wagner, 2018. “Can it be hate if it is fun? Discursive ensembles of hatred and laughter in extreme right satire on Facebook,” Studies in Communication and Media, volume 7, number 4, pp. 473–498.
doi: https://doi.org/10.5771/2192-4007-2018-4-473, accessed 22 May 2019.

S. Sieckelinck, E. Sikkens, M. van San, S. Kotnis, and M. De Winter, 2019. “Transitional journeys into and out of extremism. A biographical approach,” Studies in Conflict & Terrorism, volume 42, number 7, pp. 662–682.
doi: https://doi.org/10.1080/1057610X.2017.1407075, accessed 22 May 2019.

D.L. Smith, 2011. Less than human: Why we demean, enslave, and exterminate others. New York: St. Martin’s Press.

J. Steizinger, 2018. “The significance of dehumanization: Nazi ideology and its psychological consequences,” Politics, Religion & Ideology, volume 19, number 2, pp. 139–157.
doi: https://doi.org/10.1080/21567689.2018.1425144, accessed 22 May 2019.

A. Tait, 2017. “Spitting out the Red Pill: Former misogynists reveal how they were radicalised online,” New Statesman (28 February), at https://www.newstatesman.com/science-tech/internet/2017/02/reddit-the-red-pill-interview-how-misogyny-spreads-online, accessed 22 May 2019.

Three Arrows, 2018. “How to fall down the anti-SJW rabbit hole” (2 November), at https://www.youtube.com/watch?v=69obN625Fjs, accessed 22 May 2019.

Z. Tufekci, 2018. “YouTube, the great radicalizer,” New York Times (10 March), at https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html, accessed 22 May 2019.

L. Wachowski and A. Wachowski, 1998. The Matrix (29 March), at https://www.dailyscript.com/scripts/the_matrix.pdf, accessed 22 May 2019.

A. Ward, 2019. “PewDiePie’s alt-right ties are impossible to ignore,” Daily Dot (18 March), at https://www.dailydot.com/irl/pewdiepie-new-zealand-alt-right-nazi/, accessed 22 May 2019.

J. Wilson, 2017. “Hiding in plain sight: How the ‘alt-right’ is weaponizing irony to spread fascism,” Guardian (23 May), at https://www.theguardian.com/technology/2017/may/23/alt-right-online-humor-as-a-weapon-facism, accessed 22 May 2019.

G. Woo, 2002. “Quantitative terrorism risk assessment,” Journal of Risk Finance, volume 4, number 1, pp. 7–14.
doi: https://doi.org/10.1108/eb022949, accessed 22 May 2019.

K. Zraick and M. Stevens, 2018. “Kroger shooting suspect tried to enter black church before killing 2 in Kentucky, police say,” New York Times (25 October), at https://www.nytimes.com/2018/10/25/us/louisville-kroger-shooting.html, accessed 22 May 2019.

 


Editorial history

Received 13 May 2019; revised 16 May 2019; accepted 20 May 2019.


Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial 2.0 Generic License.

Alt-right pipeline: Individual journeys to extremism online
by Luke Munn.
First Monday, Volume 24, Number 6 - 3 June 2019
https://firstmonday.org/ojs/index.php/fm/article/download/10108/7920
doi: http://dx.doi.org/10.5210/fm.v24i6.10108