Cyber Pearl Harbor: Analogy, fear, and the framing of cyber security threats in the United States, 1991-2016
by Sean Lawson and Michael K. Middleton
First Monday



Abstract
During the two and a half decades leading up to the Russian cyber attacks on the 2016 U.S. presidential election, public policy discourse typically framed cybersecurity using metaphors and analogies to war and tended to focus on catastrophic doom scenarios involving cyber attacks against critical infrastructure. In this discourse, the so-called “cyber Pearl Harbor” attack was always supposedly just around the corner. Since 2016, however, many have argued that fixation on cyber Pearl Harbor-like scenarios was an inaccurate framing that left the United States looking in the wrong direction when Russia struck. This essay traces the use of the cyber Pearl Harbor analogy and metaphor over the 25-year period preceding the Russian cyber attacks of 2016. It argues that cyber Pearl Harbor has been a consistent feature of U.S. cybersecurity discourse with a largely stable meaning focused on catastrophic physical impacts. Government officials have been primarily responsible for driving these concerns, with news media uncritically transmitting their claims. This is despite the fact that such claims were often ambiguous about just who might carry out such an attack and often lacked supporting evidence.

Contents

Introduction
Cyber Pearl Harbor in the securitization of cyberspace
The discursive construction of cyber Pearl Harbor: Sources and methods
The emergence and evolution of cyber Pearl Harbor
Cyber Pearl Harbor in the news
Analogy, framing, and fear
Conclusion

 


 

Introduction

Though cybersecurity has been a growing concern in the United States for decades, Russian interference in the 2016 U.S. presidential election has focused public attention on issues of cybersecurity like never before. As a result, a number of observers have begun to question the dominant thinking about cyber conflict in the U.S. national security discourse in the years leading up to the 2016 Russian cyber attacks. That thinking typically framed cybersecurity using metaphors and analogies to war and the military and tended to focus on catastrophic doom scenarios involving cyber attacks against critical infrastructure. In this discourse, the so-called “cyber Pearl Harbor” attack was always supposedly just around the corner. But even in the months leading up to the Russian cyber attacks on the 2016 election, some influential voices were beginning to raise serious doubts about this dominant conception of cyber threats. Since 2016, the number of voices critical of the cyber Pearl Harbor idea has grown, with many now arguing that fixation on cyber Pearl Harbor-like scenarios was a distraction that left the United States looking in the wrong direction when Russia struck in 2016.

This essay traces the use of the cyber Pearl Harbor analogy and metaphor over the 25-year period preceding the Russian cyber attacks of 2016. Doing so provides a window into the evolving nature of cyber threat perceptions in the United States in the two and a half decades leading up to a series of cyber attacks that have rocked the nation, seemingly confirming the U.S. Department of Defense’s classification of cyberspace as a “domain of battle.” Based on an analysis of the use of the cyber Pearl Harbor metaphor in a wide variety of documents, we argue that cyber Pearl Harbor-like scenarios have been a consistent feature of U.S. cybersecurity discourse for more than 25 years. During that time, cyber Pearl Harbor has had a largely stable meaning that is focused on catastrophic physical impacts from cyber attacks on critical infrastructure. Government officials have been primarily responsible for driving these concerns with news media uncritically transmitting their claims. This is despite the fact that such claims were often ambiguous about just who might carry out such an attack and often lacked supporting evidence.

In the next section, we explain why cyber Pearl Harbor is an appropriate lens for observing U.S. cybersecurity discourse, as well as the debate about cyber Pearl Harbor’s role in understanding the Russian attacks of 2016. Then, we describe the sources and methods for our analysis of cyber Pearl Harbor usage from 1991 to 2016. Next, we report on the findings of that analysis. Finally, we end with a discussion of the importance of metaphor and analogy in shaping thought and action on cybersecurity and offer some alternative analogies that may be more apt for thinking about current cyber threats.

 

++++++++++

Cyber Pearl Harbor in the securitization of cyberspace

Scholars working within the critical constructivist tradition of security studies have explored the role of language and rhetoric in U.S. cyber security discourse (Bendrath, 2003, 2001; Bendrath, et al., 2007; Betz and Stevens, 2013; Dunn Cavelty, 2007; Eriksson, 2002; Hansen and Nissenbaum, 2009; Lawson, 2012). Much of this work has been informed by securitization theory, which posits that it is not predetermined which security threats will make it onto the political agenda. Instead, security threats are “constructed” because the process of identifying, understanding, and responding to them is the result of political discourse. That process involves a “securitizing actor” (usually a political leader or decision-maker) identifying “threat subjects” (the source of the threat), “referent objects” (that which is threatened), and the prospective impacts of a threat (Buzan, et al., 1998; Campbell, 1998). Additionally, Dunn Cavelty (2007) notes that in the case of cyber securitization, specific and sometimes dramatic events or conditions that she calls “focusing events” can serve to focus attention on cyber security threats.

Critical security scholars have been particularly interested in the construction of cyber threats because they exemplify the emergence of various “new threats” in the post-Cold War period [1]. These have included a host of seemingly ambiguous, uncertain, but dangerous threats related to environmental degradation, poverty, health, immigration, and technology [2]. Cyber threat perceptions have mirrored the ambiguity and uncertainty found in perceptions of other “new threats,” making it difficult to identify and then communicate clearly and precisely what it is that is threatened, by whom, and with what potential impacts, in and through cyberspace. As threat perceptions have shifted, so have claims about the primary subjects (e.g., foreign spies, criminals, terrorists, insiders), objects (e.g., corporate data, state secrets, critical infrastructure), and impacts (e.g., monetary loss, diminished economic competitiveness, physical disruption or destruction) of those threats (Dunn Cavelty, 2007; Bendrath, 2003, 2001; Bendrath, et al., 2007).

From the “wild west” to “space,” metaphors and analogies have been central to attempts to understand the Internet and its meaning for society and culture broadly (Johnston, 2009). It is unsurprising, therefore, that the framing of cyber threats in public policy discourse has also relied (perhaps overly so) on metaphors and analogies of various kinds, including to war, military, weapons, and natural disaster (Betz and Stevens, 2013; Lawson, 2012; Lapointe, 2011; Dunn Cavelty, 2013; Libicki, 1997). Among these, “cyber Pearl Harbor” is one of the most prominent in the U.S. cybersecurity debate [3] and is exemplary of the widespread use of “cyber-doom,” “shut down the power grid,” and “worst-case” scenarios by the full range of participants in these debates [4]. In particular, “cyber Pearl Harbor” is exemplary of the tendency to deploy hypothetical scenarios, but also to appropriate the fear and anxiety elicited by non-cyber events such as natural disasters, conventional military attacks, or terrorist attacks to promote fear of cyber-doom [5].

Existing research has yielded a number of important insights and raised a number of serious concerns about the U.S. cybersecurity debate. Scholars have noted that the ambiguity and uncertainty surrounding subjects, objects, means, and impacts of new threats like those to/through cyberspace can lead to a tendency toward possibilistic thinking and a logic of preventative or precautionary action (Füredi, 2009; Sunstein, 2002). In turn, these characteristics have resulted in increased concern over the possibility of “threat inflation,” defined as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify” [6]. With cyber threats having been identified as “emblematic of new threats” [7], a number of scholars, journalists, and even some security professionals have argued that U.S. public policy discourse about cybersecurity has been particularly prone to threat inflation (Brito and Watkins, 2011; Blunden, 2015; Blunden and Cheung, 2014; Dunn Cavelty and Van Der Vlugt, 2015; Gartzke, 2013; Schneier, 2010). Still others have raised the concern that use of cyber-doom rhetoric in this debate may backfire, leading to fear-induced complacency instead of the motivation to act that users of such rhetoric presumably wish to elicit from audiences (Lawson, et al., 2016; Hamre, 2015).

To date, no act of cyber terror, crime, or warfare has approximated the level of destruction contemplated in cyber Pearl Harbor-like scenarios. For well over a decade, scholars have called attention to the inflated fears of cyber terror (Debrix, 2001; Weimann, 2008, 2005; Stohl, 2006; Conway, 2008). Other studies have pointed to inflated estimates of the impacts of cyber crime (Wall, 2008; Jardine, 2017, 2015; Anderson, et al., 2013; Florêncio and Herley, 2013; Romanosky, 2016; Jardine, 2018). Finally, even though the recent conflict in Ukraine has included the first documented cases of cyber attacks used to take down electrical grids, the immediate effects of these attacks were limited, as have been the effects of cyber attacks on the shape of the military conflict more generally (Kostyuk and Zhukov, 2019; Jensen, et al., 2019).

It is perhaps unsurprising, therefore, that such scenarios have come under increased scrutiny over the last several years. Even before the Russian cyber-interference in the 2016 U.S. presidential election, some current and former intelligence officials had rejected the use of “cyber Armageddon” or “cyber Pearl Harbor” descriptions of cyber threats facing the United States (Bussey, 2016; Clapper, 2015; Hamre, 2015). After the election, many more have argued that Russian cyber-interference shows that the U.S. had been looking in the wrong direction with respect to cyber attacks (Lewis, 2018, 2017; Lipton, et al., 2016; Nye, 2016; Orcutt, 2017; Pollard and Devost, 2016; Pollard, et al., 2018; Pollock, 2017; Von Drehle, 2017; Rid and Buchanan, 2018; Weinstein, 2018; Wolff, 2018). Nonetheless, some have called Russian election hacking the fulfillment of decades of cyber Pearl Harbor predictions (Carr, 2017; Chang and Osborne, 2017; Graham, 2017; Spring, 2017). In these cases, emphasis is placed on the element of surprise in the 2016 Russian cyber operations, not on physical destruction, of which there was none.

Existing scholarship, recent developments, and the debate that they have sparked, raise a number of important questions. Where does the “cyber Pearl Harbor” analogy come from? What is such a scenario supposed to entail? How has the analogy evolved during its 25 years of use by cyber security stakeholders? And, is it possible that use of this analogy can have negative implications for cyber security policy-making? In the next section, we detail the sources and methods that we have used to help answer these questions.

 

++++++++++

The discursive construction of cyber Pearl Harbor: Sources and methods

For over 25 years, Americans have been warned that the United States faces an imminent cyber Pearl Harbor, involving cyber attacks against critical infrastructure leading to mass destruction and disruption, followed by social and economic chaos. In a 1999 article, New York Times reporter John Markoff provided perhaps the most concise definition, noting that the “specter of simultaneous computer network attacks against banking, transportation, commerce and utility targets — as well as against the military — conjures up the fear of an electronic Pearl Harbor in which the nation is paralyzed without a single bullet ever being fired” (Markoff, 1999). Markoff’s definition echoed the doomsday scenario envisioned by computer security entrepreneur Winn Schwartau who, in a 1991 op-ed for Computerworld, first used the analogy (which was repeated in his testimony before the U.S. Congress that same year). Schwartau described the impacts of such an event as “truly crippling,” “devastating,” and “inflicting massive damage” on a scale that would undermine “the continuation of well-ordered society ... [to] function as we know it” (Schwartau, 1991).

Twenty years later, the metaphor was still strongly in circulation when former U.S. Defense Secretary Leon Panetta warned Congress, “I’ve often said that there’s a strong likelihood that the next Pearl Harbor that we confront could ... be a cyber attack that cripples our power systems, our grid, our security systems, our financial systems, our governmental systems. This is a real possibility” (U.S. Senate Armed Services Committee, 2011). The following year, Panetta doubled down, declaring that a cyber Pearl Harbor “could be as destructive as the terrorist attack of 9/11,” “cause panic, destruction, and even the loss of life,” “paralyze and shock the nation, and create a profound new sense of vulnerability” (Panetta, 2012).

To unpack both the staying power and implications of the cyber Pearl Harbor analogy over 25 years of cybersecurity discourse, this study combines a rhetorical analysis of key texts that utilize the cyber Pearl Harbor metaphor with a quantitative content analysis of U.S. newspaper articles from 1991 (when the concept first emerged) until 2016 (when Russian election tampering refocused national attention on cybersecurity). Both forms of analysis rely on theoretical insights from the critical constructivist tradition of scholarship in security studies (Peoples and Vaughan-Williams, 2010; see also Bendrath, 2003, 2001; Bendrath, et al., 2007; Dunn Cavelty, 2007; Eriksson, 2002; Hansen and Nissenbaum, 2009; Lawson, 2012; Betz and Stevens, 2013).

First, we identified ten key texts that we examined through close reading and rhetorical analysis (see Appendix C). These texts were chosen because they were exemplary of the use of the cyber Pearl Harbor analogy by critical stakeholders, including industry experts, government officials, large media outlets, and others with the ability to shape and influence U.S. cyber security discourse. The kinds of texts chosen included op-eds, Congressional testimony, speeches, a documentary film, and a declassified government document. Rhetorical analysis provides a tool for explaining “the relationship of persons and ideas within a situation” [8]. To arrive at these explanations, rhetoricians cast their net broadly, asking how “written as well as spoken discourse, nonverbal [e.g., images] as well as verbal symbols, movements as well as individual events, and functions other than those implied by a narrow conception of persuasion,” alongside overtly persuasive texts, provide a lens through which to understand how ideas and their relationship to both their producers and audiences are shaped and defined (Brockriede, 1974). Rhetorical analysis’ focus on textual features (metaphor, analogy, etc.) and argumentative strategies provides a means to deconstruct these texts, identifying how cyber Pearl Harbor comes to be defined and operationalized in political debates over its importance. In this regard, rhetorical analysis provides a technique for identifying an “unfolding sequence of arguments, ideas, and figures which interact through the text and gradually build a structure of meaning” [9]. For example, rhetorical analysis helps identify how the threat subjects, referent objects, and focusing events that constitute cyber Pearl Harbor are discursively constructed through these key texts. However, beyond helping illuminate how these texts function to define cyber Pearl Harbor as a real and viable threat, a rhetorically-driven close reading also helps inform our analysis of 25 years of media discourse about cyber Pearl Harbor because “the close reading of specific texts often provides both data and methods for comprehending larger discursive formations” [10].

Second, for our quantitative content analysis, we collected examples of the cyber Pearl Harbor analogy in U.S. news media by searching the “U.S. Newspapers” category in LexisNexis Academic Universe using the search string “electronic pearl harbor” OR “digital pearl harbor” OR “cyber pearl harbor.” [11] This produced 214 results. After removal of duplicates and irrelevant results (e.g., letters to the editor), the final coded set included 203 articles. Content analysis complements a rhetorical analysis focused on key texts that participated in and constructed the cyber Pearl Harbor analogy in two ways. First, whereas rhetorical analysis focuses on a small number of key texts to identify the specific rhetorical strategies at work, content analysis provides a means to identify and draw inferences about how those texts circulate in media discourse and to broader audiences. Second, content analysis complements the interpretive and qualitative conclusions of rhetorical analysis with systematic and more objective accounts of how the cyber Pearl Harbor analogy has evolved. While content analysis operates as a method across a variety of disciplines and includes many variations, its essential characteristic is that it offers a methodology for “a summarizing, quantitative analysis of messages that relies on the scientific method, including attention to objectivity, a priori design, reliability, validity, generalizability, and replicability” [12].
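To make the collection and cleaning steps just described concrete, the sketch below shows one way a corpus like ours could be filtered and de-duplicated. It is a minimal illustration only: the file name, column layout, and export format are our assumptions for the example, not artifacts of the actual LexisNexis workflow.

    # Minimal sketch of the corpus-building step described above. It assumes
    # the search results were exported to a CSV file ("results.csv") with
    # hypothetical columns: headline, outlet, date, article_type, body.
    import csv
    import re

    SEARCH_TERMS = ("electronic pearl harbor", "digital pearl harbor",
                    "cyber pearl harbor")
    EXCLUDED_TYPES = {"letter to the editor"}  # irrelevant result types

    def normalize(text):
        """Lowercase and collapse whitespace so near-duplicates match."""
        return re.sub(r"\s+", " ", text.strip().lower())

    articles, seen = [], set()
    with open("results.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Keep only articles that use one of the three search phrases.
            if not any(t in normalize(row["body"]) for t in SEARCH_TERMS):
                continue
            # Drop irrelevant result types (e.g., letters to the editor).
            if normalize(row["article_type"]) in EXCLUDED_TYPES:
                continue
            # Count the same headline on the same date only once.
            key = (normalize(row["headline"]), row["date"])
            if key in seen:
                continue
            seen.add(key)
            articles.append(row)

    # The study's final coded set contained 203 articles.
    print(len(articles), "articles in the final coded set")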

When practicing content analysis, the dimensions of texts that are quantified can be identified and defined inductively or deductively [13]. In the case of the former, researchers derive variables to be quantified by developing familiarity with a body of texts to the extent that a set of exhaustive, mutually exclusive categories emerges. Deductive variables, on the other hand, are derived from a theoretical framework that informs the content analysis and are relevant to the texts under examination based on related scholarship [14]. For example, in the present study, coding categories were informed by the main concepts of securitization theory, including securitizing actor and referent object, as well as the sentiment of the article and the focusing events or conditions identified as reason for taking such a threat seriously (see Appendix A). For each category, we collaboratively coded random samples from our overall pool of articles until intercoder reliability was established (see Appendix B). Once we achieved an acceptable level of reliability, we divided and independently coded the remainder of the articles. While this practice of quantifying the content of texts can offer a variety of measurement possibilities, most content analysis, like the present study, relies on frequency data to support its inferences about texts, the circumstances of their emergence, and the contexts and consequences of their circulation [15].
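As an illustration of the reliability check described above, the following sketch computes Cohen’s kappa, a common chance-corrected agreement statistic, for two coders’ labels on a shared sample. The choice of statistic, the example labels, and the notion of an acceptability threshold are assumptions for illustration; the reliability measures actually used in this study are reported in Appendix B.

    # Hedged sketch of an intercoder reliability check: two coders label the
    # same random sample of articles, and agreement is corrected for chance
    # using Cohen's kappa. The labels below are invented for illustration.
    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        """Kappa = (observed agreement - chance agreement) / (1 - chance)."""
        assert len(coder_a) == len(coder_b) and coder_a
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        # Chance agreement: sum over labels of both coders' marginal shares.
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        chance = sum(freq_a[lab] * freq_b[lab] for lab in freq_a) / (n * n)
        return (observed - chance) / (1 - chance)

    # Threat-subject codes assigned by two coders to a ten-article sample.
    a = ["state", "nonstate", "unspecified", "state", "nonstate",
         "unspecified", "state", "state", "nonstate", "unspecified"]
    b = ["state", "nonstate", "unspecified", "state", "state",
         "unspecified", "state", "nonstate", "nonstate", "unspecified"]

    kappa = cohens_kappa(a, b)  # roughly 0.70 for this example
    print("kappa =", round(kappa, 2))
    # Coders divide and code independently only once kappa is acceptable.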

By combining these two methodological approaches, we are able to make complementary claims about the development of the cyber Pearl Harbor analogy within cyber security discourses. On the one hand, rhetorical analysis offers a “close-up” view of specific exemplars, or key texts, crafted by primary stakeholders in the evolution of cyber security policy and political discourse. On the other hand, the “satellite view” offered by content analysis of a broad swath of texts across a period of time enables us to identify variations in the intensity of discussion of cyber security via the cyber Pearl Harbor analogy, as well as who is participating in that discourse, based on what concerns, and with what aims (Hart, 1990; Hoffman and Waisanen, 2015). By working between these two levels of abstraction and critical insight, our approach to the use of the cyber Pearl Harbor analogy allows us to make inferences about how these two sources of discourse on the subject influence one another, how they are impacted by contextual factors, e.g., terror attacks, cyber security breaches, etc., and how certain dimensions of the cyber Pearl Harbor analogy are strategically given greater presence (or marginalized) to shape cybersecurity outcomes.

 

++++++++++

The emergence and evolution of cyber Pearl Harbor

There has been confusion over the origins of the Pearl Harbor analogy in U.S. cyber security discourse. In December 2015, former defense official John Hamre claimed to have been the first person to use the term publicly in his November 1997 testimony before the U.S. Congress. He further claimed that General Tom Marsh had been “the author of that phrase” as a result of having led the President’s Commission on Critical Infrastructure Protection, the final report of which was published in 1997 (Hamre, 2015). Others have claimed that the president of the RSA computer security company, D. James Bidzos, was the first to use the term in 1991. However, as Tim Stevens notes, the weight of the available evidence supports the contention that it was the computer security entrepreneur, and novelist, Winn Schwartau who first used the term “electronic Pearl Harbor” in 1991 in an op-ed, Congressional testimony, and a self-published novel [16].

Regardless of its likely emergence in the early 1990s from computer security industry experts, the Pearl Harbor analogy did not become widespread until the mid to late 1990s when it was mentioned in a string of Congressional testimonies by government officials, most notably Director of the C.I.A. John Deutch in 1996 and Deputy Secretary of Defense John Hamre in 1997 and 1998. A close reading of these early sources of cyber Pearl Harbor from the 1990s demonstrates that the seeds of the currently dominant use of that analogy and related themes emerged very early in the discourse. However, it also reveals the existence of subtle differences in early meanings of cyber Pearl Harbor, which were in turn associated in some cases with differing views of how one should respond to such a threat.

In June 1996, Senator Sam Nunn (D-GA) opened a hearing of the Senate Governmental Affairs Committee on cyber threats by informing listeners and participants that the proceeding would

“focus on the possibility that cyber-attacks on our national infrastructure could be used as a part of a coordinated strategic attack on the United States. How likely is such a scenario? Who has the capacity to launch such an attack? How do we defend against such an attack? Perhaps most important, would we even recognize the fact that such an attack was being carried [out] and be able to determine who was behind the attack in a very timely manner?” (Deutch, 1996)

That is, the hearing would focus on the primary issues of concern for the construction of any threat narrative, including the identification of potential threat subjects and referent objects: Who might attack? What might they target? The answer to the referent object question was already assumed: “national infrastructure.” But, just which infrastructure would be targeted was less certain. What is more, Sen. Nunn raised questions related to evidence and uncertainty, as well as possible responses. How do we know, in the realm of cyber threats, where the real concerns exist and, in the end, what do we do? Beyond the hearing that day, all of the key texts that address the possibility of cyber Pearl Harbor provide answers to most of these questions.

Where threat subjects are concerned, the key texts that we analyzed from the 1990s and early 2000s were focused almost exclusively on non-state actors. Of course, Schwartau’s piece was titled “Fighting terminal terrorism” and contemplated that “a motivated individual or organization” could carry out “an electronic Pearl Harbor” (Schwartau, 1991). In his 1996 testimony, DCI Deutch and his questioner, Sen. Sam Nunn, both worried about the possibility that not just states, but also “sub-national groups” and “terrorist groups” could obtain the capability to carry out cyber attacks against the United States’ vulnerable networks. DCI Deutch also mentioned the potential, but uncertain, threat from “individual criminal elements or individual hacker activities” (Deutch, 1996). Finally, the 2003 PBS Frontline documentary, Cyber War!, which included interviews with prominent cyber security experts, focused entirely on non-state cyber threats, particularly the threat from terrorists (Kirk, 2003).

Though they continue to mention possible non-state attackers, we see an increased concern with state actors over time in the later texts that we analyzed. Gen. Alexander’s 2012 memo about preventing a cyber Pearl Harbor mentions non-state actors, but is primarily focused on “nation states” (Alexander, 2012). Similarly, Secretary Panetta’s 2012 speech noted the threats from cyber criminals, but said “the even greater danger facing us in cyberspace goes beyond crime and harassment. A cyber attack perpetrated by nation states or violent extremist groups could be as destructive as the terrorist attack of 9/11” (Panetta, 2012). Finally, Hamre’s December 2015 op-ed warned, “Hostile intelligence and military establishments are prepared to wage war now, using cyber tools” (Hamre, 2015).

In both cases, however, specific actors are rarely mentioned, with a few exceptions. In the non-state category, al-Qa’ida is sometimes identified as a potential attacker in the wake of the 11 September 2001 terrorist attacks (Kirk, 2003). Later, when officials like Secretary Panetta begin to focus more on the threat from state actors, we see occasional mentions of China, Russia, and Iran as potential perpetrators of a cyber Pearl Harbor attack (Alexander, 2012; Panetta, 2012). However, the texts we analyzed far more often identified generic, unspecified state or non-state actors (or both) as the would-be authors of a cyber Pearl Harbor (Schwartau, 1991; Deutch, 1996; Hamre, 1998, 1997; Panetta, 2011). In short, those officials and experts raising the alarm about the threat of a cyber Pearl Harbor were often unclear about exactly who might carry out such an attack.

Far from being a rhetorical weakness, this inability or unwillingness to identify specific threat subjects may serve to heighten fear of a possible cyber Pearl Harbor. One might expect that inability or unwillingness to specify who is willing and able to carry out such an attack would weaken the case that such a threat is indeed real. Instead, the so-called “attribution problem” (Lindsay, 2015; Rid and Buchanan, 2014; Schulzke, 2018) — the idea that one could be cyber attacked without knowing who had carried out the attack — emerged as a consistent theme in the cyber Pearl Harbor discourse that only seemed to heighten the fear. This possibility was first raised by Schwartau, who cautioned that “such an attack can also be launched ... with little or no ability to identify ... the perpetrators” and that “the source of the attack is completely disguised” (Schwartau, 1991). In 1996, the difficulty in determining the source of cyber attack was a concern raised multiple times in Sen. Nunn’s questioning of DCI Deutch (Deutch, 1996). In his 1997 testimony, John Hamre noted, “Our knowledge of the origin of such attacks, and their sponsorship, is likely to be imprecise” (Hamre, 1997). The 2003 PBS documentary used the examples of the Slammer, Code Red, and Nimda malware attacks as evidence of the danger of the attribution problem (Kirk, 2003). By 2012, however, Secretary Panetta was warning would-be cyber attackers that the U.S. had “made significant investments in forensics to address this problem of attribution, and we are seeing returns on those investments” (Panetta, 2012).

The documents we analyzed were also, in the aggregate, ambiguous about which “national infrastructures” would be the target of such an attack. Civilian critical infrastructures like power, water, communications, and transportation were identified most often as likely targets for a cyber Pearl Harbor attack, followed closely by military command, control, and logistics systems. Some authors also worried that attacks against informational assets such as financial information, intellectual property, or government secrets could result in a cyber Pearl Harbor (Schwartau, 1991; Alexander, 2012). Most common, however, was to merely present a laundry list of potential civilian, military, and informational targets that may, if struck in some combination, result in a catastrophic and crippling cyber Pearl Harbor situation. Little distinction is made in these texts between different civilian infrastructure systems. What’s more, the line between civilian and military systems becomes blurry when some authors note the military’s reliance on civilian infrastructure systems. In the end, the effect is to leave largely unanswered the question of exactly what the target of a cyber Pearl Harbor might be.

Though they are often unclear about who or what is threatening/threatened in a would-be cyber Pearl Harbor, proponents of this threat point to various events and conditions that they believe warrant sensitizing us to, and focusing our attention on, the possibility of a cyber Pearl Harbor. First, and most obvious, is to point to actual cyber incidents as a warning of what might happen if we do not take cyber security more seriously. These include specific hacking incidents against government or private targets, reports of mass numbers of unspecified “intrusions” of networks each day, and reports of system vulnerabilities that create the possibility for cyber attack.

Second, those who warn of a coming cyber Pearl Harbor point to general, structural conditions of society and technology in the Information Age as reason for concern. In 1996, DCI Deutch mentioned twice during his testimony the dangers that come with “growing dependency” on information technology and networks (Deutch, 1996). In 1997 and again in 1998, John Hamre cited “dependence” or “reliance” on networks as a source of vulnerability (Hamre, 1998, 1997). In 2003, Richard Clarke told PBS flatly, “we depend upon the Internet for our national security and our national economy” (Kirk, 2003). In 2011, Senators Lieberman, Collins, and Carper called the Internet “a nearly indispensable tool of modern life” (Lieberman, et al., 2011). Finally, in 2012, Gen. Alexander warned, “The U.S. as a society is extraordinarily vulnerable because we rely on highly interdependent networks” (Alexander, 2012). In 1997, Hamre explained that concern about such vulnerability-inducing dependence on networked systems was due to the complexity of such systems and the possibility “that the fixes put in place to solve familiar problems may not be adequate for the more ‘closely coupled’ world in which we now find ourselves” (Hamre, 1997) [17].

Finally, other, more problematic events have been used to focus attention on the prospect of a cyber Pearl Harbor. These include non-cyber events appropriated in an effort to raise fear of a potential cyber Pearl Harbor: a 1996 power outage (Hamre, 2015, 1997), a 1998 failure of a communications satellite (Hamre, 1998), and the terrorist attacks of 11 September 2001 (Kirk, 2003; Panetta, 2012). They also include fictional scenarios such as simulations and war games. For example, at the same hearing at which DCI Deutch testified, Senator Nunn cited in his opening comments “an actual war game scenario presented by our witnesses from the Rand Corporation” that he believed “will hopefully provide the subcommittee and the public at large with a better appreciation for the difficult issues which must be wrestled with when it comes to information warfare” (Deutch, 1996). Similarly, experts interviewed in PBS’s 2003 documentary pointed to prior war games as evidence of cyber threat. The documentary itself included a segment showing military personnel participating in a cyber war game exercise (Kirk, 2003). Hamre points to the importance of one war game in particular, Eligible Receiver 97, that served to “trigger” the “cyber worries” of many in government in the 1990s (Hamre, 2015). Secretary Panetta’s 2012 speech likewise included a detailed but entirely fictional scenario meant to depict what might be possible in a cyber Pearl Harbor (Panetta, 2012).

Most problematic, however, is the use of the tactic of projection [18]. This rhetorical tactic for focusing attention on cyber threats to the United States involves pointing to actions carried out by the United States or their direct impacts as evidence or reasons why we should be concerned about the potential for a cyber Pearl Harbor attack on the United States by foreign adversaries. There are numerous examples over the years. In 1991, to support his claims of a threat of electronic Pearl Harbor, Schwartau noted, “The U.S. government has issued contracts for studies on methods of infecting enemy military computers with viruses in hopes of shutting down battlefield computing and communications capabilities” (Schwartau, 1991). Interviewees in the PBS documentary noted that the U.S. understood the threats it potentially faced from others because it had already conducted offensive cyber attacks itself (Kirk, 2003). Most egregious, however, have been instances when U.S. officials have pointed to the joint U.S.-Israel Stuxnet attack on Iranian nuclear facilities, and its direct implications, as evidence of a possible cyber Pearl Harbor. We see this tactic used in Senator Lieberman, et al.’s 2011 op-ed where Stuxnet serves as a primary piece of evidence in support of the supposed cyber Pearl Harbor threat (Lieberman, et al., 2011). We see it again in 2012 when Secretary Panetta points to a series of Iranian cyber attacks that we now know were Iran’s retaliation for Stuxnet [19]. Even some Obama administration officials acknowledged (at least internally) the hypocrisy of calling out others while the United States engages in attacks like Stuxnet [20].

Though the cyber Pearl Harbor narrative has been largely consistent over time, there were some key differences in the early period of its emergence. These sometimes subtle differences related to the potential impacts and likelihood of such an attack, as well as to the lessons that the analogy should teach in terms of preparedness and response. We can observe such differences, for example, between Schwartau and Hamre, as well as in an interaction between Sen. Nunn and DCI Deutch.

In the first case, in Schwartau’s account, “electronic Pearl Harbor” serves as a description of a possible attack and its impacts — i.e., sudden and catastrophic. There is no effective response to such an attack after the fact, he said. Therefore, our response to the possibility of such an attack should be to take action now to prevent such an attack from occurring in the first place (Schwartau, 1991). Comparatively, John Hamre’s 1997 and 1998 uses of the Pearl Harbor analogy are subtly but importantly different. In his testimony, Pearl Harbor does not serve as a description of what a future cyber attack might look like. Though Hamre would certainly hope to prevent such an attack, nonetheless, his use of the Pearl Harbor analogy serves instead as a lesson in the importance of preparedness to cope with and respond in the wake of a large attack, just as the United States had been able to do in the wake of the Pearl Harbor attack (Hamre, 1997).

Likewise, on both occasions, 1997 and 1998, Hamre was careful not to portray a catastrophic cyber attack as imminent. “I don’t think such an event is imminent,” he said, adding, “I am not warning that the sky is falling. We have time to prepare” (Hamre, 1998, 1997). Again in 2003, Hamre and other cyber security experts interviewed for PBS Frontline’s documentary, Cyber War!, expressed skepticism about the likelihood of a cyber Pearl Harbor-like attack. Hamre called into question the value of such attacks for terrorist groups that are primarily interested in causing a level of shock and destruction that he believed impossible to achieve at that time with cyber attacks alone. Similarly, James Lewis of the Center for Strategic and International Studies and John Arquilla of the Naval Postgraduate School also discounted the possibility of the kind of catastrophic attack contemplated with the use of the cyber Pearl Harbor analogy (Kirk, 2003).

We can observe similar differences in an interaction between Sen. Nunn and DCI Deutch during the latter’s 1996 Congressional testimony. Though some news media accounts of this interaction reported that DCI Deutch had warned of a possible cyber Pearl Harbor (“Cyber terrorists threaten US, CIA boss warns,” [Hobart (Tasmania, Australia) Mercury, 1996]; “CIA warns of cyber-terror attacks,” [Australian, 1996]), this is not correct. It was Sen. Nunn who raised the possibility of such an attack when he asked,

“There are some who believe we are going to have to have an electronic Pearl Harbor, so to speak, before we really make this the kind of priority that many of us believe it deserves to be made. Do you think we’re going to need that kind of real awakening, or are we fully alerted to this danger now, and are we allocating sufficient resources?”

In response, though he said that the United States and its allies faced serious cyber threats, Deutch downplayed the possibility of a cyber Pearl Harbor. He said,

“I think that we are fully alerted to it now. I don’t know whether we will face an electronic Pearl Harbor, but we will have, I’m sure, some very unpleasant circumstances in this area or our allies will have unpleasant circumstances in this area. So I think while we are fully alerted to it, it’s not as if we’re asleep on the subject, but I’m certainly prepared to predict some very, very large and uncomfortable incidents in the area. What about resources? I think resources are being allocated to this problem in its many different dimensions ... So the answer to your question is I think the resource stream is moving in that regard. The priority has been given, and it’s moving along, sir.” (Deutch, 1996)

Sen. Nunn, seemingly more concerned with the possibility of a cyber Pearl Harbor, pushed DCI Deutch on whether the United States must spend more on intelligence and adopt a strategy of cyber deterrence modeled on Cold War nuclear deterrence. DCI Deutch, less concerned with cyber Pearl Harbor, expressed his assessment that current efforts were sufficient and that deterrence was not the most appropriate model for constructing cyber strategy because it implied threats of military force in response to the vast majority of cyber threats that were really a “peacetime” problem, not a military concern (Deutch, 1996).

By 2011, however, Schwartau and Nunn’s uses and lessons of cyber Pearl Harbor seem to have won out at the highest levels of U.S. cyber security discourse. This is borne out in public statements by Secretary Panetta, as well as internal U.S. Cyber Command documents issued by Gen. Keith Alexander. Both adopt the cyber Pearl Harbor analogy, with Panetta describing such an attack in catastrophic terms (Panetta, 2012). In terms of response, both assert the need to develop a deterrent capability, in part through the development and use of “active defense” (what some would call “offensive”) capabilities [21]. Gen. Alexander recommends in 2012 that the “DOD must be capable of stopping attacks while in progress — or before,” to use NSA spying capabilities to “identify threats (exploits and attacks) before they are launched against us and enable USCYBERCOM to deploy defenses in advance of their use,” to “neutralize adversary capabilities affecting DoD systems at the point of origin,” and ultimately to “deter attacks in the long term” (Alexander, 2012). These goals were reflected in Secretary Panetta’s public statements in 2012. He reiterated, “In addition to defending the Department’s networks, we also help deter attacks” and that “we won’t succeed in preventing a cyber attack through improved defenses alone. If we detect an imminent threat of attack ... we need to have the option to take action to defend the nation” (Panetta, 2012).

 

++++++++++

Cyber Pearl Harbor in the news

The analysis above provides an account of the emergence and evolution of cyber Pearl Harbor among elite actors like government officials and industry experts. But, as James Wirtz notes, the notion of a cyber Pearl Harbor “is reinforced by recurring media reports” [22]. To map how those recurring reports foster the public imagination about what cyber Pearl Harbor entails and the seriousness with which it is invoked, we performed a content analysis of a sample of 203 articles from U.S. newspapers spanning the period from 1991 to early 2016.
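Before turning to the findings, a brief sketch of the kind of frequency tabulation that underlies Figure 1 may be useful. The records and field names below are hypothetical stand-ins, not the study’s actual data.

    # Illustrative tabulation of yearly mention counts, the kind of frequency
    # data plotted in Figure 1. The records below are invented placeholders;
    # the real data set consists of the 203 coded articles.
    from collections import Counter

    coded_articles = [
        {"date": "1996-06-25", "sentiment": "positive"},
        {"date": "2012-10-11", "sentiment": "positive"},
        {"date": "2012-10-14", "sentiment": "negative"},
        # ... one record per coded article
    ]

    # Publication year is the leading "YYYY" of each ISO-formatted date.
    mentions_per_year = Counter(art["date"][:4] for art in coded_articles)

    for year in sorted(mentions_per_year):
        print(year, "#" * mentions_per_year[year])  # crude text histogram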

First, cyber Pearl Harbor’s 25-year presence in news media reporting has been marked by certain moments of intense interest that correspond with the emergence of other prominent national security concerns (see Figure 1). Concern with cyber Pearl Harbor spiked after 1995, declined around 2003, and spiked again in 2011 and 2012. The first uptick in interest corresponds with U.S. officials’ growing concern with the possibility of mass-casualty, new technology-enabled “new terrorism.” Reasons for such concerns could be found in the 1993 World Trade Center bombing, the 1995 Aum Shinrikyo nerve gas attack on the Tokyo subway system, and the 1995 Oklahoma City bombing. The possibility of cyber terrorism piggybacked on concerns about other, similar kinds of terrorism, e.g., bioterrorism, agricultural terrorism, etc. (Carter, et al., 1998; Hoffman, 1998; Laqueur, 1999; Lifton, 1999). Similarly, Secretary Panetta’s public statements drove much of the coverage of cyber Pearl Harbor in 2011 and 2012. Those statements were provoked by a series of retaliatory cyber attacks on the United States and its Persian Gulf allies by Iran in response to the U.S.-Israeli Stuxnet attack on Iranian nuclear facilities. In both cases, increased concern with cyber Pearl Harbor in the news corresponds with other pressing (but not necessarily cyber-related) national security concerns of the day.

 

Figure 1: Mentions of cyber Pearl Harbor in major U.S. newspapers.

 

Content analysis also reinforces the importance of our close analysis of cyber Pearl Harbor’s origins because government officials remain primarily responsible for spreading concern about the possibility of a cyber Pearl Harbor. In an overwhelming 60 percent of cases where cyber Pearl Harbor received news attention as a realistic threat, government officials were promoting the idea. Interestingly, in 65 percent of cases where cyber Pearl Harbor was portrayed as unlikely or unrealistic, news reports cited private actors such as industry experts, academics, or journalists (see Figures 2 and 3).

 

Figures 2 and 3: Positive and negative Pearl Harbor sentiments by actor.

 

Officials promoting a cyber Pearl Harbor doomsday find willing allies in the news media. The vast majority of news stories (77 percent) presented only the perspective of those promoting the idea of a possible cyber Pearl Harbor. Only a few articles provided a balanced view that included perspectives skeptical of cyber Pearl Harbor alongside those promoting the idea (three percent) or offered a purely negative assessment of cyber Pearl Harbor (18 percent; see Figure 4). Interestingly, there has been less skepticism and more positive reporting about the potential for cyber Pearl Harbor over time (see Figure 5).

 

Figure 4: Cyber Pearl Harbor sentiment in major U.S. newspapers.

 

 

Figure 5: Cyber Pearl Harbor sentiment over time.

 

As we might expect based on the analysis of key texts above, news coverage of cyber Pearl Harbor is largely unclear about just who might carry out such an attack (see Figure 6). In a 38 percent plurality of articles, the potential perpetrator (threat subject) is elusive and unspecified. Next is a laundry-list combination of perpetrators at 27 percent, meaning that in over half of the mentions of a cyber Pearl Harbor no one is quite sure who is actually planning to implement such a tactic. Only at the most abstract level do patterns suggest that an evolution in thinking about cyber Pearl Harbor has occurred. Generally, more concern is expressed over non-state actors; however, over time news articles demonstrate an increasing concern with states (see Figure 7).

 

Figure 6: Cyber Pearl Harbor threat subjects.

 

 

Figure 7: Cyber Pearl Harbor threat subjects (State vs. non-state), 1991–2016.

 

In media reports that sustain the concern over cyber Pearl Harbor, as in the key texts that promoted it, civilian infrastructure is most often identified as the likely target of such an attack (55 percent of mentions). However, a significant number of articles do not specify a target for such an attack or merely provide a laundry list of potential targets (35 percent), which has the effect of leaving the question of the referent object, or who and what is at risk, largely unanswered (see Figure 8).

 

Figure 8: Cyber Pearl Harbor: Referent objects.

 

The last notable similarity between key texts and news articles is the promotion of the seriousness of the cyber Pearl Harbor threat by pointing to prior cyber incidents or vulnerabilities (41 percent). Fictional scenarios (15 percent) and references to non-cyber events (nine percent) are also deployed as reasons for concern about cyber Pearl Harbor. Most problematic, however, is that many news articles (24 percent) provide no clear reason or evidence at all in support of the idea that cyber Pearl Harbor is realistic or likely (see Figure 9).

Figure 9: Cyber Pearl Harbor: Focusing event/sensitizing condition.

 

 

++++++++++

Analogy, framing, and fear

Mapping the use of cyber Pearl Harbor in both elite stakeholder discourse and broader media coverage highlights the ubiquity of its use and the preferred meanings that have attached to the analogy over time. The enduring nature of these meanings and uses has consequences for the cyber security terrain that they shape. Various observers, including promoters of the idea of a cyber Pearl Harbor like John Hamre, have worried, rightly in our view, that the metaphor’s use might have negative implications for our ability to understand and respond appropriately to contemporary cyber threats. This risk has been made more pressing by Russia’s 2016 interference in U.S. election systems and its campaign of information warfare carried out through social media manipulation and other cyber tactics.

Decades of research in a number of disciplines, from cognitive science to communication studies and more, argue that language and rhetoric have an important structuring effect on how we see and respond to the world around us; that “language, perception, and knowledge are inextricably intertwined” [23]. Far from being “a matter of ... mere words,” leading researchers Lakoff and Johnson argue, “human thought processes are largely metaphorical ... the human conceptual system is metaphorically structured and defined” [24]. But this is not just an individual affair; our use of language helps “to structure collective, human knowledge” and to “bridge the gap between individual human cognition and collective understanding and action” (Lawson, 2012). What’s more, like potato chips, with metaphors it is hard to have just one. This is because individual metaphors often come with “entailments” that necessarily implicate other, related metaphors [25]. Even more important, because metaphors can also operate as normative “structuring devices” [26], they can entail, “often covertly and insidiously, natural ‘solutions’” [27]. The language we use, our metaphors and analogies, can enable or constrain our perceptions and understanding of the world around us, as well as our avenues of possible action in response [28].

Military and national security professionals have increasingly recognized the importance of language, including metaphor and analogy, for correctly framing and responding to national security threats. Top leaders in the military services have promoted this idea and, in the case of the Army, have incorporated it into official doctrine and officer education (Mattis, 2008; U.S. Department of the Army, 2015; School of Advanced Military Studies, n.d.). As a consequence, the cyber security debate has not been immune to this rhetorical turn in national security discourse. As early as 1997, Martin Libicki worried about the use of facile metaphors in the information warfare debates of the day. He warned, “To use metaphor in place of analysis verges on intellectual abuse. It invites the unquestioning extension of a logic that works across the looking glass but lacks explanatory power in the real world. Those who forget this are apt to try to make their metaphors do their thinking for them” [29].

More recently, U.S. Cyber Command (USCYBERCOM) and its former parent organization, U.S. Strategic Command (USSTRATCOM), officially recognized the critical importance of language and analogy for understanding cyber threats and then developing and carrying out a cyber strategy. In 2009, USSTRATCOM released The Cyber Warfare Lexicon. The document began with a series of epigraphs. The first, from Dee Hock, read, “Language is only secondarily the means by which we communicate, it is primarily the means by which we think.” The second informed readers, “You can’t talk about a subject if you don’t have the words. And, some psychologists would argue, you can’t even think about it. At least not very productively.” And finally, the document quoted Lt. Gen. Paul Van Riper (USMC, Ret.) as saying, “The seeming inability to express ideas clearly, loose use of words, and ill-considered invention of other terms have damaged the military lexicon to the point that it interferes with effective professional military discourse.” [30]

In the world of cyber warfare, appropriate conceptual tools and metaphors for navigating emergent threats come with the highest stakes:

“Without a shared understanding of the accurate meanings of a significant number of frequently used terms, it will be difficult to make progress on the more complex and unresolved technical and operational issues for non-traditional weapons: actionable requirements, technical and operational assurance, effective mission planning techniques, and meaningful measures of effectiveness.” [31]

Recognizing the gravity of this concern, USCYBERCOM launched the Cyber Analogies Project at the Naval Postgraduate School in 2012 with the mission “to assist U.S. Cyber Command in identifying and developing relevant historical, economic, and other useful metaphors that could be used to enrich the discourse about cyber strategy, doctrine, and policy” [32]. Ultimately, the report argues that appropriate “analogies, metaphors, and parables” are necessary to facilitate learning, communicating, and “winning H.G. Wells’ ‘race between education and catastrophe’” [33].

Second, we also know that news media play an important role not only in promoting policy-makers’ preferred agendas and problem frames for a wider public, but also in shaping policy-makers’ understanding of the world. In this literature, “frames” are “schemata of interpretation” that allow individuals and groups to “locate, perceive, identify, and label” occurrences and events [34]. These “interpretive frameworks embedded in media messages” aid individuals and groups in “forming political attitudes and value judgments” by “evok[ing] as well as constrain[ing] the interpretative activities of audiences” [35]. Policy-makers are not immune from such effects: “In some such cases the media can participate in a positive feedback loop, which drives upward policymaking attention and outcomes very rapidly” (Wolfe, et al., 2013). This feedback loop is at work in the case of cyber Pearl Harbor too. Peter Singer of Brookings has been extremely critical of this situation, saying,

“[T]here is the histrionic — the “get scared” — category [of cyber security writing], then repeated back in the wider media, such as through the half-million references to “cyber 9/11.” Journalists need to be more discerning consumers when they hear that kind of thing. There is a joke in our field that there should be a drinking game based on any time someone references a “cyber Pearl Harbor.” More seriously, when someone says that, journalists should be prepared to follow up. Those phrases are the bumper stickers, not the end of the statement or argument. Yet they are used in business pitches, governmental speeches and Congressional hearings in that way. We’ve been caught between this state of ignorance and this fear factor. That’s not a good place for anybody, either in the public space or on the journalistic side.” (Singer and Wihbey, 2014)

There is reason to believe that reliance on cyber Pearl Harbor to frame our thinking and responses has had real, negative impacts. General Alexander’s internal 2012 memo about preventing a cyber Pearl Harbor indicates that the analogy is not merely used by officials in public speeches, then picked up and repeated uncritically in news media, but that it also feeds back into the system of internal cyber security discourse and strategizing as a device that structures official thinking and planning behind the scenes (Alexander, 2012). We got a preview of these effects in 1996 in Sen. Nunn’s questioning of DCI Deutch: Nunn’s understanding of cyber threats had been shaped by the vision of cyber Pearl Harbor, and the “natural” solutions it entailed included militarization, deterrence, and even offense.

But there is reason to be concerned that these public appeals to fear of a catastrophic cyber Pearl Harbor may also have negative impacts on discourse and decision-making more broadly. The continuing use of cyber Pearl Harbor, despite recognition of its failures, is exemplary of the fact that policy-makers and news media often rely on appeals to fear in their efforts to promote a policy agenda and frame issues for the public and themselves (Altheide, 2006, 2002; Glassner, 1999). However, a growing body of research in communication studies (Peters, et al., 2013; Pfau, 2007; Walton, 2000; Witte, 1996; Witte and Allen, 2000), psychology [36], and even information security (Lee, et al., 2006; Herath and Rao, 2009; Pfleeger and Caputo, 2012; Siponen, et al., 2014; Boss, et al., 2015) demonstrates that such rhetoric can have a negative impact. One recent study suggests that depictions of cyber doom scenarios in popular media “can lead to a sense of fatalism and demotivation to act” and “could impair efforts to motivate appropriate policy responses to genuine cyber security threats” [37].

Similarly, there is evidence to support Jason Healey’s concern that constant worry about cyber Pearl Harbor has distracted us from the real threats we face, both cyber and non-cyber. In the cyber realm, Peter Singer writes, “Indeed, while the focus of US debate is more frequently on fears of a so-called ‘digital Pearl Harbor’, the more serious problem may actually be a long-term economic ‘death by a thousand cuts’.” [38]. Likewise, we can see repeated examples over the years of officials and experts warning of imminent cyber doom, or even ranking cyber threats higher than threats of terrorism or weapons of mass destruction, all while the real world continues to defy their predictions (Lawson, 2016). While officials and experts worried about cyber terror attacks against the World Trade Center, al-Qa’ida plotted to bring down the Twin Towers with box cutters and airplanes. Nonetheless, we worried that “the next attack” would certainly be a cyber attack. But still terrorists attacked using cars, bombs, guns, and their very bodies. In more recent times, we have been warned that state adversaries would soon carry out a cyber Pearl Harbor, or perhaps that they already have, as when the Director of the NSA called the North Korean hack of Sony a cyber Pearl Harbor. Meanwhile, in reality, the North Koreans have stunned the world with their brazen tests of missiles and nuclear weapons. For their part, in cyberspace, the Russians have not carried out catastrophic cyber attacks on the U.S. power grid, but have instead attempted to manipulate the U.S. presidential election in a manner to which Washington seems unprepared to respond.

 

++++++++++

Conclusion

For 25 years, cyber Pearl Harbor has remained a go-to analogy and metaphor for officials and others looking to raise awareness of, and motivate a response to, perceived cyber threats. Given this long and effective service life, cyber Pearl Harbor is not going away any time soon, no matter how inaccurate it is as a descriptive device. Its rhetorical force and appeal are not a product of its ability to describe accurately, but of its users’ belief, rightly or wrongly, that the fear it evokes in listeners can call attention to and motivate a response to cyber threats, whatever their true nature may be. At the same time, our ongoing, incorrect use of this analogy may have set in motion a self-fulfilling prophecy of militarization, including the development and use of offensive cyber capabilities, which may have actually made the kinds of scenarios envisioned in cyber Pearl Harbor rhetoric more, rather than less, likely. More research is needed, however, that traces the ongoing use of the cyber Pearl Harbor analogy since 2016, as well as its resonance with public audiences.

In the face of increasing criticism combined with an ongoing penchant for appealing to fear, one option for the future of cyber Pearl Harbor is rehabilitation through redefinition. This involves redefining cyber Pearl Harbor as many small attacks, or even large attacks, that occur but somehow go unnoticed. In short, it involves exaggerating the impacts of actual events, as when the NSA Director called the hack of Sony a cyber Pearl Harbor in February 2015 (Lyngaas, 2015). Examples of this rhetorical move have been ubiquitous over the years (Deutch, 1996; Blitstein, 2007; Singel, 2009; Censer, 2012; Whitlock, 2014; Geraghty, 2015; Weisman, 2015; Robb, 2015; Carman, 2015). But, as Tim Stevens notes, these kinds of redefinitions of cyber Pearl Harbor serve to “pluralize[] the previously singular ‘event’ and locat[e] them across multiple sectors ... The integrity of the historical event is subverted still further by reference to the possibility of multiple ‘small-scale’ digital Pearl Harbors.” In particular, he calls Clarke’s assertion of daily digital Pearl Harbors “an idiosyncratic leap of logic too far, which all but destroyed any sensible use of the metaphor” [39]. It is hard to see, therefore, how redefining and “pluralizing” cyber Pearl Harbor will clarify our understanding of the real threats we face.

In the end, we would prefer to see the analogy abandoned altogether. There is no shortage of other, potentially more appropriate and useful analogies and metaphors that may be enlisted to help us think about cyber security challenges. These include other forms of conflict and weapons, such as piracy, counterinsurgency, political warfare, and biological weapons, as well as non-war analogies and metaphors such as the immune system, public health, ecosystems, and complex adaptive systems. A number of scholars, technologists, and policy-makers have begun to think along these lines, offering many possibilities for moving away from the stale, inaccurate, and potentially detrimental threat of a cyber Pearl Harbor (Charney, 2011; McMorrow, 2010; Liles, 2010; U.S. Department of Homeland Security, 2011; Lapointe, 2011; Lawson, 2012; Axelrod, 2014; Jensen, et al., 2019).

The importance of language and rhetoric for appropriately framing and responding to problems is abundantly clear. Increasingly, military and national security professionals are coming to this understanding as well. This includes those who are wrestling with complex problems like threats to cyber security, where there is increasing recognition that hyperbolic analogies and metaphors like cyber Pearl Harbor are not only inadequate, but may actually be harmful to our ability to understand and respond to the cyber threats we face today. It is our hope that the research presented here can aid scholars and policy-makers in understanding the emergence, evolution, and persistence of the cyber Pearl Harbor analogy as a first step towards developing more appropriate and productive frames for navigating cyber security threats.

 

About the authors

Sean Lawson is Associate Professor of Communication at the University of Utah and Contributor at Forbes. His research and teaching focus on the intersections of science, information and communication technology (ICT), and national security, with a particular emphasis on cybersecurity.
E-mail: sean [dot] lawson [at] utah [dot] edu

Michael K. Middleton is Assistant Professor of Communication and Director of Forensics for the John R. Park Debate Society at the University of Utah. His research focuses on argumentation, public discourse and rhetoric, with a particular emphasis on social movements.
E-mail: m [dot] middleton [at] utah [dot] edu

 

Acknowledgements

We would like to thank both Dr. Sara Yeo and Dr. Brandon Valeriano for their helpful comments and advice during the research and writing of this essay.

 

Notes

1. Wall, 2008, p. 866; Dunn Cavelty, 2007, p. 5; Füredi, 2009, pp. 25–26.

2. Füredi, 2009, pp. 25–26; Bigo, 2006, 2000; Buzan, et al., 1998; Hardt and Negri, 2004.

3. Stevens, 2016, p. 131; Wirtz, 2014, p. 7.

4. Conway, 2008; Debrix, 2001; Dunn Cavelty, 2007, p. 2; Lawson, 2013; Stohl, 2006; Valeriano and Maness, 2015; Weimann, 2008, 2005.

5. Hasian, et al., 2015, pp. 136–146.

6. Kramer and Thrall, 2009, p. 1.

7. Dunn Cavelty, 2007, p. 5.

8. Brockriede, 1974, p. 166.

9. Leff and Sachs, 1990, p. 256.

10. Leff and Sachs, 1990, p. 257.

11. The search was conducted and articles collected on 15 February 2016.

12. Neuendorf, 2011, p. 277; Krippendorff, 2004.

13. Benoit, 2011, p. 271.

14. Neuendorf, 2011, p. 277.

15. Benoit, 2011, p. 271.

16. Stevens, 2016, p. 131.

17. “Closely coupled” is a term often used in complexity science to describe a characteristic of complex systems, which is that their many linkages can make them particularly vulnerable to “cascading failures.”

18. Hasian, et al., 2015, pp. 143–144.

19. Panetta, 2012; Kaplan, 2016, p. 213.

20. Kaplan, 2016, p. 228.

21. Kaplan, 2016, p. 212.

22. Wirtz, 2014, p. 7.

23. Ortony, 1993, p. 2.

24. Lakoff and Johnson, 1980, p. 6.

25. Lakoff and Johnson, 1980, p. 9.

26. Wyatt, 2004, p. 245.

27. Ortony, 1993, pp. 5–6.

28. Lakoff and Johnson, 1980, p. 10.

29. Libicki, 1997, p. 6.

30. U.S. Strategic Command (USSTRATCOM), 2009, p. 4.

31. Ibid.

32. Goldman and Arquilla, 2014, p. 1.

33. Goldman and Arquilla, 2014, pp. 5–6.

34. Goffman, 1974, p. 21.

35. DeLuca, et al., 2012, p. 490.

36. For an overview of findings on fear appeals in psychology, see the Web site, “Fear appeals: Research into the effectiveness of threatening communication,” http://fearappeals.com/, accessed 15 September 2016.

37. Lawson, et al., 2016, p. 65.

38. Singer and Friedman, 2014, p. 70.

39. Stevens, 2016, pp. 131–132.

 

References

D.L. Altheide, 2006. Terrorism and the politics of fear. Lanham, Md.: AltaMira Press.

D.L. Altheide, 2002. Creating fear: News and the construction of crisis. New York: Aldine de Gruyter.

R. Anderson, C. Barton, R. Böhme, R. Clayton, M. van Eeten, M. Levi, T. Moore, and S. Savage, 2013. “Measuring the cost of cybercrime,” In: R. Böhme (editor). The economics of information security and privacy. New York: Springer, pp. 265–300.
doi: http://dx.doi.org/10.1007/978-3-642-39498-0_12, accessed 17 February 2019.

Australian, 1996. “CIA warns of cyber-terror attacks,” The Australian (27 June), p. 17.

R. Axelrod, 2014. “A repertory of cyber analogies,” In: E. Goldman and J. Arquilla (editors). Cyber analogies. Monterey, Calif.: Naval Postgraduate School, pp. 108–116.

R. Bendrath, 2003. “The American cyber-angst and the real world — any link?,” In: R. Latham (editor). Bombs and bandwidth: The emerging relationship between information technology and security. New York: Free Press, pp. 49–73.

R. Bendrath, 2001. “The cyberwar debate: Perception and politics in US critical infrastructure protection,” Information & Security, volume 7, pp. 80–103.
doi: http://dx.doi.org/10.11610/isij.0705, accessed 17 February 2019.

R. Bendrath, J. Eriksson, and G. Giacomello, 2007. “From ‘cyberterrorism’ to ‘cyberwar’, back and forth: How the United States securitized cyberspace,” In: J. Eriksson and G. Giacomello (editors). International relations and security in the digital age. London: Routledge, pp. 57–82.

W.L. Benoit, 2011. “Content analysis in political communication,” In: E.P. Bucy and R.L. Holbert (editors). Sourcebook for political communication research: Methods, measures, and analytical techniques. London: Routledge, pp. 268–279.

D.J. Betz and T. Stevens, 2013. “Analogical reasoning and cyber security,” Security Dialogue, volume 44, number 2, pp. 147–164.
doi: https://doi.org/10.1177/0967010613478323, accessed 17 February 2019.

D. Bigo, 2006. “Security, exception, ban and surveillance,” In: D. Lyon (editor). Theorizing surveillance: The panopticon and beyond. Cullompton, Devon: Willan Publishing, pp. 46–58.

D. Bigo, 2000. “When two become one: Internal and external securitisations in Europe,” In: M. Kelstrup and M. Williams (editors). International relations theory and the politics of European integration: Power, security, and community. London: Routledge, pp. 171–204.

R. Blitstein, 2007. “Part I: How online crooks put us all at risk,” San Jose Mercury News (8 November).

S.R. Boss, D.F. Galletta, P.B. Lowry, G.D. Moody, and P. Polak, 2015. “What do systems users have to fear? Using fear appeals to engender threats and fear that motivate protective security behaviors,” MIS Quarterly, volume 39, number 4, pp. 837–864.
doi: https://doi.org/10.25300/MISQ/2015/39.4.5, accessed 17 February 2019.

W. Brockriede, 1974. “Rhetorical criticism as argument,” Quarterly Journal of Speech, volume 60, number 2, pp. 165–174.
doi: https://doi.org/10.1080/00335637409383222, accessed 17 February 2019.

J. Bussey, 2016. “Gen. Michael Hayden gives an update on the cyberwar,” Wall Street Journal (9 February), at https://www.wsj.com/articles/gen-michael-hayden-gives-an-update-on-the-cyberwar-1455076153, accessed 9 February 2016.

B. Buzan, O. Wæver, and J. de Wilde, 1998. Security: A new framework for analysis. Boulder, Colo.: Lynne Rienner.

D. Campbell, 1998. Writing security: United States foreign policy and the politics of identity. Revised edition. Minneapolis: University of Minnesota Press.

A. Carman, 2015. “OPM breaches more serious to national security than 9/11, congresswoman argues during hearing,” SC Magazine (16 June), at http://www.scmagazine.com/house-committee-on-oversight-and-government-reform-hosts-hearing-on-data-breaches/article/421052/, accessed 16 June 2015.

A.B. Carter, J. Deutch, and P. Zelikow, 1998. “Catastrophic terrorism: Tackling the new danger,” Foreign Affairs (November/December), at https://www.foreignaffairs.com/articles/united-states/1998-11-01/catastrophic-terrorism-tackling-new-danger, accessed 17 February 2019.

M. Censer, 2012. “Little cyber attacks have big effects,” Washington Post (7 May), p. A10.

S. Charney, 2011. “Collective defense: Applying public health models to the Internet,” IEEE Security & Privacy, volume 10, number 2, pp. 54–59.
doi: https://doi.org/10.1109/MSP.2011.152, accessed 17 February 2019.

J.R. Clapper, 2015. “Statement for the record: Worldwide cyber threats,” U.S. House of Representatives, House Permanent Select Committee on Intelligence (10 September), at https://www.dni.gov/index.php/newsroom/congressional-testimonies/congressional-testimonies-2015/item/1251-dni-clapper-statement-for-the-record-worldwide-cyber-threats-before-the-house-permanent-select-committee-on-intelligence, accessed 17 February 2019.

M. Conway, 2008. “Media, fear and the hyperreal: The construction of cyberterrorism as the ultimate threat to critical infrastructures,” In: M. Dunn Cavelty and K.S. Kristensen (editors). Securing ‘the homeland’: Critical infrastructure, risk and (in)security. London: Routledge, pp. 109–129.

F. Debrix, 2001. “Cyberterror and media-induced fears: The production of emergency culture,” Strategies, volume 14, number 1, pp. 149–168.
doi: https://doi.org/10.1080/10402130120042415, accessed 17 February 2019.

K. DeLuca, S. Lawson, and Y. Sun, 2012. “Occupy Wall Street on the public screens of social media: The many framings of the birth of a protest movement,” Communication, Culture & Critique, volume 5, number 4, pp. 483–509.
doi: https://doi.org/10.1111/j.1753-9137.2012.01141.x, accessed 17 February 2019.

M. Dunn Cavelty, 2013. “From cyber bombs to political fallout: Threat representations with an impact in the cyber security discourse,” International Studies Review, volume 15, number 1, pp. 105–122.
doi: https://doi.org/10.1111/misr.12023, accessed 17 February 2019.

M. Dunn Cavelty, 2007. Cyber security and threat politics: U.S. efforts to secure the information age. New York: Routledge.

M. Dunn Cavelty and R.A. Van Der Vlugt, 2015. “A tale of two cities: Or how wrong metaphors lead to less security,” Georgetown Journal of International Affairs, volume 16, pp. 21–29.

J. Eriksson, 2002. “Cyberplagues, IT, and security: Threat politics in the information age,” Journal of Contingencies and Crisis Management, volume 9, number 4, pp. 200–210.
doi: https://doi.org/10.1111/1468-5973.00171, accessed 17 February 2019.

D. Florêncio and C. Herley, 2013. “Sex, lies and cyber-crime surveys,” In: B. Schneier (editor). Economics of information security and privacy III. New York: Springer, pp. 35–53.
doi: https://doi.org/10.1007/978-1-4614-1981-5_3, accessed 17 February 2019.

F. Füredi, 2009. Invitation to terror: The expanding empire of the unknown. London: Continuum.

J. Geraghty, 2015. “‘The OPM hack was just the start and it won’t be the last’,” National Review (12 June), at https://www.nationalreview.com/corner/opm-hack-was-just-start-and-it-wont-be-last-jim-geraghty/, accessed 12 June 2015.

B. Glassner, 1999. The culture of fear: Why Americans are afraid of the wrong things. New York: Basic Books.

E. Goffman, 1974. Frame analysis: An essay on the organization of experience. New York: Harper & Row.

E. Goldman and J. Arquilla (editors), 2014. Cyber analogies. Monterey, Calif.: Naval Postgraduate School.

L. Hansen and H. Nissenbaum, 2009. “Digital disaster, cyber security, and the Copenhagen school,” International Studies Quarterly, volume 53, number 4, pp. 1,155–1,175.
doi: https://doi.org/10.1111/j.1468-2478.2009.00572.x, accessed 17 February 2019.

M. Hardt and A. Negri, 2004. Multitude: War and democracy in the age of Empire. New York: Penguin Press.

R.P. Hart, 1990. Modern rhetorical criticism. Glenview, Ill.: Scott, Foresman.

M. Hasian, S. Lawson, and M. McFarlane, 2015. The rhetorical invention of America’s national security state. Lanham, Md.: Lexington Books.

T. Herath and H.R. Rao, 2009. “Protection motivation and deterrence: A framework for security policy compliance in organisations,” European Journal of Information Systems, volume 18, number 2, pp. 106–125.
doi: https://doi.org/10.1057/ejis.2009.6, accessed 17 February 2019.

Hobart (Tasmania, Australia) Mercury, 1996. “Cyber terrorists threaten US, CIA boss warns,” Hobart Mercury (27 June).

B. Hoffman, 1998. Inside terrorism. New York: Columbia University Press.

D. Hoffman and D. Waisanen, 2015. “At the digital frontier of rhetoric studies: An overview of tools and methods for computer-aided textual analysis,” In: J. Ridolfo and W. Hart-Davidson (editors). Rhetoric and the digital humanities. Chicago, University of Chicago Press, pp. 169–183.

E. Jardine, 2018. “Mind the denominator: Towards a more effective measurement system for cybersecurity,” Journal of Cyber Policy, volume 3, number 1, pp. 116–139.
doi: https://doi.org/10.1080/23738871.2018.1472288, accessed 17 February 2019.

E. Jardine, 2017. “Sometimes three rights really do make a wrong: Measuring cybersecurity and Simpson’s paradox,” paper presented at the 16th Annual Workshop on the Economics of Information Security (La Jolla, Calif.), at https://weis2017.econinfosec.org/wp-content/uploads/sites/3/2017/07/WEIS_2017_paper_18.pdf, accessed 17 February 2019.

E. Jardine, 2015. “Global cyberspace is safer than you think: Real trends in cybercrime,” Global Commission on Internet Governance (GCIG) Paper, number 16, at https://www.cigionline.org/publications/global-cyberspace-safer-you-think-real-trends-cybercrime, accessed 17 February 2019.

B. Jensen, B. Valeriano, and R. Maness, 2019. “Fancy bears and digital trolls: Cyber strategy with a Russian twist,” Journal of Strategic Studies, volume 42, number 2, pp. 212–234.
doi: https://doi.org/10.1080/01402390.2018.1559152, accessed 17 February 2019.

R. Johnston, 2009. “Salvation or destruction: Metaphors of the Internet,” First Monday, volume 14, number 4, at https://www.firstmonday.org/article/view/2370/2158, accessed 17 February 2019.
doi: https://doi.org/10.5210/fm.v14i4.2370, accessed 17 February 2019.

F. Kaplan, 2016. Dark territory: The secret history of cyber war. New York: Simon & Schuster.

N. Kostyuk and Y.M. Zhukov, 2019. “Invisible digital front: Can cyber attacks shape battlefield events?” Journal of Conflict Resolution, volume 63, number 2, pp. 317–347.
doi: https://doi.org/10.1177/0022002717737138, accessed 17 February 2019.

K. Krippendorff, 2004. Content analysis: An introduction to its methodology. Thousand Oaks, Calif.: Sage.

G. Lakoff and M. Johnson, 1980. Metaphors we live by. Chicago: University of Chicago Press.

A. Lapointe, 2011. “When good metaphors go bad: The metaphoric ‘branding’ of cyberspace,” Center for Strategic & International Studies, at http://csis.org/files/publication/110923_Cyber_Metaphor.pdf, accessed 17 February 2019.

W. Laqueur, 1999. The new terrorism: Fanaticism and the arms of mass destruction. New York: Oxford University Press.

S. Lawson, 2016. “On this date in cyber doom history: An example of getting it so wrong for so long,” Forbes (25 June), at http://www.forbes.com/sites/seanlawson/2016/06/25/on-this-date-in-cyber-doom-history-an-example-of-getting-it-so-wrong-for-so-long/#37c553327f41, accessed 25 June 2016.

S. Lawson, 2013. “Beyond cyber-doom: Assessing the limits of hypothetical scenarios in the framing of cyber-threats,” Journal of Information Technology & Politics, volume 10, number 1, pp. 86–103.
doi: https://doi.org/10.1080/19331681.2012.759059, accessed 17 February 2019.

S. Lawson, 2012. “Putting the ‘war’ in cyberwar: Metaphor, analogy, and cybersecurity discourse in the United States,” First Monday, volume 17, number 7, at https://firstmonday.org/article/view/3848/3270, accessed 17 February 2019.
doi: https://doi.org/10.5210/fm.v17i7.3848, accessed 17 February 2019.

S. Lawson, S.K. Yeo, H. Yu, and E. Greene, 2016. “The cyber-doom effect: The impact of fear appeals in the US cyber security debate,” Proceedings of the 2016 8th International Conference on Cyber Conflict (CyCon), pp. 65–80.
doi: https://doi.org/10.1109/CYCON.2016.7529427, accessed 17 February 2019.

D. Lee, R. Larose, and N. Rifon, 2008. “Keeping our network safe: A model of online protection behaviour,” Behaviour & Information Technology, volume 27, number 5, pp. 445–454.
doi: https://doi.org/10.1080/01449290600879344, accessed 17 February 2019.

M. Leff and A. Sachs, 1990. “Words the most like things: Iconicity and the rhetorical text,” Western Journal of Communication, volume 54, number 3, pp. 252–273.
doi: https://doi.org/10.1080/10570319009374342, accessed 17 February 2019.

J.A. Lewis, 2018. “Cognitive effect and state conflict in cyberspace,” Center for Strategic & International Studies (26 September), at https://www.csis.org/analysis/cognitive-effect-and-state-conflict-cyberspace, accessed 17 February 2019.

J.A. Lewis, 2017. “The truth about a cyber Pearl Harbor,” CNN (29 August), at https://www.cnn.com/2017/08/29/opinions/truth-about-cyber-pearl-harbor-lewis/index.html, accessed 17 February 2019.

M.C. Libicki, 1997. Defending cyberspace, and other metaphors. Washington, D.C.: National Defense University, at https://apps.dtic.mil/dtic/tr/fulltext/u2/a368431.pdf, accessed 17 February 2019.

R.J. Lifton, 1999. Destroying the world to save it: Aum Shinrikyo, apocalyptic violence, and the new global terrorism. New York: Henry Holt.

S. Liles, 2010. “Cyber warfare: As a form of low-intensity conflict and insurgency,” In: C. Czosseck and K. Podins (editors). Conference on cyber conflict: Proceedings 2010. Tallinn, Estonia: CCD COE Publications, pp. 47–58, at http://ccdcoe.eu/uploads/2018/10/Liles-Cyber-warfare-As-a-form-of-low-intensity-conflict-and-insurgency.pdf, accessed 17 February 2019.

J.R. Lindsay, 2015. “Tipping the scales: The attribution problem and the feasibility of deterrence against cyberattack,” Journal of Cybersecurity, volume 1, number 1, pp. 53–67.
doi: https://doi.org/10.1093/cybsec/tyv003, accessed 17 February 2019.

S. Lyngaas, 2015. “NSA’s Rogers makes the case for cyber norms,” FCW (23 February), at https://fcw.com/articles/2015/02/23/nsa-rogers-cyber-norms.aspx, accessed 23 February 2015.

J. Markoff, 1999. “Blown to bits; Cyberwarfare breaks the rules of military engagement,” New York Times (17 October), at https://www.nytimes.com/1999/10/17/weekinreview/ideas-trends-blown-to-bits-cyberwarfare-breaks-the-rules-of-military-engagement.html, accessed 17 February 2019.

J.N. Mattis, 2008. “USJFCOM commander’s guidance for effects-based operations,” Parameters, at https://apps.dtic.mil/dtic/tr/fulltext/u2/a490619.pdf, accessed 17 February 2019.

D. McMorrow, 2010. Science of cyber-security. McLean, Va.: MITRE Corporation, JASON Program Office, at https://fas.org/irp/agency/dod/jason/cyber.pdf, accessed 17 February 2019.

K.A. Neuendorf, 2011. “Content analysis — A methodological primer for gender research,” Sex Roles, volume 64, numbers 3–4, pp. 276–289.
doi: https://doi.org/10.1007/s11199-010-9893-0, accessed 17 February 2019.

J.S. Nye, 2016. “The Kremlin and the US election,” Project Syndicate (5 December), at https://www.project-syndicate.org/commentary/kremlin-cyber-attacks-american-election-by-joseph-s--nye-2016-12, accessed 17 February 2019.

A. Ortony (editor), 1993. Metaphor and thought. Second edition. New York: Cambridge University Press.

C. Peoples and N. Vaughan-Williams, 2010. Critical security studies: An introduction. New York: Routledge.

G.-J.Y. Peters, R.A.C. Ruiter, and G. Kok, 2013. “Threatening communication: A critical re-analysis and a revised meta-analytic test of fear appeal theory,” Health Psychology Review, volume 7, supplement 1, pp. S8–S31.
doi: https://doi.org/10.1080/17437199.2012.703527, accessed 17 February 2019.

M. Pfau, 2007. “Who’s afraid of fear appeals? Contingency, courage, and deliberation in rhetorical theory and practice,” Philosophy & Rhetoric, volume 40, number 2, pp. 216–237.
doi: https://doi.org/10.1353/par.2007.0024, accessed 17 February 2019.

S.L. Pfleeger and D.D. Caputo, 2012. “Leveraging behavioral science to mitigate cyber security risk,” Computers & Security, volume 31, number 4, pp. 597–611.
doi: https://doi.org/10.1016/j.cose.2011.12.010, accessed 17 February 2019.

T. Rid and B. Buchanan, 2018. “Hacking democracy,” SAIS Review of International Affairs, volume 38, number 1, pp. 3–16.
doi: https://doi.org/10.1353/sais.2018.0001, accessed 17 February 2019.

T. Rid and B. Buchanan, 2014. “Attributing cyber attacks,” Journal of Strategic Studies, volume 38, numbers 1–2, pp. 4–37.
doi: https://doi.org/10.1080/01402390.2014.977382, accessed 17 February 2019.

J. Robb, 2015. “The OPM infobomb explodes,” Global Guerrillas (24 June), at http://globalguerrillas.typepad.com/globalguerrillas/2015/06/the-opm-infobomb-explodes.html, accessed 24 June 2015.

S. Romanosky, 2016. “Examining the costs and causes of cyber incidents,” Journal of Cybersecurity, volume 2, number 2, pp. 121–135.
doi: https://doi.org/10.1093/cybsec/tyw001, accessed 17 February 2019.

School of Advanced Military Studies, n.d. “Art of design: Student text, Version 2.0,” at http://www.au.af.mil/au/awc/awcgate/sam/art_of_design_v2.pdf, accessed 17 February 2019.

M. Schulzke, 2018. “The politics of attributing blame for cyberattacks and the costs of uncertainty,” Perspectives on Politics, volume 16, number 4, pp. 954–968.
doi: https://doi.org/10.1017/S153759271800110X, accessed 17 February 2019.

R. Singel, 2009. “Is the hacking threat to national security overblown?” Wired (3 June), at http://www.wired.com/threatlevel/2009/06/cyberthreat, accessed 3 June 2009.

P.W. Singer and A. Friedman, 2014. Cybersecurity and cyberwar: What everyone needs to know. Oxford: Oxford University Press.

P.W. Singer and J. Wihbey, 2014. “Research chat: Peter Singer on cybersecurity and what the media needs to know” (14 April), at https://www.brookings.edu/on-the-record/research-chat-peter-singer-on-cybersecurity-and-what-the-media-needs-to-know/, accessed 14 April 2014.

M. Siponen, M.A. Mahmood, and S. Pahnila, 2014. “Employees’ adherence to information security policies: An exploratory field study,” Information & Management, volume 51, number 2, pp. 217–224.
doi: https://doi.org/10.1016/j.im.2013.08.006, accessed 17 February 2019.

T. Stevens, 2016. Cyber security and the politics of time. Cambridge: Cambridge University Press.

M. Stohl, 2006. “Cyber terrorism: A clear and present danger, the sum of all fears, breaking point or patriot games?” Crime, Law and Social Change, volume 46, numbers 4–5, pp. 223–238.
doi: https://doi.org/10.1007/s10611-007-9061-9, accessed 17 February 2019.

U.S. Department of Homeland Security, 2011. “Enabling distributed security in cyberspace: Building a healthy and resilient cyber ecosystem with automated collective action” (23 March), at https://www.dhs.gov/xlibrary/assets/nppd-cyber-ecosystem-white-paper-03-23-2011.pdf, accessed 17 February 2019.

U.S. Department of the Army, 2015. “Army design methodology,” Army Techniques Publication, number 5-0.1 (1 July), at http://data.cape.army.mil/web/character-development-project/repository/atp5-0x1-2015.pdf, accessed 17 February 2019.

U.S. Strategic Command (USSTRATCOM), 2009. “The cyber warfare lexicon: A language to support the development, testing, planning, and employment of cyber weapons and other modern warfare capabilities,” version 1.7.6 (5 January), at https://info.publicintelligence.net/USSTRATCOM-CyberWarfareLexicon.pdf, accessed 17 February 2019.

B. Valeriano and R.C. Maness, 2015. Cyber war versus cyber realities: Cyber conflict in the international system. Oxford: Oxford University Press.

D.S. Wall, 2008. “Cybercrime and the culture of fear: Social science fiction(s) and the production of knowledge about cybercrime,” Information, Communication & Society, volume 11, number 6, pp. 861–884.
doi: https://doi.org/10.1080/13691180802007788, accessed 17 February 2019.

D.N. Walton, 2000. Scare tactics: Arguments that appeal to fear and threats. Boston: Kluwer Academic.

G. Weimann, 2008. “Cyber-terrorism: Are we barking at the wrong tree?” Harvard Asia Pacific Review, volume 9, number 2, pp. 41–46.

G. Weimann, 2005. “Cyberterrorism: The sum of all fears?” Studies in Conflict & Terrorism, volume 28, number 2, pp. 129–149.
doi: https://doi.org/10.1080/10576100590905110, accessed 17 February 2019.

D. Weinstein, 2018. “Stop saying ‘digital Pearl Harbor’,” Dark Reading (2 October), at https://www.darkreading.com/threat-intelligence/stop-saying-digital-pearl-harbor/a/d-id/1332932, accessed 17 February 2019.

S. Weisman, 2015. “The hacking of OPM: Is it our cyber 9/11?” USA Today (13 June), at http://www.usatoday.com/story/money/columnist/2015/06/13/hacking-opm-weisman/28697915, accessed 13 June 2015.

C. Whitlock, 2014. “Ashton Carter, passed over before, gets picked by Obama to be defense secretary,” Washington Post (5 December), at https://www.washingtonpost.com/world/national-security/ash-carter-passed-over-before-gets-picked-by-obama-to-lead-pentagon/2014/12/05/33a2429a-7c95-11e4-9a27-6fdbc612bff8_story.html?utm_term=.369f547d76d5, accessed 5 December 2014.

J.J. Wirtz, 2014. “The cyber Pearl Harbor,” In: E. Goldman and J. Arquilla (editors). Cyber analogies. Monterey, Calif.: Naval Postgraduate School, pp. 7–14.

K. Witte, 1996. “Fear as motivator, fear as inhibitor: Using the extended parallel process model to explain fear appeal success and failure,” In: P.A. Anderson and L.K. Guerrero (editors). Handbook of communication and emotion: Theory, applications, and contexts. San Diego, Calif.: Academic Press, pp. 423–450.

K. Witte and M. Allen, 2000. “A meta-analysis of fear appeals: Implications for effective public health campaigns,” Health Education & Behavior, volume 27, number 5, pp. 591–615.
doi: https://doi.org/10.1177/109019810002700506, accessed 17 February 2019.

M. Wolfe, B.D. Jones, and F.R. Baumgartner, 2013. “A failure to communicate: Agenda setting in media and policy studies,” Political Communication, volume 30, number 2, pp. 175–192.
doi: https://doi.org/10.1080/10584609.2012.737419, accessed 17 February 2019.

J. Wolff, 2018. “The national intelligence director issued a warning about a cyber 9/11-like cyberattack,” Slate (19 July), at https://slate.com/technology/2018/07/u-s-intel-chief-warns-of-a-crippling-cyberattack-against-our-critical-infrastructure-what-does-he-mean.html, accessed 17 February 2019.

S. Wyatt, 2004. “Danger! Metaphors at work in economics, geophysiology, and the Internet,” Science, Technology & Human Values, volume 29, number 2, pp. 242–261.
doi: https://doi.org/10.1177/0162243903261947, accessed 17 February 2019.

 

Appendix A: Content analysis codebook

 

Table 1: Codebook for U.S. newspapers.

Sentiment
  1. Neutral: Cyber Pearl Harbor is mentioned but is neither endorsed nor rejected.
  2. Positive: Endorsing, promoting, affirming, etc. the idea that a cyber Pearl Harbor is a real threat.
  3. Negative: Ambivalence, skepticism, rejection, etc. towards cyber Pearl Harbor as a threat.
  4. Sentiment Combination: The article provides both positive and negative assessments of cyber Pearl Harbor.
  NA. Sentiment Note: If there is a combination of sentiments, use the notes field to provide a brief explanation.

(De)securitizing actors
  1. Official\Civilian bureaucrat: A nonmilitary, unelected individual such as a cabinet secretary, presidential advisor, etc.
  2. Official\Elected official: An individual such as the President, a member of Congress, a governor, etc.
  3. Official\Military officer: A uniformed military member like the Chairman of the Joint Chiefs, a service chief, etc.
  4. Official\Law enforcement: An individual such as the FBI director or an agent, other law enforcement, a Department of Justice official, a US Attorney, etc.
  5. Official\Intelligence analyst or official: An intelligence official like the director or deputy director of NSA or CIA, or generic intelligence officials or intelligence analysts.
  6. Official\Anonymous or unnamed: Citation of anonymous officials who remain unnamed.
  7. Official\Other: An official not covered in other definitions.
  8. Private\Computer security industry: A cited expert who is a researcher, analyst, etc. from a private computer security firm such as Symantec, Kaspersky, etc.
  9. Private\Defense contractor: A cited expert who is a researcher, analyst, etc. from a defense contractor like Lockheed, Booz Allen, etc.
  10. Private\Think tank: A cited expert who is a researcher, analyst, etc. from a think tank like CSIS, Brookings, etc.
  11. Private\University researcher/academic: A cited expert who is a researcher, analyst, etc. from a university.
  12. Private\Other: Another kind of private source not defined above.
  13. Private\Journalist: Citation of another journalist, or use of the metaphor by the journalist him/herself without citing someone else.
  14. Actor Combination: More than one securitizing actor is cited.
  NA. Actor Note: If there is a combination of actors cited, use the notes field to provide a brief explanation.

Threat subjects
  1. State\China: China identified as a possible source of a cyber Pearl Harbor attack.
  2. State\Russia: Russia identified as a possible source of a cyber Pearl Harbor attack.
  3. State\Iran: Iran identified as a possible source of a cyber Pearl Harbor attack.
  4. State\North Korea: North Korea identified as a possible source of a cyber Pearl Harbor attack.
  5. State\Iraq: Iraq identified as a possible source of a cyber Pearl Harbor attack.
  6. State\Generic, unspecified state(s): Generic hostile, foreign, etc. but unnamed states identified as a possible source of a cyber Pearl Harbor attack.
  7. State\Other, specified: Other named states not listed above identified as a possible source of a cyber Pearl Harbor attack.
  8. Non-state\Terrorists: Terrorist group(s), such as al-Qa’ida, ISIS, etc., identified as a possible source of a cyber Pearl Harbor attack.
  9. Non-state\Criminals: Criminals identified as a possible source of a cyber Pearl Harbor attack.
  10. Non-state\Hacktivists: Political hacker-activists (“hacktivists”), generic or specified (e.g., Anonymous), identified as a possible source of a cyber Pearl Harbor attack.
  11. Non-state\Generic, unspecified hackers: Generic “hackers,” not identified as political, terrorist, criminal, or otherwise, identified as a possible source of a cyber Pearl Harbor attack.
  12. Non-state\Other: Some other non-state actor not listed above identified as a possible source of a cyber Pearl Harbor attack.
  13. Hybrid: A combination of state and non-state threat actors, such as state-sponsored, supported, or inspired hackers, criminals, or terrorists, identified as a possible source of a cyber Pearl Harbor attack.
  14. Other: Some other source of threat not listed above (e.g., system failure, accident, disgruntled insider, etc.) identified as a possible source of a cyber Pearl Harbor attack.
  15. Unspecified: No threat actor is identified as a possible source of a cyber Pearl Harbor attack.
  16. Subject Combination: Some combination of state and non-state actors identified as the possible source of a cyber Pearl Harbor attack.
  NA. Subject Combination Note: If a combination of state or non-state actors is identified, use the notes field to provide an explanation.
  99. NA: Article rejects the notion of cyber Pearl Harbor, so the concept does not apply.

Referent objects
  1. Civilian critical infrastructure: Infrastructure such as electrical power, water, transportation, the financial system, or similar systems identified as an object of attack resulting in a cyber Pearl Harbor.
  2. Military infrastructure/systems: Military command, control, and communications systems, computers, logistics systems, etc. identified as an object of attack resulting in a cyber Pearl Harbor.
  3. Informational assets: Information such as government secrets or private intellectual property identified as an object of attack resulting in a cyber Pearl Harbor.
  4. Other: Some other asset or system not listed above identified as an object of attack resulting in a cyber Pearl Harbor.
  5. Unspecified: No asset or system identified as an object of attack resulting in a cyber Pearl Harbor.
  6. Object combination: Multiple objects are identified as possible targets of a cyber Pearl Harbor attack.
  NA. Object combination note: If multiple objects are identified as possible targets, use the notes field to provide an explanation.
  99. NA: Article rejects the notion of cyber Pearl Harbor, so the concept does not apply.

Focusing events/sensitizing conditions
  1. Cyber imagined: An imagined cyber event in the form of a fictional scenario, wargame, work of popular fiction, or similar used as evidence or reason why we should be concerned about the possibility of a cyber Pearl Harbor.
  2. Non-cyber imagined: An imagined non-cyber event, such as a hypothetical terrorist attack or disaster scenario, used as evidence or reason why we should be concerned about the possibility of a cyber Pearl Harbor.
  3. Cyber actual: A cyber event such as a DDoS attack, defacement, data breach, system vulnerabilities, other cyber attack, or a (non)government report/study provided as evidence or reason why we should take the threat of a cyber Pearl Harbor seriously.
  4. Non-cyber actual: A non-cyber event such as a traditional terrorist attack or natural disaster (e.g., 9/11, OKC, Sandy, etc.) used as evidence or reason why we should be concerned about the possibility of a cyber Pearl Harbor.
  5. Unspecified: No event or condition provided as evidence or reason why we should take the threat of a cyber Pearl Harbor seriously.
  6. Event/condition combination: Multiple events or conditions are provided as evidence or reason why we should take the threat of a cyber Pearl Harbor seriously.
  NA. Event/condition combination note: If multiple events or conditions are provided, use the notes field to provide an explanation.
  99. NA: Article rejects the notion of cyber Pearl Harbor, so the concept does not apply.
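Readers who wish to reuse or extend this codebook in computer-aided coding (cf. Hoffman and Waisanen, 2015) can express it directly as a data structure. The following is a minimal sketch, assuming Python; the variable names and the sample data are purely illustrative and are not part of the original study.

from collections import Counter

# Sentiment concept from Table 1 as a lookup table (code -> variable name).
SENTIMENT_CODES = {
    1: "Neutral",                # mentioned, neither endorsed nor rejected
    2: "Positive",               # endorses cyber Pearl Harbor as a real threat
    3: "Negative",               # ambivalence, skepticism, or rejection
    4: "Sentiment Combination",  # both positive and negative assessments
}

# Hypothetical coded data: one agreed-upon sentiment code per article.
coded_articles = [2, 2, 1, 3, 2, 4, 1, 2]

# Tally how often each sentiment appears across the sample.
tally = Counter(coded_articles)
for code in sorted(tally):
    print(f"{SENTIMENT_CODES[code]}: {tally[code]}")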

 

 

Appendix B: Intercoder reliability table

 

Table 2: Intercoder reliability values (Krippendorff’s alpha).
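Table 2 reports Krippendorff’s alpha for each coded concept. For readers unfamiliar with the statistic, the following is a minimal sketch of how alpha is computed for nominal data, following the coincidence-matrix formulation in Krippendorff (2004). The sketch assumes Python, and the function name and sample data are illustrative only; the original analysis does not specify any particular software.

from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    # units: one list per article, containing the codes assigned by each
    # coder who coded that article (nominal data, e.g., sentiment codes).
    o = Counter()  # coincidence matrix over ordered pairs of code values
    for codes in units:
        m = len(codes)
        if m < 2:
            continue  # articles coded by only one coder are not pairable
        for c, k in permutations(codes, 2):
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()  # marginal totals for each code value
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    d_o = sum(w for (c, k), w in o.items() if c != k)  # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e

# Example: two coders, five articles, sentiment codes from Table 1.
print(krippendorff_alpha_nominal([[1, 1], [2, 2], [2, 3], [4, 4], [1, 1]]))
# prints roughly 0.74; disagreement on one article lowers alpha below 1.0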
 

 

 

Appendix C: Cyber Pearl Harbor: Key texts, 1991–2016

 

Table 3: Key texts, 1991–2016.
 

 

 


Editorial history

Received 17 January 2019; revised 15 February 2019; accepted 15 February 2019.


“Cyber Pearl Harbor: Analogy, fear, and the framing of cyber security threats in the United States, 1991–2016” by Sean Lawson and Michael K. Middleton is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Cyber Pearl Harbor: Analogy, fear, and the framing of cyber security threats in the United States, 1991–2016
by Sean Lawson and Michael K. Middleton.
First Monday, Volume 24, Number 3 - 4 March 2019
https://firstmonday.org/ojs/index.php/fm/article/view/9623/7736
doi: http://dx.doi.org/10.5210/fm.v24i3.9623




