Can terrorist attacks be predicted and prevented using classification algorithms? Can predictive analytics see the hidden patterns and data tracks in the planning of terrorist acts? According to a number of IT firms that now offer programs to predict terrorism using predictive analytics, the answer is yes. According to the scientific and application-oriented literature, however, these programs raise a number of practical, statistical and recursive problems. In a literature review and discussion, this paper examines specific problems involved in predicting terrorism. The problems include the opportunity cost of false positives and false negatives, the statistical quality of the predictions and the self-reinforcing, corrupting, recursive effects of predictive analytics, since the method lacks an inner meta-model for its own learning- and pattern-dependent adaptation. The conclusion is that algorithms do not work for detecting terrorism: the method is ineffective, risky and inappropriate, with potentially 100,000 false positives for every real terrorist that an algorithm finds.

Contents
1. Introduction
2. What is the method behind the use of predictive analytics to predict terrorism?
3. What is the question?
4. What is the inspiration?
5. The predictive analytics literature
6. The use of predictive analytics in practice
7. Predictive analytics: The statistical premises and systemic problems
8. Predictive analytics is implicitly based on interpretation, but this goes unrecognized
9. Predictive analytics, dynamics and history
10. Predictive analytics and the self-fulfilling prophecy
11. Predictive analytics and the gamification of categorization
12. Conclusion
1. Introduction
After several violent terrorist attacks in Europe, the political will to prevent such attacks has become a common political platform across EU countries. Now, a new security union will be formed (European Commission, 2016). The premise is that the most recent technical breakthroughs in machine learning and data mining — involving so-called predictive analytics (Mayer-Schönberger and Cukier, 2013; Siegel, 2013) combined with a broader sharing of intelligence information and data from social media — can prevent terrorism in the western world (McLaughlin, 2016). The aim is to make society safer and reassure an insecure population with new and promising IT solutions to an urgent social and political problem (Schneier, 2015; Travis, 2015).
2. What is the method behind the use of predictive analytics to predict terrorism?
The planning of terrorism is very difficult to observe in terms of behaviour. Therefore, a number of approximate signals are often used as a construct for the underlying variable (the planning of terrorism), such as possession of weapons, communication containing certain targeted keywords, watching certain YouTube videos, use of social networks or contact with specific suspected phone numbers (Bouchard, 2015; Cohen, et al., 2014; Hayden, 2016; Jensen, 2002; Tucker, 2016a). New predictive analytics programs from IBM, Palantir, Predata and Recorded Future have therefore been designed to predict terrorism by finding these signals (IBM, 2015; Tucker, 2016a, 2016b). As described in the presentation of one of the programs, IBM believes the tool could help governments separate real refugees from imposters, untangle terrorist cells, and even predict bomb attacks (IBM, 2015; Tucker, 2016b). The method used in these programs is classification algorithms. These classification algorithms monitor and profile the signals that have historically been connected to terrorism, each signal being treated as an indexable sign of terrorism (Cohen, et al., 2014). The aim is to predict future acts of terrorism based on these signals (Bouchard, 2015; Harcourt, 2006; Kaufmann, 2010; McLaughlin, 2016; Sandomir, 2009; Sperry, 2005).
There are two practical analytics approaches to seeking to prevent terrorist attacks based on data. One is an inductive search for patterns in data, and the other is a deductive data search based on a model of criminal relations between suspicious persons, actions or things (DeRosa, 2004; Hayden, 2016; Jensen, 2002; Jonas and Harper, 2006). Methodologically, this distinction is called pattern-based data mining versus subject-based data mining [1]. Only pattern-based data mining constitutes proper predictive analytics, in which outlier segments are inductively identified in a multi-dimensional universe (Aggarwal and Yu, 2001; Everitt, et al., 2001) — segments identified by the algorithm as deviant in a number of central variables, with the deviation then connected to terrorism. By contrast, subject-based data mining is merely traditional police investigation in a digitized form. No future predictions are involved, only the identification of a network of social relations. Searches in ever larger volumes of digitized data are performed more quickly, based on the premise that crime is a social relation between people that leaves digital traces (Bouchard, 2015).
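To make the distinction concrete, the following sketch contrasts the two approaches in a few lines of Python. It is a minimal illustration only: the feature matrix, the contamination rate and the relation table are invented stand-ins, not a description of any actual counter-terrorism system.

```python
# Minimal, hypothetical contrast between pattern-based and subject-based data mining.
# All data, feature names and thresholds are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pattern-based (predictive analytics proper): inductively flag outlier segments
# in a multi-dimensional feature space, with no prior suspect.
signals = rng.normal(size=(10_000, 5))        # 10,000 people, 5 proxy signals each
model = IsolationForest(contamination=0.01, random_state=0).fit(signals)
flagged = np.where(model.predict(signals) == -1)[0]   # indices labelled as deviant

# Subject-based (digitized traditional investigation): start from a known suspect
# and expand along recorded relations such as calls, payments or co-travel.
relations = {"suspect_A": ["person_B", "person_C"], "person_B": ["person_D"]}

def neighbours(seed, hops=2):
    frontier, seen = {seed}, {seed}
    for _ in range(hops):
        frontier = {n for p in frontier for n in relations.get(p, [])} - seen
        seen |= frontier
    return seen - {seed}

print(len(flagged), "statistical outliers;", sorted(neighbours("suspect_A")), "relational leads")
```

The first half has no notion of guilt or innocence, only statistical deviance; the second half never predicts anything, it merely walks known relations more quickly than a manual investigation could.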
3. What is the question?
The question asked in this paper is: What are the problems of predicting terrorism using predictive analytics? Through a critical, realistic epistemology, the literature on these problems is presented, analyzed and discussed. The problems relate to the opportunity cost of false negatives and false positives, the statistical quality of predictions of terror, and the self-reinforcing, corrupting and recursive effects of predictive analytics, since the method lacks an inner automated meta-model for its own learning- and pattern-dependent adaptation. These analyzed problems are not merely a versioning and application of the classical epistemological contradiction between the natural sciences and the humanities known from the existing literature on predictive analytics in discussions of big data (boyd and Crawford, 2012; Crawford, 2013; Gitelman, 2013; Kitchin, 2014a), but arise out of the experience gained from using predictive analytics to prevent terrorism and reading current terrorism research based on a critical, realistic epistemology.
4. What is the inspiration?
Ideologically, the interest in predictive analytics for counter-terrorism purposes can be traced back to the reactions to the 9/11 terrorist attacks in 2001 (Harcourt, 2006; Hayden, 2016; Schauer, 2003; Schneier, 2015, 2006). The event itself was concrete proof that improbable events with a great impact on society could happen and that social systems were unpredictable and complex. After the attacks, intelligence services and authorities were criticized for not having predicted and prevented them. Academically and politically, the focus was therefore on predicting and preventing terrorism using large amounts of data and monitoring, often in the form of scientific reports on and analyses of why the 9/11 attacks were not prevented, and the possibilities of preventing similar attacks in the future (McMorrow, 2009). Furthermore, a number of monitoring and data-sharing projects were carried out as a consequence of the attacks (Hayden, 2016; Kaufmann, 2010; Schneier, 2015). These projects provided an insight into the technical possibilities of predicting terrorist attacks.
This interest and experience once again became relevant in 2015 for two reasons. First, the big data trend and extensive digitization of public and private data provided new opportunities and awareness in respect of data and monitoring. Positive experience gained from predicting buying habits in e-commerce (Siegel, 2013) indicated that the state could be a predictive state that would be able to predict fire, tax fraud, crime and possibly terrorism (Mayer-Schönberger and Cukier, 2013). Second, a wave of Islamic terrorism in Europe brought security issues and the need to prevent possible future attacks to the forefront of the public consciousness. The combination of these factors meant that it was politically important and potentially technically possible to prevent terrorism, which created the market for predictive analytics for counter-terrorism purposes.
In practice, the use of predictive analytics to predict terrorism (Harcourt, 2006) was inspired by the prevention of credit card fraud and predictive policing experiments in the U.S., theoretically based on the behaviourist situational crime prevention paradigm (Cornish and Clarke, 2003; Ferguson, 2015, 2012; Greengard, 2012; Haberman and Ratcliffe, 2012; Harcourt, 2007; McCue, 2015; Newman, et al., 1997; Norrie, 2002; Rosenthal, 2011; Zarsky, 2014, 2013). In both cases, stable patterns and outlier segments have been identified based on large amounts of data and subsequently used to predict the wheres and whens of fraud, burglary and violence, and in the end prevent them (Ferguson, 2015, 2012; Haberman and Ratcliffe, 2012). Supporters of this method see an opportunity to transfer experience from these domains to the far more complex, unpredictable and rare phenomenon of terrorism (Hayden, 2016; Siegel, 2013; Tucker, 2016a, 2016b, 2014). However, a number of critics are more sceptical about the possibility of predicting and preventing terrorism, as it is categorically less predictable than credit card fraud (Bouchard, 2015; Chivers, 2003; DeRosa, 2004; Horgan, 2008; Jonas and Harper, 2006; McMorrow, 2009; Schneier, 2015, 2006) and less characterized by routines than simple crimes, which have been shown to follow simple, statistically stable patterns with a high repeat rate and simple functionalism (Haberman and Ratcliffe, 2012). Hence, these examples have statistical qualities that are not characteristic of terrorism (Kaufmann, 2010; McMorrow, 2009; Schneier, 2015). It seems that the choice to use these programs is based less on a scientifically informed, rational decision than on their tremendous value as a signal to a frightened population in an unsafe European present. The purchase and application of these programs is based on a political acceptance of the ideology that algorithms can solve social problems by preventing a possible future (Krasmann, 2007; Lash, 2007a, 2007b; Mager, 2012; Morozov, 2013).
Generally speaking, however, there is no scientific evidence that these counter-terrorism measures can actually predict or prevent terrorism (Lum, et al., 2008) and, specifically, there is disagreement as to the value of predictive analytics for crime-prevention and counter-terrorism purposes. A number of authors are positive and see potential in predictive analytics for crime-prevention and counter-terrorism purposes (Ferguson, 2015, 2012; Haberman and Ratcliffe, 2012; Mayer-Schönberger and Cukier, 2013; Norrie, 2002; Rosenthal, 2011; Siegel, 2013), while others are critical and unsure about the statistical validity and sociological effect of these analyses (Bouchard, 2015; Chivers, 2003; DeRosa, 2004; Harcourt, 2006; Horgan, 2008; Jonas and Harper, 2006; Kaufmann, 2010; McMorrow, 2009; Morozov, 2013; Schneier, 2015, 2006; Weber and Bowling, 2013).
5. The predictive analytics literature
The literature in this area consists of application-oriented literature, literature on practitioner tools, theoretical and sociological literature on the importance of big data, as well as popular scientific presentations and promotion of big-data technologies and approaches to social problems. The literature on predictive analytics used to prevent terrorism is divided between operational, application-oriented and practical literature on the one hand and theoretical literature on the ideological and social consequences and possibilities of predictive analytics used to prevent terrorism on the other. The technical, application-oriented literature typically consists of reports aimed at intelligence service practice. This literature for practitioners is critical to understanding the technical possibilities offered by pattern-based data mining, but is indifferent to their political legitimacy. The theoretical literature also has great confidence in the technical possibilities, but is divided in its assessment of the political consequences. Most practitioners find that it is almost impossible to predict terrorism based solely on outlier algorithms, as the frequency is too low, the noise level too high, the pattern too multidimensional and the number of false positives too large. This discussion can be traced back to the 9/11 attacks, but has become relevant again following the current wave of terrorist attacks.
Conversely, the general popular literature on predictive analytics sees outlier algorithms as a benefit to society. Terrorism is here treated anecdotally and merely seen as yet another social problem that may be solved technically by the new possibilities offered by big data. It is claimed that extensive prevention and prediction are now possible. The prevention of terrorism and general predictive police work (Ferguson, 2012; Greengard, 2012) are here described as essentially identical phenomena with different expressions. With the new methods of predictive analytics, society’s existing profiling of terrorists and prevention of terrorist attacks can become even more effective, just and targeted (Mayer-Schönberger and Cukier, 2013; Siegel, 2013; Tucker, 2014). This generally positive approach contrasts with a more critical literature in which the criticism of terrorism prevention forms part of a more general criticism of the technocratic approach to social problems (Citron, 2009; Citron and Pasquale, 2014; Harcourt, 2006; Lyon and Zureik, 1996; Pasquale, 2015). The critical literature on the use of predictive analytics to predict terrorism points out three types of problems: the opportunity cost of using predictive methods in police investigations, problems building a sustainable statistical model for the prediction of terrorism and problems with the method’s internal relations when predicting terrorism. Below, the various points of criticism are described in detail and put into perspective.
6. The use of predictive analytics in practice
The current wave of Islamic terrorism in Europe has a greater political and social platform in ethnic religious extremist environments and among returning volunteers from the Syrian civil war (Bennhold, 2015). The group of potential terrorists is therefore large, but the group of actual terrorists small, which makes it difficult to allocate limited police resources optimally. Across Europe, authorities find it difficult to predict and focus correctly on who, from the large group of potential terrorists, will actually become terrorists. This requires prioritizing monitoring and investigations, and in some cases the prioritization has been wrong. The wrong people have been monitored, and the right people have gone free, simply because the number of potential terrorists is so vast.
It is often mentioned after attacks that the now dead terrorists were known to the police, but the police found that monitoring, investigating or arresting them was neither necessary nor possible. This prioritization involves tremendous opportunity costs when, due to limited resources, you choose not to investigate terrorists and instead investigate innocent people, violating their legal rights and wasting resources. This is an insoluble practical problem, as selecting a small target group from the much larger total group means both more undiscovered terrorists and more persecution of innocent people. This is why the extent and use of police resources is a major issue across Europe.
Successful prediction and prevention of terrorist attacks often require resource-intensive and time-consuming traditional police work involving interrogations, searches and informants in the extremist environments and digital networks. It is therefore difficult to choose which persons to investigate, and the risk of failure is great. If you look [2] at all public sources on current counter-terrorism efforts, you find that, in practice, the investigative breakthroughs in the prevention of terrorism reported in the media are entirely due to subject-based data mining (Bennhold, 2015). The wanted terrorist Salah Abdeslam, who was partly responsible for the Paris attacks in 2015, was arrested because he used a monitored mobile phone. Others involved in the Paris attacks were located based on their relations to and contacts with family members (Taub, 2016). Therefore, these investigative breakthroughs cannot be attributed to the use of predictive analytics, which is a pattern-based experimental method (DeRosa, 2004; Horgan, 2008; Jensen, 2002; McMorrow, 2009; Schneier, 2006). As mentioned above, the subject-based method is a digital version of traditional police work rather than pattern recognition and prediction based on large amounts of data with an infinite number of possible correlations (DeRosa, 2004). The results of counter-terrorist investigations are therefore consequences of the digitization of the investigations — an efficiency gain — rather than proof that predictive analytics works. The effect of the pattern-recognizing, inductive method is strongly debated, and in the literature it is assessed as an experimental method that can only be used as a complement to other methods (DeRosa, 2004; Hayden, 2016; Jensen, 2002). The challenge is that the inductive method seeks to find emerging patterns in relational and conditional data in infinite dimensions rather than unconditional and unrelated data in finite dimensions, which increases the complexity of and noise in the data material (DeRosa, 2004). The traditional method, based on investigating already convicted or suspected persons (subject-based), limits the complexity and noise dramatically, making it far more efficient (Bouchard, 2015; Chivers, 2003; DeRosa, 2004; Jonas and Harper, 2006; Kaufmann, 2010; Lum, et al., 2008). This underlines the practical problem of finding patterns in complex, open social systems with potentially infinite dimensions for predictive analytics.
7. Predictive analytics: The statistical premises and systemic problems
The underlying statistical premise for the value of the method is that society is a predictable, normally distributed, closed system. The statistical problem is that, like suicide, terrorism is a low-frequency event (Harcourt, 2007; Knibbs, 2014; Rosen, 1954), and every single event can be seen as unique (Schneier, 2015), which means that the risk of the base rate fallacy (Horgan, 2008) and of over-generalization increases. Methodological generalization is impossible, as the amount of data is too small (Mackenzie, 2015), and the result will always be underfitting or overfitting (Horgan, 2008). Specifically, there is not enough data to build a model or to train the model to make a meaningful prediction (Kaufmann, 2010). The problem is that there is no clearly defined pattern or statistical possibility of defining what can and must be seen as attempts, and failed attempts are often kept secret as an essential characteristic of criminal activity (Horgan, 2008; Jonas and Harper, 2006; Knibbs, 2014; Rosen, 1954). Any algorithmic classification will result in false negatives as well as false positives (Ananny, 2016; Diakopoulos, 2015; Kraemer, et al., 2011; Silver, 2013), and as described previously, the size of these groups depends on the relationship between the total population and the population sought. This problem is also found in the classification algorithms of terrorism prevention: on the one hand, people or events classified as terrorists or terrorist attacks even though they are innocent (the so-called false positives); and on the other, guilty but overlooked terrorists and terrorist plans classified as non-terrorists and non-plans (the so-called false negatives), which appear normal but are in fact outliers. This is a practical resource allocation problem, but also a theoretical statistical problem.
The result is far too many false positives and false negatives (Schneier, 2015; Silver, 2013), because the system is hypercomplex, builds on too little data (Jonas and Harper, 2006) and imposes excessive costs in the form of false positives and false negatives (Jonas and Harper, 2006; McLaughlin, 2016; Schneier, 2006).
The use of outlier algorithms creates many false positives (Weber and Bowling, 2013), because the method is used on large groups of people in which the group of true positives is very small (Chivers, 2003; McMorrow, 2009). As expressed by Chivers in a criticism of the NSA’s monitoring of possible terrorists:
Let’s start by recognizing that terrorism is extremely rare. So the probability that an individual under surveillance is also a terrorist is also extremely low [...] And now, we have all that we need. Apply a little special Bayes sauce: P(bad guy | +) = P(+ | bad guy) P(bad guy) / [ P(+ | bad guy) P(bad guy) + P(+ | good guy) P(good guy) ] and we get: P(bad guy | +) = 1/10,102. That is, for every positive (the NSA calls these ‘reports’) there is only a 1 in 10,102 chance that they’ve found a real bad guy. (Chivers, 2003)

This can be seen as a problem for all predictive analytics in which the searched population is very small compared to the total population (Schneier, 2015). In the words of Mims (2013), the relationship between the total population and the population sought may be even more extreme than in the Chivers equation:
Simple math shows why the NSA’s Facebook spying is a fool’s errand. Based on what biologist Corey Chivers assumes in his estimate of PRISM’s effectiveness, it’s pretty challenging to find an unlikely event (e.g., a person who is a terrorist) in any very large set of data [...] Plugging less charitable numbers into the equation easily yields results that are far worse than Chivers’ estimate, suggesting that analysts might be confronted with 100,000 false positives for every real terrorist. (Mims, 2013)

In the words of Kaufmann (2010) on the validity of the method:
The German dragnet investigation, for example, proved largely unsuccessful. The profiling parameters were widened so much that they were finally useless: 8.3 million individuals were analysed, 32,000 persons were identified as potential sleepers and yet no single person was charged with terrorism offences [...] This reflects what Vincent Cannistraro, former head of counter-terrorism at the CIA, stated, namely that he doesn’t know of any example where profiling caught a terrorist. [3]

The dilemma is that the model will be either too general and vague or based on too specific data to be able to create useful predictions of other factors than itself (Jensen, 2002). Therefore, predictive analytics in counter-terrorism requires a high frequency of events to be able to make sustainable predictions about future attacks. Fortunately, terrorist attacks are low-frequency events; unfortunately, this makes them impossible to predict.
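The arithmetic behind Chivers’ 1-in-10,102 figure is ordinary Bayesian updating. The sketch below reproduces it under one set of assumptions consistent with his post (roughly one terrorist per million people monitored and a classifier that is right 99 percent of the time in both directions); the prior and accuracy values are illustrative assumptions, not measured properties of any real system.

```python
# Bayes' theorem with assumptions that reproduce Chivers' 1-in-10,102 figure.
# The prior and the error rates below are illustrative assumptions only.
p_bad = 1 / 1_000_000        # assumed prior: one terrorist per million people monitored
p_pos_given_bad = 0.99       # assumed true-positive rate of the classifier
p_pos_given_good = 0.01      # assumed false-positive rate of the classifier

p_good = 1 - p_bad
p_bad_given_pos = (p_pos_given_bad * p_bad) / (
    p_pos_given_bad * p_bad + p_pos_given_good * p_good
)
print(f"P(bad guy | flagged) = 1 in {1 / p_bad_given_pos:,.0f}")   # about 1 in 10,102
# Mims' point: a lower prior or a higher false-positive rate pushes the same
# arithmetic toward roughly 100,000 false positives for every true positive.
```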
The systemic premise for believing that it is possible to prevent terrorism using predictive analytics and profiles based on traditional statistical methods is that the world is a generalized, normally distributed universe with a bell-shaped, Gaussian distribution (Bolin and Schwarz, 2015).
The problem is that terrorist attacks take place in social systems (Clauset, et al., 2007; Huey, et al., 2015) that are extreme, power-law distributed systems (Andriani and McKelvey, 2011), in which a terrorist attack can be described as an event with great effect and very limited probability. This makes such attacks difficult to predict, because we cannot use historical data or experience to predict them or to predict future terrorists (Horgan, 2008). It is not possible to use algorithmic, negative feedback mechanisms for prevention purposes in chaotic, power-law distributed systems, because it is statistically impossible to determine the optimal spread of the value (Matthias, 2011). It is therefore impossible to build predictive terrorism models based on historical data, as this assumes a non-existing stability and lack of randomness in terrorism (Bouchard, 2015; Huey, et al., 2015).
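A small simulation illustrates the difference between the two worlds. In a thin-tailed, Gaussian system, the largest event ever observed is barely distinguishable from the average; in a power-law system, a single event can dominate everything recorded before it, so historical averages carry little predictive information. The Pareto exponent and the sample sizes below are arbitrary illustrative choices, not estimates for terrorism.

```python
# Illustration only: why historical data is a poor guide in heavy-tailed systems.
# The distributions and parameters are arbitrary choices for demonstration.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

gaussian = rng.normal(loc=10, scale=2, size=n)       # thin-tailed "routine" events
heavy = (rng.pareto(a=1.5, size=n) + 1) * 10         # heavy-tailed event sizes

for name, x in [("Gaussian", gaussian), ("power law", heavy)]:
    share = 100 * x.max() / x.sum()                  # weight of the single largest event
    print(f"{name:9s}  mean={x.mean():8.1f}  max={x.max():10.1f}  "
          f"largest event = {share:5.2f}% of the total")
# In the Gaussian column the maximum is close to the mean and negligible in the sum;
# in the power-law column one extreme event can dwarf thousands of ordinary ones.
```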
8. Predictive analytics is implicitly based on interpretation, but this goes unrecognized
Statistically, the use of outlier algorithms is a problem, as all data and all types of data are potentially relevant, resulting in an infinite number of variables. When the number of possible correlations increases dramatically, it is both problematic and time-consuming to find and locate the central and critical correlations. It can be difficult to define and delimit what is normal as opposed to the outliers. Often, what is normal is dynamic and changes over time; often, the data material in various data dimensions will generate various groups of normal versus outliers; and often, it will be difficult to distinguish outliers from data noise. You look for a pattern, but this assumes that you have systematic knowledge about what you are looking for based on a recognized or unrecognized pattern (Everitt, et al., 2001), because the method is by definition acausal [4]. As Xu and Wunsch [5] also suggest regarding the subjectivism of outlier algorithms, once a proximity measure is determined, clustering could be constructed as an optimization problem with a specific criterion function. Again, the obtained clusters are dependent on the selection of the criterion function. The subjectivity of cluster analysis is thus inescapable [6]. Strictly speaking, it is therefore not possible to marginalize certain categorizations as more true than others (Everitt, et al., 2001; Greenacre, 2007; Xu and Wunsch, 2009). The result is that ‘everything that can be interpreted is valid’ [7]. In practice, predictive analytics used in counter-terrorism is generally interpretative but hides this by stating that the data speaks for itself (Anderson, 2008; Crawford, et al., 2014; Kitchin, 2014b).
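The criterion dependence that Xu and Wunsch describe can be shown with a few lines of code: the same records are grouped differently depending on an apparently innocuous preprocessing choice. The data and the scaling decision below are invented for illustration.

```python
# The same records, two defensible criteria, two different groupings.
# Data, feature scales and the clustering setup are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Two proxy signals on very different scales (say, messages per day vs. minutes online).
records = np.column_stack([rng.normal(0, 1, 300), rng.normal(0, 100, 300)])

labels_raw = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(records)
labels_scaled = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(records))

mismatch = np.mean(labels_raw != labels_scaled)
agreement = max(mismatch, 1 - mismatch)      # cluster labels may simply be swapped
print(f"records grouped the same way under both criteria: {agreement:.0%}")
# Neither grouping is 'more true' than the other; the analyst's criterion decides
# which grouping, and hence which deviations, the algorithm reports.
```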
9. Predictive analytics, dynamics and history
The premise of profiling and of using predictive analytics to predict terrorism is a stable system and a stable category, in which the quality of the system and the category is not affected by the system’s own categorization and its new, reality-creating effects (Harcourt, 2006). But to date, no one has been able to profile terrorists:
Much of the thinking about the terrorist is still rooted in assumptions about profiling, while simultaneously hinting at the sense of frustration that no terrorist profile has yet been found — not only between members of different terrorist movements but also among members of the same particular movement [...] in spite of the evidence that, logically, terrorist profiles are unlikely to appear at all — at least at a level meaningful or practical to those who call for their identification. [8]

The problem is that ethnicity is not a stable, categorical feature and proof that you are a terrorist (Harcourt, 2007; Kaufmann, 2010; Sandomir, 2009), because not all Middle Eastern Muslims are terrorists, and not all terrorists are from the Middle East. Just think of the growing group of ‘homegrown’ terrorists from Europe. Ethnic Muslim profiling is therefore problematic, as the classification is based on potentially unstable, dynamic traces that can be changed. It also mixes sufficient and necessary conditions for terrorism.
The uncertainty of the tipping point between those who are ‘at risk’ of becoming terrorists and those who are already ‘risky’ is framed in terms of the unknown:
Holding radical views, providing that they are promoted in non-violent ways, is not in itself normally regarded as incompatible with membership of a democratic free society [...] Nor is there any certainty as to whether and when a holder of extreme Islamist views might, if the circumstances arose, tip over into violent action. Where is the tipping point? The ‘preventative’ governance of terrorism produces a gap between suspect and terrorist subjectivities. [9]

Hence, the profiling and binary sorting in the use of predictive analytics assumes the stability of the categories, a postulated similarity between suspect and terrorist subjectivities, timelessness, and construct validity as a valid sign of terrorism. It claims that the essential quality of terrorism has been captured by the categorization. It is therefore problematic for the classification to relate to the complex and open world of terrorism, in which any categorization, through its pattern dependence and adaptation, carries the opportunity cost of precluding a valuable and better alternative classification. In the words of Bowker and Star:
The two basic problems for any overarching classification scheme in a rapidly changing and complex field can be described as follows. First, any classificatory decision made now might by its nature block off valuable future development [...] Inversely, if every possible relevant piece of information were stored in the scheme, it would be entirely unwieldy. [10]

So: how does classification relate to change and to the problem that there is always a price to pay for categorization, such as the loss of future dynamic potentiality? Naturally, anyone can adapt and change the classification algorithm manually in response to new knowledge, for example after a hermeneutic reading of Salafist texts online. However, this happens outside the method and independently of the method’s extensional logic and epistemological horizon as an automated process. Here lies the human, interpretative and directly subjective influence, in which human cognition is the central component (Brynielsson, et al., 2013).
As Cohen, et al. (2014) write, this creates practical problems as a function of the method’s inability to have hermeneutic [11] reflexivity. Efficiency requires interpretation:
To produce fully automatic computer tools for detecting lone wolf terrorists on the Internet is, in our view, not possible, both due to the enormous amounts of data (which is only partly indexed by search engines) and due to the deep knowledge that is needed to really understand what is discussed or expressed in written text or other kinds of data available on the Internet, such as videos or images. [12]

The problem with predictive analytics is that it lacks hermeneutic intervention and interpretation; it is historical and lacks an architecture for learning and a dynamic negotiation of its own sorting rules (Uprichard, 2013). This has been termed the stasis problem (Rosenthal, 2011), i.e., a lack of efficiency in the profiling and modelling of the future due to the fact that predictive algorithms project the past and thus assume that the past categorically equals the present. Predictive algorithms are therefore not suitable for open, dynamic systems with complex causal links, in which the future differs from the past and which require new or dynamic categories (Silver, 2013).
A current category can quickly become outdated, and relationships can quickly change in open, dynamic systems that are meta-reflexive. The categorization finds it difficult to set up rules for its own change, as quantitative data are essentially acausal and subject to an interpretative causality outside themselves, which lacks an automated, epistemologically conceptualized meta-model level of its own [13]. The lack of this meta-level emerges as a category problem. As Gladwell (2006) states: ‘a generalization about a generalization about a trait that is not, in fact, general [is] a category problem’. The problem is the epistemological complexity and the paradox that categorization involves searching for a structure while the actual search process is itself a structuring categorization (Everitt, et al., 2001). This duality is its limitation, as the quality of the prediction depends on the ability to learn from the dynamics of the categorization, such as structuring, pattern dependence and adaptation. The problem is that this structuring is unrecognized, non-conceptualized, hermeneutic, non-automated and non-modelled in the use of predictive analytics in counter-terrorism.
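The category problem can be made concrete with a toy classifier. Once trained, it can only reproduce the categories the past supplied; a genuinely novel kind of event is forced into one of the old boxes, often with apparent confidence. The data, labels and the new observation below are invented for illustration.

```python
# Minimal sketch of the stasis problem: a model trained on yesterday's categories
# forces tomorrow's novelty into one of them. All data and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
# Historical data with two known patterns of behaviour:
X_old = np.vstack([rng.normal([0, 0], 1, (200, 2)), rng.normal([4, 4], 1, (200, 2))])
y_old = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X_old, y_old)

# A future observation that resembles neither historical pattern:
x_new = np.array([[12.0, -6.0]])
print("assigned category:", int(clf.predict(x_new)[0]),
      "with estimated probability", float(clf.predict_proba(x_new).max().round(3)))
# There is no 'none of the above': the past is projected onto the future, and the
# model has no internal rule for renegotiating its own categories.
```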
10. Predictive analytics and the self-fulfilling prophecy
One criticism of the use of predictive analytics and the profiling of possible terrorists is that the underlying statistical model creates its own self-corroborating spiral in an infinite recursivity (Zarsky, 2014). This means that the category is recursive: the described or defined category explains and reinforces itself. Hence, it creates its own reality as a simulacrum (Baudrillard, 1983). For instance, the more the police stop and question ethnic Muslims (Weber and Bowling, 2013), the more probable it is that terrorists will be found in this category, confirming the link between terrorists and ethnic Muslims, and the more ethnic Muslims will be stopped and questioned in future (Harcourt, 2008, 2007, 2006).
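A toy simulation makes the recursive spiral visible. Two groups are given exactly the same underlying offence rate, but stops are allocated in proportion to the hits recorded so far, so an initially skewed record keeps reproducing itself. The group sizes, rates and allocation rule are invented for illustration.

```python
# Toy simulation of a self-corroborating profiling spiral. All numbers are invented.
import numpy as np

rng = np.random.default_rng(3)
true_rate = np.array([0.001, 0.001])   # both groups offend at exactly the same rate
hits = np.array([9.0, 1.0])            # historically recorded hits are already skewed

for _ in range(20):
    stops = 1000 * hits / hits.sum()               # stops allocated by past hit counts
    found = rng.binomial(stops.astype(int), true_rate)
    hits += found                                  # this year's hits feed next year's allocation

final_allocation = hits / hits.sum()
print("share of stops each group would receive next year:", np.round(final_allocation, 2))
# Despite identical underlying rates, the group that was over-stopped at the start keeps
# producing most of the recorded hits, which 'confirms' the profile and locks it in.
# Allocating the stops at random instead (see the discussion of randomization below)
# removes the lock-in, because recorded hits then track the true rates.
```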
The problem arises when the model potentially becomes a reality-creating model that creates the reality it claims to show and predict neutrally [14]. The overlooked consequence is that the measurement itself becomes an intervention in the world, corrupting the indicator and creating a new ontology of possible futures. This gives it a normative function, taking the form of algorithmic, generative rules that exert power over the possible futures. In the words of Lash: ‘Algorithmic generative rules are, as it were, virtuals that generate a whole variety of actuals’ [15]. This reality-creating problem is compounded by the fact that predictive algorithms are often associated with institutional abuse of monopoly power and a lack of transparency and neutrality, for example within intelligence services that want to prevent terrorism (Diakopoulos, 2015; Napoli, 2014). They are secret in their design and consist of proprietary data in a commercial and government context, such as credit rating in the financial sector, or they are found in closed government databases. This so-called black box problem makes it difficult to stop the recursivity or to criticize and discuss it. These self-fulfilling processes are automatic and impossible to stop unless the government ensures greater transparency (Citron and Pasquale, 2014; Pasquale, 2015, 2013, 2010; Zarsky, 2014). The self-reinforcing process is merely a technical problem. It can therefore be solved with a more qualified and reflexive use of data. The use of predictive methods in anti-terrorism is unfortunately, like predictive credit rating, applied statistics (Gladwell, 2006) rather than science, with more focus on utility value than on statistical evidence and precision. This raises a number of problems when you want to predict terrorism — problems of data quality and data use that are also found in credit scoring.
Indeed, it is possible to consider credit-scoring as merely ‘bad social science’. Shaoul (1989), for example, identifies at least five serious methodological problems with the process of credit scoring. These include: the use of small sample sizes; errors introduced from the statistical translation of verbal information; histogram error, because multiple discriminant analysis needs continuous data, not the discrete variables produced by scorecards; the use of median values to fill gaps in incomplete application forms; and the problem of reject inference [16]. Reject inference is the problem that credit scoring is applied to data on the accepted population of clients rather than the total population, because there will typically not be any performance data available for the “rejected” population. Such non-random sample selection may produce bias in the estimated model parameters, and accordingly the model’s predictions of repayment performance may not be optimal. The problem of reject inference can be considered a selection bias/false negative problem for the statistical model (Banasik and Crook, 2003).
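The reject-inference problem can be illustrated with a short simulation: a scorecard fitted only on the previously accepted applicants systematically mis-estimates risk for the full applicant pool, because the acceptance decision depended on information the new model never sees. The data-generating process, variable names and coefficients below are all invented for illustration.

```python
# Toy illustration of reject inference as a selection-bias problem. All numbers invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 50_000
income = rng.normal(size=n)                  # observed by the new scorecard
judgement = rng.normal(size=n)               # old underwriters' information, unobserved now

p_default = 1 / (1 + np.exp(-(-2.0 - 1.0 * income - 1.5 * judgement)))
default = (rng.random(n) < p_default).astype(int)

accepted = judgement > 0     # historical acceptance used the old information, so repayment
                             # outcomes exist only for this non-random subpopulation
X = income.reshape(-1, 1)

scorecard = LogisticRegression().fit(X[accepted], default[accepted])
predicted_rate = scorecard.predict_proba(X)[:, 1].mean()
print(f"actual default rate in the full applicant pool:  {default.mean():.1%}")
print(f"rate predicted by the accepted-only scorecard:   {predicted_rate:.1%}")
# The model trained only on accepted clients understates risk for the whole pool,
# the non-random selection problem described by Banasik and Crook (2003).
```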
Applied non-evidence-based statistics, filter bubbles and simulacra are not necessary conditions for the algorithmic method (Pariser, 2012), but one possible presentation among others, which can be changed with a more evidence-based methodological design and through randomization: in effect, a statistical function that arbitrarily selects something else. Randomization would break the self-reinforcing spiral and provide greater equality across the entire relevant population. Another benefit is that it is impossible to cheat a system based on randomization (Harcourt, 2008, 2007, 2006). In conclusion, the quality of the prediction of terrorism depends on control and correction of the reality-creating effects of the model, e.g., through randomization and evidence-based methodological design. But this would in turn require the non-existent meta-model to correct the reality-creating effects.
11. Predictive analytics and the gamification of categorization
People’s relationships with each other are infinitely reflective and epistemological in open systems, but people are also instrumentally reflective and able to reflect on the rules applied to them (Tufekci, 2013). Therefore, social relations are always at risk of being gamed, as already described by sociology scholars in the 1970s (Campbell, 1979). As Campbell put it, any indicator and sign of a category will be compromised over time: ‘The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor’ [17]. Hence, the profiling of terrorists has a meta-reflexive categorization problem, as terrorists see the signal and learn from the police’s profiling over time, removing the affiliation trace (Schneier, 2015) so that the category no longer fits as a sign of terrorism. This is called adaptation and pattern-dependence. It can happen, for example, through the recruitment of non-profiled ethnic European converts for terrorism purposes (Gladwell, 2006; Harcourt, 2007, 2006; Kaufmann, 2010). The category can be destabilized by changing both the profile and the method of attack, as was seen, for example, after one of the terrorist attacks in London (Gladwell, 2006). The quality of the categorization therefore depends on its signal value to the profiled persons. This is temporally conditioned as, sooner or later, the category will be corrupted or the relationship between the trace and the category will change dynamically from stasis. The problem is that preventive predictive analytics may stop a crime from happening in the future, but this makes the correcting, risk-minimizing measures and profiling an event and a signal in the past, present and future:
The object itself changes owing to the strategic focus on the anticipation of a danger [...] It is not an act or a disturbance that becomes real, as the intention is to prevent even the possibility of an unwanted incident. The interference itself becomes an event, indeed, it leaves a trace in the case of a concrete culprit who, in the end gets interrogated, arrested or excluded. [18]

The use of predictive analytics to fight terrorism is limited by the ability to reflexively categorize learning, meta-reflexive systems in which the construct is continuously corrupted because the strategies are pattern-dependent and dialectically adaptive. This meta-reflexivity requires an ability to meta-model that this method in counter-terrorism cannot accommodate automatically.
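Campbell’s point about indicator corruption can be sketched numerically: once a monitored proxy signal is known to be the basis for profiling, those it targets stop producing it, while the background rate among everyone else is unchanged, so the flag keeps generating false positives while its hit rate collapses. All counts and probabilities below are invented for illustration.

```python
# Toy sketch of adaptation and pattern-dependence: the monitored trace is abandoned by
# the profiled group once it is acted upon. All counts and probabilities are invented.
import numpy as np

rng = np.random.default_rng(6)
n_innocent, n_guilty = 100_000, 10

def flags(p_innocent, p_guilty):
    false_pos = int((rng.random(n_innocent) < p_innocent).sum())
    true_pos = int((rng.random(n_guilty) < p_guilty).sum())
    return false_pos, true_pos

# Before adaptation: the proxy trace is strongly associated with the profiled group.
print("before adaptation (false positives, true positives):", flags(0.001, 0.9))
# After adaptation: the profiled group has learned which trace is monitored and drops it;
# the background rate among the innocent stays the same.
print("after adaptation  (false positives, true positives):", flags(0.001, 0.05))
# The indicator is corrupted precisely because it was used for decision-making, as
# Campbell (1979) argued for quantitative social indicators in general.
```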
12. Conclusion
This article’s question was whether it is possible to predict terrorism using predictive analytics. The answer was that doing so raises a wide range of technical and theoretical issues that limit the method’s value. These can inductively be used to set up a number of more general problems and limitations in the use of predictive analytics in open social systems. They can be categorized as three types of problems: problems associated with the high opportunity cost of comparatively many false negatives and false positives, because the searched population is so small; problems building a sustainable statistical model; and problems with the model’s negative recursive effects on itself, since the unrecognized hermeneutic premise of the method and the lack of a meta-model for its own modelling lead to stasis, corruption and simulacra effects. Thus the problem with predictive analytics is that the method lacks an explicitly automated model for how to analyze itself. As described by a practitioner in counter-terrorism:
The problem is far worse than “finding a needle in a haystack”. In that analogy, the needle is easy to identify once it is observed. In contrast, many problems of counter-terrorism are, in the words of DARPA’s Ted Senator, about “assembling and identifying dangerous needles in stacks of needle pieces”. The problem is to infer the existence of clandestine organizations and activities, based on lower-level records that relate people, places, things, and events. [19]

This requires a profound interpretative competence, because the violent ideology behind terrorism is not or cannot be meaningfully analyzed using an automatic algorithm. In the words of Tukey on the meta-model issue and its hidden hermeneutic premise in predictive analytics: ‘No data set is large enough to provide complete information about how it should be analyzed’ [20]. The use of predictive methods to predict terrorism is therefore ineffective, risky and inappropriate, with potentially 100,000 false positives for every real terrorist that the algorithm finds.
About the author
Timme Bisgaard Munk, Ph.D., is a postdoc at the Royal School of Library and Information Science, University of Copenhagen, Denmark.
E-mail: timme [at] kforum [dot] dk
Notes
1. Jonas and Harper, 2006, p. 6; Chivers, 2003.
2. Mims, 2013; Kaufmann, 2010, p. 71.
3. Kaufmann, 2010, p. 71.
4. Sayer, 1992, p. 180.
5. Xu and Wunsch, 2009, p. 6.
6. Ibid.
7. Benzécri, 1992, p. 89.
8. Horgan, 2008, p. 83.
9. Heath-Kelly, 2012, p. 79.
10. Bowker and Star, 1999, p. 69.
11. Horgan, 2008, p. 83.
12. Cohen, et al., 2014, p. 247.
13. Heath-Kelly, 2012, p. 79.
14. Citron and Pasquale, 2014; Cohen, et al., 2014, p. 247.
15. Lash, 2007b, p. 71.
16. Leyshon, 1999, p. 448.
17. Campbell, 1979, p. 34.
18. Krasmann, 2007, p. 307.
19. Jensen, 2002, p. 9.
20. Tukey, 1997, p. 21.
References
Charu C. Aggarwal and Philip S. Yu, 2001. “Outlier detection for high dimensional data,” SIGMOD ’01: Proceedings of the 2001 ACM SIGMOD International Conference on Management of Data, pp. 37–46.
doi: http://dx.doi.org/10.1145/375663.375668, accessed 7 August 2017.
Mike Ananny, 2016. “Toward an ethics of algorithms: Convening, observation, probability, and timeliness,” Science, Technology, & Human Values, volume 41, number 1, pp. 93–117.
doi: http://dx.doi.org/10.1177/0162243915606523, accessed 7 August 2017.
Chris Anderson, 2008. “The end of theory: The data deluge makes the scientific method obsolete,” Wired (23 June), at https://www.wired.com/2008/06/pb-theory/, accessed 7 August 2017.
Pierpaolo Andriani and Bill McKelvey, 2011. “From skew distributions to power-law science,” In: Peter Allen, Steve Maguire and Bill McKelvey (editors). Sage handbook of complexity and management. London: Sage, pp. 254–273.
doi: http://dx.doi.org/10.4135/9781446201084.n16, accessed 7 August 2017.
John Banasik and Jonathan Crook, 2003. “Lean models and reject inference,” at https://www.researchgate.net/publication/215991640_Lean_models_and_reject_inference, accessed 7 August 2017.
Jean Baudrillard, 1983. Simulations. Translated by Paul Foss, Paul Patton and Philip Beitchman. New York: Semiotext(e).
Katrin Bennhold, 2015. “Paris attacks highlight jihadists’ easy path between Europe and ISIS territory,” New York Times (18 November), at http://www.nytimes.com/2015/11/19/world/europe/paris-attacks-islamic-state-jihadis.html, accessed 7 August 2017.
Jean-Paul Benzécri, 1992. Correspondence analysis handbook. New York: Marcel Dekker.
Göran Bolin and Jonas Andersson Schwarz, 2015. “Heuristics of the algorithm: Big Data, user interpretation and institutional translation,” Big Data & Society, volume 2, number 2.
doi: http://dx.doi.org/10.1177/2053951715608406, accessed 7 August 2017.
Martin Bouchard (editor), 2015. Social networks, terrorism and counter-terrorism: Radical and connected. London: Taylor & Francis.
Geoffrey C. Bowker and Susan Leigh Star, 1999. Sorting things out: Classification and its consequences. Cambridge, Mass.: MIT Press.
danah boyd and Kate Crawford, 2012. “Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon,” Information, Communication & Society, volume 15, number 5, pp. 662–679.
doi: http://dx.doi.org/10.1080/1369118X.2012.678878, accessed 7 August 2017.
Joel Brynielsson, Andreas Horndahl, Fredrik Johansson, Lisa Kaati, Christian Maartenson and Pontus Svenson, 2013. “Harvesting and analysis of weak signals for detecting lone wolf terrorists,” Security Informatics, volume 2, number 11.
doi: https://doi.org/10.1186/2190-8532-2-11, accessed 7 August 2017.
Donald T. Campbell, 1979. “Assessing the impact of planned social change,” Evaluation and Program Planning, volume 2, number 1, pp. 67–90.
doi: https://doi.org/10.1016/0149-7189(79)90048-X, accessed 7 August 2017.
Corey Chivers, 2003. “How likely is the NSA PRISM program to catch a terrorist?” (6 June), at https://bayesianbiologist.com/2013/06/06/how-likely-is-the-nsa-prism-program-to-catch-a-terrorist/, accessed 7 August 2017.
Danielle Keats Citron, 2009. “Cyber civil rights,” Boston University Law Review, volume 89, pp. 61–125.
Danielle Keats Citron and Frank Pasquale, 2014. “The scored society: Due process for automated predictions,” Washington Law Review, volume 89, pp. 1–33.
Aaron Clauset, Maxwell Young and Kristian Skrede Gleditsch, 2007. “On the frequency of severe terrorist events,” Journal of Conflict Resolution, volume 51, number 1, pp. 58–87.
doi: https://doi.org/10.1177/0022002706296157, accessed 7 August 2017.
Katie Cohen, Fredrik Johansson, Lisa Kaati and Jonas Clausen Mork, 2014. “Detecting linguistic markers for radical violence in social media,” Terrorism and Political Violence, volume 26, number 1, pp. 246–256.
doi: https://doi.org/10.1080/09546553.2014.849948, accessed 7 August 2017.
Derek B. Cornish and Ronald V. Clarke, 2003. “Opportunities, precipitators and criminal decisions: A reply to Wortley’s critique of situational crime prevention,” Crime Prevention Studies, volume 16, pp. 41–96.
Kate Crawford, 2013. “The hidden biases of big data,” Harvard Business Review (1 April), at https://hbr.org/2013/04/the-hidden-biases-in-big-data, accessed 7 August 2017.
Kate Crawford, Kate Miltner and Mary L. Gray, 2014. “Critiquing big data: Politics, ethics, epistemology,” International Journal of Communication, volume 8, pp. 1,663–1,672, and at http://ijoc.org/index.php/ijoc/article/view/2167/1164, accessed 7 August 2017.
Mary DeRosa, 2004. “Data mining and data analysis for counterterrorism,” Center for Strategic and International Studies (CSIS), at https://cdt.org/files/security/usapatriot/20040300csis.pdf, accessed 7 August 2017.
Nicholas Diakopoulos, 2015. “Algorithmic accountability: Journalistic investigation of computational power structures,” Digital Journalism, volume 3, number 3, pp. 398–415.
doi: https://doi.org/10.1080/21670811.2014.976411, accessed 7 August 2017.
European Commission, 2016. “Commission paves the way towards a Security Union and reports on EU-Turkey Agreement” (20 April), at http://europa.eu/rapid/press-release_WM-16-1618_en.pdf, accessed 7 August 2017.
Brian S. Everitt, Sabine Landau and Morven Leese, 2001. Cluster analysis. Fourth edition. London: Arnold.
Andrew Guthrie Ferguson, 2015. “Big data and predictive reasonable suspicion,” University of Pennsylvania Law Review, volume 163, number 2, pp. 327–410.
Andrew Guthrie Ferguson, 2012. “Predictive policing and reasonable suspicion,” Emory Law Journal, volume 62, number 2, pp. 259–325.
Lisa Gitelman (editor), 2013. Raw data is an oxymoron. Cambridge, Mass.: MIT Press.
Malcolm Gladwell, 2006. “What pit bulls can teach us about profiling,” New Yorker (6 February), at http://www.newyorker.com/magazine/2006/02/06/troublemakers-2, accessed 7 August 2017.
Michael Greenacre, 2007. Correspondence analysis in practice. Second edition. Boca Raton, Fla.: Chapman & Hall/CRC.
Samuel Greengard, 2012. “Policing the future,” Communications of the ACM, volume 55, pp. 19–21.
doi: https://doi.org/10.1145/2093548.2093555, accessed 7 August 2017.
Cory P. Haberman and Jerry H. Ratcliffe, 2012. “The predictive policing challenges of near repeat armed street robberies,” Policing, volume 6, number 2, pp. 151–166.
doi: https://doi.org/10.1093/police/pas012, accessed 7 August 2017.
Bernard E. Harcourt, 2008. “A reader’s companion to Against prediction: A reply to Ariela Gross, Yoram Margalioth, and Yoav Sapir on economic modeling, selective incapacitation, governmentality, and race,” Law & Social Inquiry, volume 33, number 1, pp. 265–283.
Bernard E. Harcourt, 2007. Against prediction: Profiling, policing, and punishing in an actuarial age. Chicago: University of Chicago Press.
Bernard E. Harcourt, 2006. “Muslim profiles post-9/11: Is racial profiling an effective counterterrorist measure and does it violate the right to be free from discrimination?” University of Chicago, John M. Olin Program in Law and Economics, Working Paper, number 288, at http://chicagounbound.uchicago.edu/law_and_economics/329/, accessed 7 August 2017.
Michael V. Hayden, 2016. Playing to the edge: American intelligence in the age of terror. New York: Penguin Press.
Charlotte Heath-Kelly, 2012. “Reinventing prevention or exposing the gap? False positives in UK terrorism governance and the quest for pre-emption,” Critical Studies on Terrorism, volume 5, number 1, pp. 69–87.
doi: http://dx.doi.org/10.1080/17539153.2012.659910, accessed 7 August 2017.
John Horgan, 2008. “From profiles to pathways and roots to routes: Perspectives from psychology on radicalization into terrorism,” Annals of the American Academy of Political and Social Science, volume 618, number 1, pp. 80–94.
doi: https://doi.org/10.1177/0002716208317539, accessed 7 August 2017.
Laura Huey, Joseph Varanese and Ryan Broll, 2015. “The gray cygnet problem in terrorism research,” In: Martin Bouchard (editor). Social networks, terrorism and counter-terrorism: Radical and connected. London: Routledge, pp. 34–47.
IBM, 2015. “IBM i2 Enterprise Insight Analysis V1.0.11 delivers product enhancements and additional national language support” (10 March), at https://www-01.ibm.com/common/ssi/rep_ca/0/897/ENUS215-040/ENUS215-040.PDF, accessed 7 August 2017.
David Jensen, 2002. “Data mining in networks,” University of Massachusetts, Computer Science Department, Faculty Publication Series, paper 67, at http://scholarworks.umass.edu/cs_faculty_pubs/67/, accessed 7 August 2017.
Jeff Jonas and Jim Harper, 2006. “Effective counterterrorism and the limited role of predictive data mining,” Cato Institute, Policy Analysis, number 584 (11 December), at https://object.cato.org/sites/cato.org/files/pubs/pdf/pa584.pdf, accessed 7 August 2017.
Mareile Kaufmann, 2010. Ethnic profiling and counter-terrorism: Examples of European practice and possible repercussions. Hamburger Studien zur Kriminologie und Kriminalpolitik, band 46. Berlin: LIT Verlag.
Rob Kitchin, 2014a. “Big Data, new epistemologies and paradigm shifts,” Big Data & Society, volume 1, number 1.
doi: https://doi.org/10.1177/2053951714528481, accessed 7 August 2017.
Rob Kitchin, 2014b. The data revolution: Big data, open data, data infrastructures and their consequences. Thousand Oaks, Calif.: Sage.
Kate Knibbs, 2014. “Researchers are working on an algorithm to help prevent suicide,” Daily Dot (25 May), at http://www.dailydot.com/technology/algorithm-suicide-prevention/, accessed 7 August 2017.
Felicitas Kraemer, Kees van Overveld and Martin Peterson, 2011. “Is there an ethics of algorithms?” Ethics and Information Technology, volume 13, number 3, pp. 251–260.
doi: https://doi.org/10.1007/s10676-010-9233-7, accessed 7 August 2017.
Susanne Krasmann, 2007. “The enemy on the border: Critique of a programme in favour of a preventive state,” Punishment & Society, volume 9, number 3, pp. 301–318.
doi: https://doi.org/10.1177/1462474507077496, accessed 7 August 2017.
Scott Lash, 2007a. “Capitalism and metaphysics,” Theory, Culture & Society, volume 24, number 5, pp. 1–26.
doi: https://doi.org/10.1177/0263276407081281, accessed 7 August 2017.
Scott Lash, 2007b. “Power after hegemony: Cultural studies in mutation?” Theory, Culture & Society, volume 24, number 3, pp. 55–78.
doi: http://dx.doi.org/10.1177/0263276407075956, accessed 7 August 2017.
Cynthia Lum, Leslie W. Kennedy and Alison Sherley, 2008. “Is counter-terrorism policy evidence-based? What works, what harms, and what is unknown,” Psicothema, volume 20, number 1, pp. 35–42, and at http://www.psicothema.com/pdf/3426.pdf, accessed 7 August 2017.
David Lyon and Elia Zureik (editors), 1996. Computers, surveillance, and privacy. Minneapolis: University of Minnesota Press.
Adrian Mackenzie, 2015. “The production of prediction: What does machine learning want?” European Journal of Cultural Studies, volume 18, numbers 4–5, pp. 429–445.
doi: https://doi.org/10.1177/1367549415577384, accessed 7 August 2017.
Astrid Mager, 2012. “Algorithmic ideology: How capitalist society shapes search engines,” Information, Communication & Society, volume 15, number 5, pp. 769–787.
doi: http://dx.doi.org/10.1080/1369118X.2012.676056, accessed 7 August 2017.
Andreas Matthias, 2011. “Algorithmic moral control of war robots: Philosophical questions,” Law, Innovation and Technology, volume 3, number 2, pp. 279–301.
doi: http://dx.doi.org/10.5235/175799611798204923, accessed 7 August 2017.
Viktor Mayer-Schönberger and Kenneth Cukier, 2013. Big data: A revolution that will transform how we live, work, and think. Boston: Houghton Mifflin Harcourt.
Colleen McCue, 2015. Data mining and predictive analysis: Intelligence gathering and crime analysis. Second edition. Oxford: Butterworth-Heinemann.
Jenna McLaughlin, 2016. “The White House asked social media companies to look for terrorists. Here’s why they’d #Fail,” The Intercept (20 January), at https://theintercept.com/2016/01/20/the-white-house-asked-social-media-companies-to-look-for-terrorists-heres-why-theyd-fail/, accessed 7 August 2017.
D. McMorrow, 2009. “Rare events,” JSR-09-108 (October), at https://fas.org/irp/agency/dod/jason/rare.pdf, accessed 7 August 2017.
Christopher Mims, 2013. “Simple math shows why the NSA’s Facebook spying is a fool’s errand,” Quartz (7 June), at https://qz.com/92207/simple-math-shows-why-the-nsas-facebook-spying-is-a-fools-errand/, accessed 7 August 2017.
Evgeny Morozov, 2013. To save everything, click here: Technology, solutionism, and the urge to fix problems that don’t exist. London: Allen Lane.
Philip M. Napoli, 2014. “Automated media: An institutional theory perspective on algorithmic media production and consumption,” Communication Theory, volume 24, number 3, pp. 340–360.
doi: http://dx.doi.org/10.1111/comt.12039, accessed 7 August 2017.
Graeme Newman, Ronald V. Clarke and S. Giora Shoham (editors), 1997. Rational choice and situational crime prevention: Theoretical foundations. Aldershot: Ashgate.
Alan Norrie, 2002. “Review of Ethical and social perspectives in situational crime prevention, edited by A. Von Hirsch, D. Garland and A. Wakefield/The judicial role in criminal proceedings, edited by S. Doran and J. Jackson,” King’s Law Journal, volume 13, number 1, pp. 128–131.
Eli Pariser, 2012. The filter bubble: What the Internet is hiding from you. London: Penguin.
Frank Pasquale, 2015. The black box society: The secret algorithms that control money and information. Cambridge, Mass.: Harvard University Press.
Frank Pasquale, 2013. “Grand bargains for big data: The emerging law of health information,” Maryland Law Review, volume 72, number 3, pp. 682–772, and at http://digitalcommons.law.umaryland.edu/mlr/vol72/iss3/2, accessed 7 August 2017.
Frank Pasquale, 2010. “Beyond innovation and competition: The need for qualified transparency in Internet intermediaries,” Northwestern University Law Review, volume 104, number 1, pp. 105–173.
Albert Rosen, 1954. “Detection of suicidal patients: An example of some limitations in the prediction of infrequent events,” Journal of Consulting Psychology, volume 18, number 6, pp. 397–403.
Danny Rosenthal, 2011. “Assessing digital preemption (and the future of law enforcement?),” New Criminal Law Review, volume 14, number 4, pp. 576–610.
doi: http://dx.doi.org/10.1525/nclr.2011.14.4.576, accessed 7 August 2017.
David C. Sandomir, 2009. “Preventing terrorism in the long term: The disutility of racial profiling in preventing crime and the counterproductive nature of ethnic and religious profiling in counterterrorism policing,” Master’s thesis, Naval Postgraduate School, Monterey, Calif.; version at http://www.dtic.mil/get-tr-doc/pdf?AD=ADA514381, accessed 7 August 2017.
Andrew Sayer, 1992. Method in social science: A realist approach. London: Routledge.
Frederick F. Schauer, 2003. Profiles, probabilities, and stereotypes. Cambridge, Mass.: Belknap Press of Harvard University Press.
Bruce Schneier, 2015. Data and Goliath: The hidden battles to collect your data and control your world. New York: Norton.
Bruce Schneier, 2006. “Why data mining won’t stop terror,” Wired, at https://www.wired.com/2006/03/why-data-mining-wont-stop-terror/, accessed 7 August 2017.
Michael Shaoul, 1989. “Ticking for the truth: An investigation of credit scoring,” paper presented at Touche Ross European Doctoral Colloquium in Accounting (Stuttgart, Germany; 2-4 April 1989).
Eric Siegel, 2013. Predictive analytics: The power to predict who will click, buy, lie, or die. Hoboken, N.J.: Wiley.
Nate Silver, 2013. The signal and the noise: Why so many predictions fail — but some don’t. London: Penguin Books.
Paul Sperry, 2005. “When the profile fits the crime,” New York Times (28 July), at http://www.nytimes.com/2005/07/28/opinion/when-the-profile-fits-the-crime.html, accessed 7 August 2017.
Ben Taub, 2016. “Salah Abdeslam, captured in Brussels,” New Yorker (19 March), at http://www.newyorker.com/news/news-desk/salah-abdeslam-captured-in-brussels, accessed 7 August 2017.
Alan Travis, 2015. “European counter-terror plan involves blanket collection of passengers’ data,” Guardian (27 January), at https://www.theguardian.com/uk-news/2015/jan/28/european-commission-blanket-collection-passenger-data, accessed 7 August 2017.
Patrick Tucker, 2016a. “How traffic to this YouTube video predicts ISIS attacks,” Defense One (3 May), at http://www.defenseone.com/technology/2016/05/how-traffic-youtube-video-predicts-future-isis-attacks/127962/, accessed 7 August 2017.
Patrick Tucker, 2016b. “Refugee or terrorist? IBM thinks its software has the answer,” Defense One (27 January), at http://www.defenseone.com/technology/2016/01/refugee-or-terrorist-ibm-thinks-its-software-has-answer/125484/, accessed 7 August 2017.
Patrick Tucker, 2014. The naked future: What happens in a world that anticipates your every move? London: Penguin.
Zeynep Tufekci, 2013. “Big data: Pitfalls, methods and concepts for an emergent field,” (7 March).
doi: http://dx.doi.org/10.2139/ssrn.2229952, accessed 7 August 2017.
John Tukey, 1997. “More honest foundations for data analysis,” Journal of Statistical Planning and Inference, volume 57, number 1, pp. 21–28.
doi: https://doi.org/10.1016/S0378-3758(96)00032-8, accessed 7 August 2017.
Emma Uprichard, 2013. “Focus: Big data, little questions?” Discover Society (1 October), at http://discoversociety.org/2013/10/01/focus-big-data-little-questions/, accessed 7 August 2017.
Leanne Weber and Ben Bowling (editors), 2013. Stop and search: Police power in global context. Abingdon: Routledge.
Rui Xu and Donald C. Wunsch, 2009. Clustering. Oxford: Wiley.
Tal Z. Zarsky, 2014. “Understanding discrimination in the scored society,” Washington Law Review, volume 89, number 4, pp. 1,375–1,412.
Tal Z. Zarsky, 2013. “Transparent predictions,” Illinois Law Review, volume 2013, number 4, pp. 1,503–1,569, and at https://illinoislawreview.org/print/volume-2013-issue-4/transparent-predictions/, accessed 7 August 2017.
Editorial history
Received 25 November 2016; revised 14 June 2017; revised 31 July 2017; accepted 5 August 2017.
This paper is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

100,000 false positives for every real terrorist: Why anti-terror algorithms don’t work
by Timme Bisgaard Munk
First Monday, Volume 22, Number 9 - 4 September 2017
https://firstmonday.org/ojs/index.php/fm/article/download/7126/6522
doi: http://dx.doi.org/10.5210/fm.v22i19.7126