With the rise of ‘Internet behemoths’ and the surveillance of increasingly personal domains there is a trend toward questioning life online. This paper draws attention to hacking practices that engage with the diverse faces of online veillance. Current debates about hacking surveillance are introduced. Instead of portraying hacking as a digital counterculture, the article complicates dichotomies of power vs. resistance, online vs. off-line, and technological system vs. social practice. Based on qualitative interviews, it introduces the diverse, dialogical and ambiguous hacking practices that respond to online surveillance. The article suggests using the concept of dispute to capture these multiplicities and to understand the ‘orders of worth’ at stake in online environments. The small, continuous and constitutive dynamics of disputing online surveillance not only create political momentum, but also call for a re-thinking of the totalizing surveillance metaphors in use today.
Waking up to a life online
Understanding hacking — From countercultures to dispute
Craft, explore, redefine, create: A portrayal of hackers disputing surveillance
Waking up to a life online
“With any click, with any application, any form of online use one is categorized, x-rayed.” The interviewed hacker Panoptipwned pauses. “Surveillance is everywhere. It is as present as electricity.” Panoptipwned summarizes a development that hackers tend to have an ambiguous relationship to, namely that the Internet is inseparable from surveillance. To most hackers, the Internet is a space that they partly built themselves, which creates a sense of ownership. At the same time, it is also an infrastructure that has traceability and surveillance deeply ingrained in its architecture. In this article, I explore the many ways in which hackers engage with or dispute online surveillance.
Now that digital technologies rearrange social life at full speed, the insight that everything smart is also personal and political (cf., Amoore and Piotukh, 2016) has become increasingly mainstream. In search of a usability-oriented and efficiently organized life, mainstream discourses have neglected that all of our digital moves play into the hands of those who have the privileged overview. Complicated terms and conditions, as well as smooth interfaces that obscure the complexity of the relationship between technology and society, make it easy to postpone more thorough engagement with the societal and political consequences of digital technology use. After all, most citizens have consented to technology companies watching us (Andrejevic and Gates, 2014), computing our personal preferences and calculating our love lives.
In recent years, however, more self-critical reporting has begun to problematize our role as consumers (cf., New Scientist, 2018): clicks, swipes and likes have given rise to “behemoths” (Zhong, 2018) and, while we are still processing the consequences of our actions, our virtual assistants (for example, Amazon’s Alexa) literally laugh at us. The vocabulary of the “frightful five” (referring to Alphabet, Amazon, Apple, Facebook, Microsoft; Manjoo, 2017), China’s “digital authoritarianism” (Erixon and Lee-Makiyama, 2011), “filter bubbles” (Pariser, 2011), information “fakes” (Carson, 2018) and “hoaxes” (Ohlheiser, 2017), “digital burnouts” (Itstimetologoff, 2016) or the digital push to condense time (Uhle, 2017) has created a trend towards re-thinking life online. Those efforts include anything from legal, yet slow-moving, initiatives like data protection regulation (e.g., the EU General Data Protection Regulation as described in European Commission, 2018; Zuboff, 2019) or policies for opting out of digital services (Burgess, 2018) to personal choices of “un-facebooking” (Evans, 2014) and “digital detox” (see Camp “Breakout”) — or, more generally, active and passive “non-participation” (Casemajor, et al., 2015).
Within this context, I want to draw attention to a different position. I have interviewed subjects who self-identify as hackers and analyze the many ways in which they respond to online surveillance without abandoning the Internet as such. According to Bl4ckb0x, hackers do not take technologies as a given as they “de- and re-construct” them in order to “discover” (Crypsis), “define the undefined” (Panoptipwned), “test” (heisenbugwatch), “reinvent” (LOLveillance), “create” (3x3cute) or “divert” (GCSgateway). Hacking is a “science of staying curious” (3x3cute), which is also a starting point for critique. What binds all interviewees together is their commitment to hack online surveillance. Not all hackers do so. Hacking is associated with diverse cultures, which is why an interest in hacking surveillance was a precondition for taking part in the interview study. This shared commitment also underlines another important point: the interviewees’ ways of engaging with online surveillance are more complicated than resisting online solutions. The reasons for this can be manifold. For example, a complete opt-out of online services puts weight on the shoulders of the individual, who pays with a loss of utility, efficiency, connection, and everyday transactions (Brunton and Nissenbaum, 2011). A trend that illustrates this problem is the growing use of Facebook for the organization of everyday activities, some of them quite official in character (e.g., for parenting and childcare, Bartholomew, et al., 2012). Further, disconnecting from official services (such as companies, banks and administrative authorities) also implies social and financial costs that not everyone can afford (Morozov, 2017). For others, tinkering with digital technologies is simply fun (Sicart, 2014). The interviewees mention that they enjoy “developing something further” (DataD14709), “being awake” (Numbercruncha) and “setting an example” (IzMyQ).
Hackers especially tend to have a high sense of identification with the Internet and digital technologies. Technologies are key to how they express, make and reproduce themselves socially (Coleman and Golub, 2008). Their approach is then to use the same technologies to perform an intricate balancing act of being invisible to surveillers while remaining visible to social networks (cf., Grosser, 2017). Here, the article shows how interviewees draw on different knowledges and interests to challenge online surveillance. Indeed, my study was guided by the aim of diversifying mainstream notions of hacking. Hackers do not agree on techniques and reasons for responding to online surveillance. The political effects of their actions are as diffuse as their ethics and the types of technologies they interrogate (Coleman, 2017). Hackers do not self-identify as one group. Thus, the article does not follow a singular definition of hacking, nor does it work with the model of different hats — black for illegal hacks, white for ethical hacks, grey for hacks that are ethical at the outset but may be illegal, blue for hacks performed within a business framework — because hats tend to simplify a rather complex practice. The interviewees range from the technically highly skilled to those who use more mundane hacking practices. Some conduct hacks that are considered illegal, while others disapprove of them. As a result, I do not intend to give an overview of surveillance hacks that is representative of hacker culture. I portray “small corners of activity in a vast territory” and I use the notion of hacking as a “resource” (Chun, 2008) to portray multiple practices of hacking online surveillance.
Understanding hacking — From countercultures to dispute
Different scholars have started a rich dialogue about and with digital countercultures (Jordan and Taylor, 2004; Coleman and Golub, 2008; Brunton and Nissenbaum, 2011; Hampson, 2012; Kubitschko, 2015; Söderberg, 2017). Here, Bakir, et al. ask a crucial question: “Is it possible or desirable to resist being watched?” While the terms counterculture and resistance are based on dyadic logics that I would like to complicate, the essence of Bakir, et al.’s question is nonetheless an important one. It points us towards the multiple forms of engaging with surveillance. Once one engages critically with surveillance, it is easy to overlook the relationship between watching and care, as discussed in Foucault’s (1979a) lecture on “pastoral power” (watching over a flock). One aspect is thus whether resisting surveillance is always desirable, another is whether it is possible. The metaphors that dominate the scholarly analysis of surveillance point to its totality. The perfect architecture of Bentham’s panopticon that Foucault (1979b) applied to societal life at large, as well as the Orwellian (2000) notion of the all-seeing eye, present veillance as absolute. It shapes our inner lives, our selves. Mann’s (2005) sousveillance, meaning watching from below, or equiveillance, that is, mutual watching (Mann, et al., 2006), as well as Mathiesen’s (1997) synopticon, which implies that many watch few, and the various hyphenated veillances that have appeared in past years choose visibility as their vantage point. These metaphors nurture the simplified, yet “successful myth of surveillance that cannot be overcome” (Brunton and Nissenbaum, 2011). Haggerty and Ericson’s (2000) surveillant assemblage is the most encompassing account of the many dynamics of veillance, which also includes forms of watching that die down and reappear. Yet, it is exactly this ambitious and detailed conceptualization of surveillance that ultimately also underlines its omnipresence.
The authors themselves emphasize its force: “The surveillant assemblage transforms the purposes of surveillance and the hierarchies of surveillance, as well as the institution of privacy”. While the larger part of -veillance metaphors implies that surveillance is present, potent and comprehensive, “hacking surveillance” takes as its vantage point that some societal actors have the agency to challenge this totality.
In 2004, Nissenbaum critically discussed how hacking tends to be framed as “harmful and menacing acts” as opposed to something that has the potential to produce social value. With that, she created an opening in the conceptual schema with which hackers are associated. As part of these debates, the idea of hacktivism arose (Jordan and Taylor, 2004). Hacktivists would fight those forms of technological progress that they saw as social decline (Maxigas, 2017). Hacking would then become the “critical, creative, reflective and subversive use of technology that allows creating new meanings”. While some analyze hacking as a “countercultural lifestyle of self-reliance” or frame it as a legitimate form of protest (Hampson, 2012), this image of being “anti-institutional”, “countercultural and resistant” does not apply to those who “work with institutions and corporations to bring about social change”. Some analyses go further and suggest that some hackers have become complicit in neoliberalism and globalization by providing the infrastructures for capitalism (Söderberg and Delfanti, 2015).
Coleman and Golub (2008), then, observed that the growth in partial portrayals of hackers led to a binary understanding of hacking as either lauded or denounced. Others also saw a tendency to characterize hackers as those who either corrupt or protect (Steinmetz and Gerber, 2015). The only way out of this analytical dead end is to see the multiplicity of hacking practices — also within the subfield of hacking surveillance, or even within one individual. Today, authors theorize hacking as “different, sometimes incompatible, material practices”. Hacking involves a mosaic of ethical positions, including the support of negative freedoms (i.e., freedom from surveillance) and positive freedoms (e.g., availability of free software). While some hackers are state employed, acting as ethical authorities and risk managers (Palmer, 2001), hacking also includes moralities of the underground and romantic individualism, which invoke Nietzschean notions of power, pleasure and critique of liberalism (Coleman and Golub, 2008). Hackers move in a diverse field that includes authority and sharing (Powell, 2016), self-expression and collective forms of action (Söderberg, 2017), expressions of masculine identity (Hunsiger and Schrock, 2016; Jordan, 2017) as well as feminist values of inclusion (Goode, 2015; SSL Nagbot, 2016). In this article I want to take account of this diversity when I discuss the many ways in which hackers engage with surveillance.
Thus, rather than portraying the counter-cultural aspects of hacking surveillance, I theorize these views and practices through the notion of dispute, which is inspired by that of Boltanski and Thévenot, but also takes their understanding of dispute further. In 1999, Boltanski and Thévenot developed a sociology of the dispute that focuses on the way in which disputes bring people and objects together to settle injustice. This requires a recognition of an “order of worth” by all involved actors, which means that “proofs” of injustice have to be established and the normative principles at stake have to be negotiated. Eventually, these need to be recognized by all involved parties in order to settle the dispute. The attention Boltanski and Thévenot draw to the critical practices exercised by ordinary people is a source of inspiration for this article. Their concept of dispute acknowledges the critical capacity of everyday situations. In addition, it points our attention to the multiplicities that are at play when orders of worth are evaluated. Huysmans (2016) underlines the “uncoordinated quality of disputes”. Disputes bring highly diffuse and multiple relations to the fore. Disputes are not based on simple oppositions as expressed in resistance or counter-cultures. He shows how disputes mobilize different “modes of autonomy, rights, and dispositions of acceptability”, which is what I want to show, too.
And yet, I also rethink Boltanski and Thévenot’s notion of dispute, whose analysis mainly relates to face-to-face, workplace or legally mediated situations; such conflicts also aim to be settled eventually. Firstly, when we understand hacking as a form of disputing surveillance, we see that these disputes can happen at a distance and involve practices that are technologically mediated. Some parts of mediated disputes are very tangible, such as the Internet’s infrastructures and the practical restrictions they impose on the actual practices of hacking. Other parts of such disputes are less tangible, for example when situations occur in which not all participants are aware of their participation in a dispute. Secondly, the situations of dispute I describe in this article are not necessarily as straightforward as hackers claiming injustice with the aim of resolving a conflict about surveillance. Instead of settling a conflict, hacking can also reinforce injustices, for example when hacking itself is part of a power game that perpetuates gender inequalities (e.g., Adam, 2005). Ironically, hacking surveillance can also solidify surveillance practices or lead to more encompassing legislation. The hacker Sousveillor says, for example: “The answers to surveillance change with legislation. (...) and then suddenly certain forms of avoiding surveillance can be categorized as illegal.” Hacking also tends to have a temporality that is not aimed at a final resolution. As a form of dispute it is more a playful back-and-forth between surveillance mechanisms and those who hack them. Disputes also inspire a re-thinking of norms and practices — something that is in fact a major aspect of hacking. Kate90r13, for example, finds that “hacking is about using your technical understanding to solve problems creatively — this can be a form of social engineering. (...) This thinking is essential to modern society.”
Hacking is thus a form of disputing that is not necessarily aimed at establishing ‘orders of worth’ to settle the conflict. Instead, it is aimed at constantly questioning and re-inventing orders of worth. This idea of continually negotiating orders of worth through hacking resonates with Wendy Chun’s account of new media as a “crisis machine”. What crises and new media have in common is that they offer us “the experience of something like responsibility (...) Their value stems from their relevance to an ongoing decision, to a sense of computers as facilitating ‘real time’ action.” They both give us a sense of empowerment. Yet Chun also sees that “the decisions we make, however, seem to prolong crises rather than end them”. This corresponds to what I observe and describe in this article, namely that the Internet and its inherent surveillance mechanisms provoke engagement in hackers, who dispute code, rewrite code and play with coded surveillance. This leads to a change with its own temporality, a slow one that builds on ongoing interaction. Such interactions, as Chun puts it, resemble “a long, thin chain of transmission”.
The way in which I use the concept of dispute in this article, then, captures that hacking is not fully grasped by dyadic concepts of embracing/resisting technologies or agreeing with/countering surveillance. Rather, hacking is never exclusively directed at undoing technologies or certain forms of veillance altogether. Dispute captures the multiplicity and ongoing negotiation that characterizes practices of hacking surveillance. As a form of dispute, hacking also challenges dystopian surveillance metaphors.
Methodologically, I aimed at describing and analyzing online surveillance technologies and the disputative engagement with them. This also included the study of identities and embodied aspects of hacking, as well as the values and beliefs involved in disputing surveillance. Feminist approaches, for example, see hacking as an identity practice that involves a “recalibration from competition to inclusion, from individual autonomy to intimacy” (SSL Nagbot, 2016). I embrace the standpoint that technologies, bodies and identities co-constitute disputative hacking practices and vice versa. For some interviewees, hacks have literally become bodily automatisms, meaning that they have internalized which information they do not share online at all. The hacker LOLveillance shares even off-line information only with his “web of trust”. In other cases, the identity work that hackers perform is the opposite of an automatism. =Overview explains, for example, that masking one’s identity requires the arduous work of feeding a surveillance algorithm inconsistent datasets about oneself:
“You can find so many things in Google about me that it just does not make any sense anymore [...]. It’s a different identity — the person I am online does not match me anymore.”
Because I was interested in the values and beliefs, identity and embodied practices that inform the disputative practices of hacking surveillance, I chose to explore them via 22 qualitative interviews. I conducted almost all interviews one-to-one, while a few interviewees preferred a group setting. However, none of the interviews were conducted face-to-face. Instead, we communicated via different apps and programs, varying from commercial to open-source software based on peer-to-peer connections. While most interviewees chose voice-based software, approximately one third asked to have the interview in writing. The interviewees’ choice of communication tool was thus already part of expressing their views on surveillance, which will become more evident in my analysis below. My interviews included hackers who would act under their real name, while most would claim their online acts only under their handle, and yet others would remain completely anonymous. Most interviewees valued anonymity to the extent that they did not want to reveal their names or faces to me. This choice created positive situations in which the interviewees would open up, sharing their interest or involvement in illegal acts. While a combination of interviews with participant observation would have given me even more insight into embodied practices of hacking, it would also have created a de-anonymized situation in which subjects might not have opened up in the same way. Doing interviews was also a methodical choice, because they enabled me and the interviewees to deliberate together on how and why their hacks come about. The interviews lasted between 45 minutes and 2.5 hours and included questions about hacking generally and hacking surveillance in particular. I organized the entry point for snowball recruitment via the Chaos Computer Club (CCC).
Using the CCC as a recruitment vantage point, together with the technical configurations and the language chosen by hackers, gave me the impression that most of my interviewees were German, Austrian or Swiss. This selection impacted the answers I received. The CCC is known for its public debates about online privacy and its commitment to sharing knowledge. This may be different in other hacker organizations. However, the CCC’s closeness to political debate also helped me find hackers with an outspoken standpoint on surveillance. Gaining access to this vast and at times secluded field involved the repeated passing of “social captcha tests” (Kaufmann and Tzanetakis, 2020). Once passed, however, we were in an interview situation that allowed for more nuanced discussions.
Craft, explore, redefine, create: A portrayal of hackers disputing surveillance
It is difficult, if not impossible, for hackers to find un-surveilled space online, finds AceOfPlays. LOLveillance sees that surveillance grows in parallel with the development of new technologies. New apps, as well as hardware and software for communication, expand the overall amount of surveillance and allow for new forms of surveillance. For Numbercruncha, this trend is also reflected in political decisions and practices: “constantly, new laws are passed, which allow for big data storage”. Most prominently, however, commercial actors have changed the practice of surveillance: they are responsible for the growing amounts of data exchange and create storage space. Kate90r13 summarizes:
“On the one hand we have state-based online surveillance and on the other we have surveillance by companies and other entities. For both of these, surveillance increased mainly due to the technological possibilities that we have today. Just 20, 30 years ago there would not have been the technological possibilities to retain and analyze data [...] The analytic tools that are used today, such as machine learning and algorithms also increase the amount of online surveillance.”
While Sousveillor reminds himself that veillance has always existed and Crypsis warns that one should “stay away from conspiracy theories and paranoia”, most interviewees observe the growing surveillance with suspicion. jE2EE states: “it’s important one knows what happens and how technologies work in order to assess dangers and possibilities.” This and similar quotes aptly describe the embedded situation of hackers, who do not reject technologies per se. Rather, bl4ckb0x suggests, “hackers craft with them, explore their borders, redefine their usage and create added value with them”. These multiple practices are described in the following.
What happens before hacking: Data-minimalism
“If I really don’t want anyone to listen in to my conversation, I have it in real life,” says Sousveillor about his everyday approach to avoiding digital surveillance. He continues: “The best data are those that cannot be registered in the first place.” This attitude seems straightforward, but shows that being informed about digital veillance is the foundation for hacks. Online surveillance is enveloped in hackers’ ordinary practices. The rise of social media, for example, increases possibilities for information sharing as well as forms of mutual watching. The publicity of social media, according to Mandelbugger, is different from the publicity of sharing information in private chats. A first — and quite mundane — practice of disputing surveillance is thus to share as little information as possible online. Re-ID observes, “once shared, information exists out there and cannot be retracted anymore”. To limit the amount of private information ‘out there’, DataD14709 covers his laptop camera. AceOfPlays does not store private data in cloud spaces and password-protects the hard disks of computers used to go online. Most hackers avoid the use of commercial service providers such as Google, Facebook, WhatsApp or Dropbox. Encrypted and open-source providers are the main alternative. DataD14709 explains:
“WhatsApp is encrypted, too. That is still problematic, because when you look at the bigger picture and see that WhatsApp has been bought by Facebook — an American company that sells any data they can get their hands on, then encryption seems a little dubious.”
Even if banal forms of data-minimalism do not qualify as hacks, they express awareness about surveillance. This awareness is often the origin of hacks that reduce the quantity and the quality of information available to corporate, public or private surveillance. Such everyday practices signal the worth of data. Calling for a re-valuation of data is arguably the foundation for most of the practices that follow.
Rerouting and encrypting data
Some interviewees would communicate via overlay networks. Information is then rerouted through infrastructure that is added on top of existing networks. This de-centralization complicates the tracking of a user’s online activity, so that more surveillance effort is needed. Since most overlay networks spread across the globe, it is difficult for a third party to watch the full string of connections through which messages are passed. Overlay networks can be self- or custom-made, which requires technical skill. They can also be used via free and open-source software such as The Onion Router (Tor) or I2P. Routing online traffic through overlay networks is not illegal per se — much of today’s telephone infrastructure is built as overlay networks. However, Tor and other peer-to-peer networks have been associated with the darknet — precisely because they make it difficult to track communication. And yet, communication via Tor may be safer than via other channels. At the same time, Tor tends to be slow, and especially the entry and exit nodes of traffic are still identifiable by surveillers. Crypsis evaluates:
“Even when it comes to the spaces that we think of as not surveilled, such as the darknet or Tor — governments do anything to get them out into the open. So I come to the conclusion that effort to monitor those presumably not surveilled spaces is spent exactly there.”
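The rerouting principle behind overlay networks such as Tor can be illustrated with a toy sketch of layered (“onion”) wrapping, in which each relay peels off one layer and learns only one hop label, never the full route plus payload. The relay names and the XOR “cipher” below are illustrative stand-ins, not Tor’s actual protocol, which uses proper public-key cryptography:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy 'cipher': XOR with a repeating key (NOT real cryptography)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, route) -> bytes:
    """Add one layer per relay; the innermost layer is applied first."""
    packet = message
    for name, key in reversed(route):
        # Each layer carries an 8-byte hop label plus the still-wrapped rest.
        packet = xor(name.encode().ljust(8) + packet, key)
    return packet

def peel(packet: bytes, key: bytes):
    """A relay removes its layer: it sees one label and an opaque remainder."""
    plain = xor(packet, key)
    return plain[:8].decode().strip(), plain[8:]

# Three hypothetical relays, each holding its own key.
route = [("relayA", secrets.token_bytes(16)),
         ("relayB", secrets.token_bytes(16)),
         ("exitC", secrets.token_bytes(16))]

packet = wrap(b"hello world", route)
seen = []
for name, key in route:
    label, packet = peel(packet, key)
    seen.append(label)
print(packet)  # b'hello world' only after all three layers are peeled
```

No single relay can link sender, receiver and content, which is why watching the full string of connections requires monitoring every hop at once.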
Filterer explains that alternatives to overlay networks are tunnel protocols, which repackage the data before sending it through a public network. In this case, the standard wrapping of data is hacked. All of these techniques exemplify the dynamics of back-and-forth that characterize the disputing of surveillance: surveillance technologies are challenged via re-appropriation, while those who re-appropriate are well aware that such practices will not stop surveillance activities per se.
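The repackaging that Filterer describes can be sketched roughly as follows. The JSON envelope is a made-up format chosen for readability; real tunnel protocols (VPNs, for instance) wrap packets in binary headers and usually encrypt the inner payload as well:

```python
import base64
import json

def encapsulate(inner: bytes, tunnel_id: str) -> bytes:
    """Repackage the original data inside an outer envelope; observers on
    the public network see only the envelope, not the inner format."""
    envelope = {"tunnel": tunnel_id,
                "payload": base64.b64encode(inner).decode()}
    return json.dumps(envelope).encode()

def decapsulate(wire: bytes) -> bytes:
    """The tunnel endpoint unwraps the envelope and restores the data."""
    return base64.b64decode(json.loads(wire)["payload"])

inner = b"GET /private-page HTTP/1.1"
wire = encapsulate(inner, "tunnel-01")
assert inner not in wire           # the original bytes are not visible as-is
assert decapsulate(wire) == inner  # the endpoint recovers them unchanged
```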
In addition to repurposing existing network technologies, end-to-end encryption is an alternative form of online communication. It is debatable whether one can consider the use of encryption a hack. Still, one can acknowledge that encryption technologies were originally developed by those who decided to push the boundaries of surveillance by hacking the standard format of the traveling information. This means that messages are encrypted before sending and decrypted upon arrival. Only parties with the correct set of keys have access to the meaningful content of the messages. Such keys can be homemade, something that IzMyQ does. This requires technological skills. Alternatively, keys can be provided by commercial as well as open-source projects. Users of encryption techniques still need to trust the providers of the keys, since it is not always clear who else has access to the keys. Re-ID summarizes his concerns: “The encryption keys can get lost, they can be stolen or published”. The choice of keysets is thus a conscious one. Open-source keys are often considered the most trustworthy, since there would be no vested owners who could exploit keys for their own intentions.
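The encrypt-before-sending, decrypt-upon-arrival principle can be sketched with a one-time pad, the simplest textbook cipher. This is purely conceptual: actual end-to-end messengers use far more elaborate key-exchange schemes, but the division of knowledge is the same, in that only the key holders can read the content:

```python
import secrets

def otp(message: bytes, key: bytes) -> bytes:
    """One-time pad: XOR the message with a key of at least equal length.
    Applying it twice with the same key restores the original."""
    assert len(key) >= len(message), "key must cover the whole message"
    return bytes(m ^ k for m, k in zip(message, key))

# The two parties share this key out-of-band; intermediaries never see it.
key = secrets.token_bytes(64)

ciphertext = otp(b"meet at the station", key)  # encrypted before sending
received = otp(ciphertext, key)                # decrypted upon arrival
assert received == b"meet at the station"
```

Whoever holds the key controls access, which is exactly why Re-ID worries about keys being lost, stolen or published.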
With these examples of rerouting and encrypting data, hackers recast the value of their own data. They also dispute where and how data is collected and how technologies function. Through their actions and through signaling their knowledge, hackers suggest that new orders of worth are needed when it comes to technology design and personal data.
Opacity and obfuscation
Most surveillance mechanisms work because they rely on social and technical communication standards that render their contents identifiable. A different approach to disputing online surveillance is to hack or re-appropriate those communication standards. These are socio-technical forms of hacking that circumvent or confuse the eye. =Overview explains that the idea is to ensure that tracked information “does not make any sense” to those who collect and analyze the data.
The idea of disappearing from view by hacking communication standards is one of the more forceful practices of dispute that do not discard the Internet as such. Because it is dispute uttered from inside the network, it builds upon the idea of opacity, of remaining unidentifiable. Glissant (1997) describes opacity as the lack of transparency, the unknowability of people and the evasion of the cognitive schemas and standards that are generally used to govern people. Opacity is then a main force to challenge systems of domination from within. Thus, he asks for the right to let the opaque be opaque (Glissant, 1997). In the context of online communication, this also means not rendering communicators scientific and measurable, and not reducing them to the scale of “some universal model” (Blas, 2016) in order to render them governable. “Opacity, therefore, exposes the limits of schemas of visibility, representation, and identity that prevent sufficient understanding of multiple perspectives of the world and its peoples”. Opacity describes what it means to dispute surveillance from the inside of the surveillant system and to argue for multiplicity within the Internet as a system. There are many different practices of disputing surveillance via opacity or obfuscation. Brunton and Nissenbaum (2011) provided a basic typology of the different techniques that hackers use. Time-based obfuscation, for example, means that one generates large amounts of unimportant or incorrect data to let the original message disappear in the excess traffic. Hackers have split opinions about this practice: Panoptipwned, for example, argues for time-based obfuscation over encryption. Encryption would always make you identifiable as a user of that technique. Not only could keys be decrypted and thus be insecure, but the whole technique is a form of armament against other users, which is why he prefers to vanish in the masses.
3x3cute and GCSgateway, however, argue that the development of search algorithms is so fast that it is possible to find those who seek to disappear in excess data traffic. Besides, all the extra traffic is trash clogging the already limited bandwidth, which is why encryption is their preferred mode of hacking surveillance. This example describes the multiplicity and ambiguity of hacking techniques and illustrates the back-and-forth that hacking surveillance is a part of.
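Time-based obfuscation, letting the real message vanish in excess traffic, can be sketched roughly as follows. The decoy generator is a hypothetical illustration; in practice the filler data would have to mimic real traffic far more convincingly to withstand the search algorithms that 3x3cute and GCSgateway point to:

```python
import random
import string

def decoy() -> str:
    """A hypothetical generator of meaningless filler messages."""
    return "".join(random.choices(string.ascii_lowercase + " ", k=24))

def drown(real_message: str, n_decoys: int = 50) -> list[str]:
    """Send the real message at a random position within excess traffic,
    so that an observer must first pick it out of the noise."""
    stream = [decoy() for _ in range(n_decoys)]
    stream.insert(random.randrange(n_decoys + 1), real_message)
    return stream

stream = drown("the actual message")
# An observer now faces 51 messages and must tell signal from noise.
```

The cost of the technique is visible in the code itself: fifty junk messages travel the network for every real one, which is the bandwidth objection raised above.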
A similar practice is what Brunton and Nissenbaum (2011) would call ambiguating obfuscation, which is to feed browsers randomized or specifically chosen, yet irrelevant, datasets. While this is a very rigorous form of hacking communication standards, Sousveillor admits that it is a lot of work. It limits the user’s freedoms, because it has to be done constantly. In addition, he continues, one needs to know the surveillance algorithms’ specifications to do it in a meaningful way.
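A minimal sketch of ambiguating obfuscation, assuming a hypothetical pool of irrelevant topics; tools built on this idea would draw decoy queries from much larger, continuously updated sources, precisely because, as Sousveillor notes, the noise must remain plausible to the profiling algorithm:

```python
import random

# Hypothetical pool of innocuous topics used to pollute a profile.
DECOY_TOPICS = ["gardening tips", "bus schedule", "chess openings",
                "bread recipes", "weather radar", "guitar chords"]

def ambiguate(real_queries: list[str], noise_per_query: int = 5) -> list[str]:
    """Mix each genuine query with randomly chosen irrelevant ones, so the
    profile inferred from the stream no longer matches the person."""
    stream = []
    for query in real_queries:
        stream.append(query)
        stream.extend(random.choices(DECOY_TOPICS, k=noise_per_query))
    random.shuffle(stream)  # remove ordering clues as well
    return stream

stream = ambiguate(["flight to berlin", "rent apartment oslo"])
# The two genuine interests are now buried among ten irrelevant ones.
```

This is the programmatic version of =Overview’s manual identity work quoted earlier: the online person no longer matches the off-line one.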
Selective obfuscation (Brunton and Nissenbaum, 2011) is yet another method. It renders information understandable to authorized recipients only. Some, for example, mask their IDs, change their MAC address every time they start the computer, use different locations online or two handles in the same chat channel to write with the same person. DataD14709 mentions riddle-based and other playful kinds of hacking communication standards that are only available to those who communicate with each other. These include leet-speak (the substitution of letters with numbers, e.g., 1337 for ‘leet’) or writing in symbols (i.e., shorthand languages or lyric lines). DataD14709 also mentions:
“Steganography. (...) A way of data transmission that makes it impossible to find the data, especially not with a filter or other kinds of technology.”
Interviewer: “How would that work?”
DataD14709: “There are many ways: visual ways, languages, different logical connections to transmit a message that cannot be digitized, (...) a chat via two different chat providers, where you only send a part via each program. (...) There are so many options! In most of the cases I know, you as a private person create a key and once the other understands it, you can use it as a communication means. That’s the point: you can only communicate with that one person that you want to communicate with.”
Interviewer: “And those keys are not necessarily created via a programming language, but also include social codes? (...) Do you know of other hackers using that?”
DataD14709: “Many! For example, for indicating time, you only send a (...) digital picture of an analogue watch with the time. That is practical, because you could communicate a time without creating a direct association.”
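DataD14709’s example of sending parts of a message over two different chat providers can be illustrated with a simple two-share split. This XOR-based scheme is my sketch, not the interviewee’s actual method: either transmission alone looks like random noise, and only combining both recovers the message.

```python
import os

# Split a message into two shares to send over different channels;
# neither share alone reveals anything about the message.
def split_message(msg: bytes):
    pad = os.urandom(len(msg))                      # random noise, channel 1
    share = bytes(a ^ b for a, b in zip(msg, pad))  # noise-looking, channel 2
    return pad, share

def recombine(pad, share):
    # Only someone holding both transmissions can recover the message.
    return bytes(a ^ b for a, b in zip(pad, share))
```

A filter watching either channel sees only unstructured bytes, which is exactly the point DataD14709 makes about transmitting “a message that cannot be digitized” by a single observer.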
The idea of these techniques is to challenge surveillance by obscuring both the contents of a message and the fact that a message is being sent.
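The leet-speak substitution mentioned earlier is the simplest of these codes and can be sketched as a plain character mapping; the particular table below is one common variant, not a fixed standard:

```python
# One common leet-speak mapping; variants abound, and that very
# variability is part of what keeps the code informal and in-group.
LEET = str.maketrans({"l": "1", "e": "3", "t": "7", "a": "4", "o": "0"})

def to_leet(text):
    """Rewrite a message using number-for-letter substitutions."""
    return text.lower().translate(LEET)

print(to_leet("leet"))  # → 1337
```

Trivial to reverse for anyone who knows the convention, such substitutions are less about secrecy than about signaling membership and evading naive keyword filters.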
Even though hackers disagree about different forms of obfuscation, these practices illustrate something quite crucial. By demonstrating digital literacy, hackers put the worth of usability and convenience up for discussion while emphasizing the user’s potential to participate in the dispute. The following two sets of hacking techniques illustrate this point even more clearly.
Diversifying hardware and creating parallel networks
Some hackers also repurpose or play with hardware in order to dispute online surveillance. Dark boxes, as DataD14709 explains, are second computers that are only used for data traffic that is not supposed to be associated with the user. GCSgateway chooses to boot the computer’s operating system entirely from a USB stick, so that the system is not visible or accessible to potential surveillers. A popular version of such operating systems is, for example, TAILS. Another practice is to share information through digital networks that users have built themselves. Such networks do not run as an overlay network, but disconnected from the Internet. Here, files can be shared anonymously, but only locally. This is why such spaces are often referred to as pirate boxes. Examples of hardware used to craft them are Raspberry Pis or Beagle/Freedom Bones, which are simple, credit card-sized computers. Re-ID explains:
“There is a so-called pirate box, that is an image one can write onto a Raspberry Pi and once you have that connected to a mobile charger, you can let it run ‘headless’, meaning without mouse, keyboard or monitor [...] The Raspberry Pi creates a WLAN that is completely secluded from the Internet, that means nobody goes online with this and no data are shared via the Internet. That means anything you communicate via a pirate box, that you upload — pictures, photos, videos, any data — are local and that is an interesting concept.”
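As a sketch of the underlying mechanics (the actual PirateBox image ships its own configuration; all values below are placeholders), such a standalone WLAN is typically built from a Wi-Fi access point daemon and a local DHCP/DNS service that never forwards traffic upstream:

```ini
# /etc/hostapd/hostapd.conf — open, standalone access point (sketch)
interface=wlan0
ssid=PirateBox
hw_mode=g
channel=6

# /etc/dnsmasq.conf — hand out local addresses and resolve every
# domain name to the box itself, so no traffic ever leaves the WLAN
interface=wlan0
dhcp-range=192.168.77.10,192.168.77.100,12h
address=/#/192.168.77.1
```

The isolation is thus architectural rather than cryptographic: there is simply no route to the Internet for a surveiller to sit on.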
Users of that technique, as Re-ID explains, create a space where third actors cannot enact surveillance or censorship without more invasive methods. These practices still follow the playful spirit of hacking and interrogating surveillance, but as dispute they are of a different kind than the practices described above. With these hacks, dispute is moved outside the system. Critique is no longer internal to the Internet as a system. The creation of parallel networks can be interpreted as an attempt to fracture the Internet’s totality and to imagine alternatives outside the totalized condition of the Internet (Browne and Blas, 2017). As such, they invite reconsidering the worth of given hardware or of the Internet’s architecture as such.
Barricading, defacing, leaking and reverse-engineering
This last set of practices differs from the ones described above, since they move from ‘Do not disturb!’ to ‘We disturb you!’ types of activities. Concrete practices are distributed denial of service (DDoS) attacks, which make online services unavailable by disrupting their traffic or by overwhelming them with traffic (cf., Digitalattackmap.com). Sousveillor points to Web defacements, which change the visual appearance of a Web site, and Crypsis mentions information leaks. AceOfPlays, however, was outspokenly critical of aggressive types of hacks:
“My form of protest is constructive, not destructive. I give advice or organize a critical blog. I can also start a crusade by switching away from Google-based services to services with less tracking functions. That is my form of protest.”
bl4ckb0x and two others mentioned that it is also not clear whom these aggressive attacks eventually damage. Some hackers, however, find it possible to justify offensive hacks in particular cases. DataD14709, for example, said:
“Earlier I observed active raids — DDOS, but I don’t do them myself. [...] There are definitely legal ways to challenge systems and companies one does not like — without having to be aggressive. One can show one’s opinion online, on social media, blogspots without generating black hat traffic. If you want to be a black hat, you have to go all the way. If you do not want to follow any ethical standards or do anything bad — you can go for it, but then you always have to expect that your behavior has consequences.”
=Overview does something different: she reverse engineers surveillance algorithms. She dedicated a computer to be a honeypot that attracts surveillance and spyware programs in order to analyze them and understand how they work. =Overview explains that she does reverse engineering not only to protect herself against the actual software, but also to exploit its most useful aspects against similar types of software. This practice of reverse engineering requires dedication and more-than-average technical expertise. It also shows very well how hacking can be a playful practice. Reverse engineering captures the dialogical, ambiguous and interrogative aspect of disputing surveillance, since it turns surveillance against itself to express criticism, but also to harvest its strongest traits. Practices like reverse engineering also underline the options Internet users have to engage with the future of digital environments.
Hackers formulate the ‘worth at stake’
Hackers debate amongst each other how to understand their own actions and whether their hacks are a resource or have political valence. When we understand hacking through the lens of dispute, these discussions can show us how hackers define the ‘worth at stake’.
Not all hackers appreciate the term hacktivism. Either hacktivism is associated with more drastic methods of DDoS, Web defacement and cracking, or it is rejected because hacktivism is already infiltrated by the powerful, since governments hack, too. bl4ckb0x finds that the term hacktivism also risks nurturing the vanity of specific groups such as LulzSec and Anonymous, which delimits the authenticity of hacks. Re-ID mentions yet a different counter-argument:
“I don’t identify with the concept of hacktivism. It suggests that society can only be changed through technology and hacking. I am critical towards that. I think a cultural change needs to happen and that does not just involve a few smart people writing a few smart programs that make the world a better place.”
Faced with such a variety of arguments, it has become increasingly difficult for hackers themselves to identify motivations for hacking, says heisenbugwatch.
When it comes to their own person, some interviewees mention that they mainly hack for their own protection. This does not exclude that hackers see a role for themselves in disputing online surveillance. Part of that role is to act as a responsible, critical citizen. That is also true for Real&, even if his dispute happens without words and as a quiet form of protest that can easily disappear. In Kate90r13’s opinion, it is also a problem that hackers lack societal representation, which inhibits the appreciation of the values they create: “They are pushed towards the fringes of society, because people are afraid of technology. And as a result, many people have a problem saying that they are hackers.”
Even though hacking is still considered a fringe activity without much representation, practicing hacking is not a refusal to be part of the mainstream. Panoptipwned commented: “Actually, I hope that hackers’ knowledge about surveillance and our techniques to deal with online veillance will become the mainstream.” In a similar vein, bl4ckb0x said: “It’s the other way around. I wish that mainstream users were more aware of online surveillance.” Numbercruncha puts a twist on the argument:
“Yes, you have to be different from the mainstream. (...) If no one protests, surveillance would spread way faster. We need to confront this, for example new surveillance laws. We need a higher amount of people who say: We don’t want this!”
He also adds that enacting such forms of hacking means that one understands more about surveillance than the average Internet user. This reasoning also leads some hackers to describe themselves as multipliers. Their role is to pass on knowledge and debunk myths, but, as DataD14709 finds, “(...) not under your own name. You should do this via the hacker spaces.” Crypsis argues that such actions can include leaks if necessary, but mainly everyday practices such as sharing techniques or providing help with installations.
The many purposes and roles of disputing online surveillance become apparent when discussing whether hacking is a political act. A range of interviewees consider disputing surveillance an everyday practice, but that does not exclude that hackers have political agency.
Interviewer: “When you build your own technical solutions to hack surveillance, would you consider yourself a participant in a political dialogue?”
bl4ckb0x: “I would rather describe it as a monologue. But ... yeah.”
Interviewer: “So you mean one is alone, because one doesn’t get an answer or acknowledgement?”
bl4ckb0x: “Yes. Exactly.”
bl4ckb0x, however, also mentions that this monologue would not stop him from questioning systemic surveillance: “You can, for example, point to gaps or unfair behavior. In that respect, I would hope that one can sensitize firms or the state about problems and also request answers.”
For some hackers, like Crypsis and Re-ID, the political dimension of hacking is also to be in control of one’s own data. Or, as =Overview suggests, to increase the value of one’s data, for example by making it more costly for surveillers to capture personal information. Sousveillor and Panoptipwned find that, whether noticed or not, such moments of dispute also function as their comments on socio-digital developments. For AceOfPlays, 3x3cute, heisenbugwatch, and Kate90r13 this form of commenting also includes ethical considerations, which can surpass legislative normativity. bl4ckb0x went further by arguing that even though constructive action is favorable, selected hacks can also be destructive. A few interviewees agreed that hacking can be a form of civil disobedience, but AceOfPlays added that “it may not always be the best or most effective tool to point to social injustice”.
The above accounts underline that the evaluation of the ethical and political valence of hacks is not only performed by critics outside hacker cultures, such as scholars and policy experts. Hackers critique and negotiate different kinds of hacks amongst each other, within their subcultural groups and in relation to the surveillance technology they seek to dispute. Even though some hackers go to extreme lengths to dispute surveillance online, quite a few interviewees mentioned directly that they also assess their investments with cost-benefit evaluations. They deliberate what they get out of their online behavior. LOLveillance, for example, finds it disproportionate to use Tor merely for one’s own privacy. Most cost-benefit evaluations refer not only to economic, but also to social value. GCSgateway put it like this:
“How much technical protection do you allow yourself to have? And when do you notice — disillusioned — that you cannot protect yourself without investing an enormous effort? [...] Even when you use Tor, you would also have to change your behavioral patterns. In order to stay anonymous in a forum, you would have to change your writing style and your choice of words. You would even have to obscure the kinds of contents you want to write about in order to be anonymous. [...] To disappear completely means to change your identity.”
Such deliberations not only illustrate hackers’ awareness of the ambiguity of hacking practices, but also that such awareness is highly dependent on the socio-technical competences that hackers have when they dispute surveillance.
Given the above portrayal, is hacking surveillance a valuable “resource” (Chun, 2008) or a weak form of protest with “no ethical or political valence of its own” (Brunton and Nissenbaum, 2011)? Data-minimalism, rerouting and encrypting data, obfuscation, diversifying hardware and creating parallel networks, barricading, defacing, leaking and reverse engineering are examples of hacks that question the many ways in which online surveillance happens today. The interviewees describe these hacks as a back-and-forth between technological development, political development and the critical engagement with both. Hacking is a re-appropriation, not a revolution, of the technological and political preconditions of being online.
With Boltanski and Thévenot (1999) we can then understand hacking as a form of disputing online surveillance. Their lens of the dispute helps us see the critical capacity of everyday practices and their multiplicities. This article, however, took the notion of the dispute further. Hacking is part of a highly mediated dispute and is not necessarily aimed at settling the surveillance conflict. Rather, hacking is aimed at constantly questioning accepted behaviors, standards, objects and orders of worth online. At first sight, this dispute defines itself in the negative, as an anti-veillance project. At the same time, it is a form of challenging surveillance that does not give up on technology altogether. Rather, hacking can be understood as a dispute that is uttered from the inside. It is also a dispute that can re-invent orders of worth in the positive, for example when it comes to the value of private space online, technology design, Internet architecture and users with an active role in Internet politics. This form of dispute brings about new technologies and techniques, some of which are equally based on surveillance, while others may trigger further surveillance efforts.
Disputing surveillance oscillates between online and off-line, technological systems and social situations, constructive and destructive practices, traceability and obscurity. It muddles the neat dualisms of self-interest and common cause, legal and illegal, as well as power and counter-power. This goes to show that hacking does not simply oppose the values, techniques and systems of online surveillance. Neither is it aimed at settling a conflict; rather, it launches continuous negotiations about what counts as an order of worth online. Chun (2011) reminds us, however, that this temporality of ongoing dispute between surveiller and surveilled bears both the value of engagement and the threat to catch and exhaust hackers
“in an endlessly repeating series of responses. Therefore, to battle this twinning of crises and codes, we need a means to exhaust exhaustion, to recover the undead potential of our decisions and our information through a practice of constant care.” 
She suggests that instead of being consumed by exhaustion, we need to actively remove the desire for an end and nurture “constant ethical encounters between self and other. These moments can call forth a new future”. Hacking creates small, potentially messy, moments of disputing surveillance, which can re-constitute digital futures if they are enacted with an ethics of care.
Importantly, such moments of dispute challenge the totality of surveillance and surveillance metaphors. Haggerty and Ericson’s (2000) surveillant assemblage may capture the dynamics of dispute — of practices that appear, disappear and spring back to life. However, most scholars working with the surveillant assemblage are concerned with describing the many forms of seeing and rendering visible. Research on those who critically engage with, dispute or re-direct surveillance is less prominent under the metaphor of the assemblage. This article illustrated how hacking interrogates the totality of surveillance from within. Hackers introduce a force field of multiple dynamics, of seeing and not being seen at the same time, which has the potential to recast the surveillant assemblage as an “assemblage of in/visibility”.
About the author
Mareile Kaufmann is a postdoctoral researcher in the Department of Criminology and Sociology of Law, University of Oslo. A large part of her work focuses on surveillance practices and technologies, but also on how people engage critically with these from within surveillance systems. Mareile uses qualitative research designs that combine theory with innovative angles and strong empirical components.
E-mail: Mareile [dot] Kaufmann [at] jus [dot] uio [dot] no
1. For all references to interviewed hackers I use imagined handles. They are inspired by a mix of veillance concepts and technologies, hacker cultures and leet-speak. Any similarities with existing handles are unintentional.
2. Amazon’s speech-controlled personal assistant Alexa laughed at its owners without being in a ‘conversation’ with them (Segarra, 2018).
3. Coleman, 2017, p. 91.
4. Bakir, et al., 2017, p. 1.
5. Haggerty and Ericson, 2000, p. 605.
6. Nissenbaum, 2004, p. 213.
7. Kubitschko, 2015, p. 83.
8. Davies, 2018, p. 186.
9. Hunsinger and Schrock, 2016, p. 537.
11. Jordan, 2017, p. 528; cf., Goode, 2015.
12. Boltanski and Thévenot, 1999, p. 363.
14. Huysmans, 2016, p. 89.
16. Chun, 2011, p. 96.
18. Chun, 2011, p. 97.
20. See also 3x3cute, heisenbugwatch, =Overview.
21. Bl4ckb0x, DataD14709, heisenbugwatch, Mandelbugger, =Overview, Panoptipwned, Sousveillor.
22. 3x3cute, Filterer, LOLveillance, Sousveillor.
23. See also Crypsis, DataD14709, 3x3cute, GCSgateway, Re-ID, Kate90r13, LOLveillance, Mandelbugger, =Overview, Panoptipwned, IzMyQ, Real&, Sousveillor, ToastIt.
24. AceOfPlays, 3x3cute, GCSgateway, Re-ID, Panoptipwned.
25. AceOfPlays, Filterer, heisenbugwatch, Re-ID, Kate90r13, LOLveillance, Mandelbugger, ToastIt.
26. AceOfPlays, DataD14709, 3x3cute, Filterer, Mandelbugger, Real&, ToastIt.
27. Blas, 2016, p. 149.
28. Bl4ckb0x, Crypsis, DataD14709.
29. https://tails.boum.org, accessed 17 June 2019.
30. DataD14709, Kate90r13, Mandelbugger.
31. Bl4ckb0x, 3x3cute, Filterer.
32. DataD14709, Kate90r13, Mandelbugger, Sousveillor.
33. heisenbugwatch, Re-ID, LOLveillance.
36. DataD14709, Re-ID, jE2EE, Real&.
37. heisenbugwatch, jE2EE, Sousveillor, Real&.
38. Re-ID, Mandelbugger, =Overview, Q, Sousveillor.
39. AceOfPlays, Numbercruncha, Re-ID, Real&.
40. AceOfPlays, Re-ID.
41. AceOfPlays, Crypsis, 3x3cute, GCSgateway, heisenbugwatch, jE2EE, Kate90r13, LOLveillance, Mandelbugger, Panoptipwned, IzMyQ, Real&, Sousveillor, ToastIt.
42. AceOfPlays, Bl4ckb0x, Crypsis, 3x3cute, Filterer, GCSgateway, heisenbugwatch, Kate90r13, LOLveillance, Mandelbugger, Numbercruncha, =Overview, Panoptipwned, IzMyQ, Sousveillor, ToastIt.
43. Bl4ckb0x, Crypsis, GCSgateway, heisenbugwatch, Re-ID, jE2EE.
44. Chun, 2011, p. 92.
45. Chun, 2011, p. 107.
A. Adam, 2005. “Hacking into hacking: Gender and the hacker phenomenon,” In: Gender, ethics and information technology. London: Palgrave Macmillan. pp. 128–146.
doi: https://doi.org/10.1057/9780230000520_7, accessed 20 April 2020.
L. Amoore and V. Piotukh (editors), 2016. Algorithmic life: Calculative devices in the age of big data. London: Routledge.
M. Andrejevic and K. Gates, 2014. “Big data surveillance: Introduction,” Surveillance & Society, volume 12, number 2, pp. 185–196.
doi: https://doi.org/10.24908/ss.v12i2.5242, accessed 20 April 2020.
V. Bakir, M. Feilzer and A. McStay, 2017. “Introduction to the special theme ‘Veillance and transparency: A critical examination of mutual watching in the post-Snowden, Big Data era’,” Big Data & Society (15 March).
doi: https://doi.org/10.1177/2053951717698996, accessed 20 April 2020.
M.K. Bartholomew, S.J. Schoppe-Sullivan, M. Glassman, C.M. Kamp Dush and J.M. Sullivan, 2012. “New parents’ Facebook use at the transition to parenthood,” Family Relations, volume 61, number 3, pp. 455–469.
doi: https://doi.org/10.1111/j.1741-3729.2012.00708.x, accessed 20 April 2020.
Z. Blas, 2016. “Opacities: An introduction,” Camera Obscura, volume 31, number 2, pp. 149–153.
doi: https://doi.org/10.1215/02705346-3592499, accessed 20 April 2020.
L. Boltanski and L. Thévenot, 1999. “The sociology of critical capacity,” European Journal of Social Theory, volume 2, number 3, pp. 359–377.
doi: https://doi.org/10.1177/136843199002003010, accessed 20 April 2020.
S. Browne and Z. Blas, 2017. “Beyond the Internet and all control diagrams,” New Inquiry (24 January), at https://thenewinquiry.com/beyond-the-internet-and-all-control-diagrams/, accessed 16 April 2019.
F. Brunton and H. Nissenbaum, 2011. “Vernacular resistance to data collection and analysis: A political theory of obfuscation,” First Monday, volume 16, number 5, at https://firstmonday.org/article/view/3493/2955, accessed 20 April 2020.
doi: https://doi.org/10.5210/fm.v16i5.3493, accessed 20 April 2020.
M. Burgess, 2018. “How to stop Google from tracking you and delete your personal data,” Wired UK (16 March), at http://www.wired.co.uk/article/google-history-search-tracking-data-how-to-delete, accessed 5 June 2018.
Camp Breakout, “Das Digital Detox Camp,” at https://www.camp-breakout.com, accessed 11 July 2018.
J. Carson, 2018. “Fake news: What exactly is it — and how can you spot it?” Telegraph (20 November), at https://www.telegraph.co.uk/technology/0/fake-news-exactly-has-really-had-influence/, accessed 5 June 2018.
N. Casemajor, S. Couture, M. Delfin, M. Goerzen and A. Delfanti, 2015. “Non-participation in digital media: toward a framework of mediated political action,” Media, Culture & Society, volume 37, number 6, pp. 850–866.
doi: https://doi.org/10.1177/0163443715584098, accessed 20 April 2020.
W.H.K. Chun, 2011. “Crisis, crisis, crisis, or sovereignty and networks,” Theory, Culture & Society, volume 28, number 6, pp. 91–112.
doi: https://doi.org/10.1177/0263276411418490, accessed 20 April 2020.
W.H.K. Chun, 2008. “On ‘sourcery,’ or code as fetish,” Configurations, volume 16, number 3, pp. 299–324.
doi: https://doi.org/10.1353/con.0.0064, accessed 20 April 2020.
G. Coleman, 2017. “From Internet farming to weapons of the geek,” Current Anthropology, volume 58, number S15, pp. S91–S102.
doi: https://doi.org/10.1086/688697, accessed 20 April 2020.
G. Coleman and A. Golub, 2008. “Hacker practice: Moral genres and the cultural articulation of liberalism,” Anthropological Theory, volume 8, number 3, pp. 255–277.
doi: https://doi.org/10.1177/1463499608093814, accessed 20 April 2020.
S.R. Davies, 2018. “Characterizing hacking: Mundane engagement in US hacker and makerspaces,” Science, Technology, & Human Values, volume 43, number 2, pp. 171–197.
doi: https://doi.org/10.1177/0162243917703464, accessed 20 April 2020.
European Commission, 2018. “EU data protection rules,” at https://ec.europa.eu/commission/priorities/justice-and-fundamental-rights/data-protection/2018-reform-eu-data-protection-rules_en, accessed 5 June 2018.
F. Erixon and H. Lee-Makyama, 2011. “Digital authoritarianism: Human rights, geopolitics and commerce,” European Centre for International Political Economy (ECIPE) Occasional Paper, number 5/2011, at http://hdl.handle.net/10419/174715, accessed 20 April 2020.
C.L. Evans, 2014. “UnFacebooking, randomizing, and other ways to burst the filter bubble,” Motherboard (28 August), at https://motherboard.vice.com/en_us/article/kbz3aa/unfacebooking-random-and-other-strategies-for-popping-the-filter-bubble, accessed 5 June 2018.
M. Foucault, 1979a. “Omnes et singulatim: Towards a criticism of ‘political reason’,” Tanner Lectures on Human Values, delivered at Stanford University (10 and 16 October), at https://tannerlectures.utah.edu/_documents/a-to-z/f/foucault81.pdf, accessed 17 June 2019.
M. Foucault, 1979b. Discipline and punish: The birth of the prison. Translated from the French by A. Sheridan. New York: Vintage.
E. Glissant, 1997. Poetics of relation. Translated by B. Wing. Ann Arbor: University of Michigan Press.
L. Goode, 2015. “Anonymous and the political ethos of hacktivism,” Popular Communication, volume 13, number 1, pp. 74–86.
doi: https://doi.org/10.1080/15405702.2014.978000, accessed 20 April 2020.
B. Grosser, 2017. “Tracing you: How transparent surveillance reveals a desire for visibility,” Big Data & Society (1 February).
doi: https://doi.org/10.1177/2053951717694053, accessed 20 April 2020.
K.D. Haggerty and R.V. Ericson, 2000. “The surveillant assemblage,” British Journal of Sociology, volume 51, number 4, pp. 605–622.
doi: https://doi.org/10.1080/00071310020015280, accessed 20 April 2020.
N.C.N. Hampson, 2012. “Hacktivism: A new breed of protest in a networked world,” Boston College International and Comparative Law Review, volume 35, number 2, pp. 511–542, and at https://lawdigitalcommons.bc.edu/iclr/vol35/iss2/6, accessed 20 April 2020.
J. Hunsinger and A. Schrock, 2016. “The democratization of hacking and making,” New Media & Society, volume 18, number 4, pp. 535–538.
doi: https://doi.org/10.1177/1461444816629466, accessed 20 April 2020.
J. Huysmans, 2016. “Democratic curiosity in times of surveillance,” European Journal of International Security, volume 1, number 1, pp. 73–93.
doi: https://doi.org/10.1017/eis.2015.2, accessed 20 April 2020.
Itstimetologoff, 2016, “4 signs you’re at risk from digital burnout,” at https://www.itstimetologoff.com/2016/06/01/10-signs-youre-at-risk-from-digital-burnout/, accessed 5 June 2018.
T. Jordan, 2017. “A genealogy of hacking,” Convergence, volume 23, number 5, pp. 528–544.
doi: https://doi.org/10.1177/1354856516640710, accessed 20 April 2020.
T. Jordan and P. Taylor, 2004. Hacktivism and cyberwars: Rebels with a cause. New York: Routledge.
M. Kaufmann and M. Tzanetakis, 2020. “Doing Internet research with hard-to-reach communities: Methodological reflections about gaining meaningful access,” Qualitative Research (16 February).
doi: https://doi.org/10.1177/1468794120904898, accessed 20 April 2020.
S. Kubitschko, 2015. “The role of hackers in countering surveillance and promoting democracy,” Media and Communication, volume 3, number 2, pp. 77–87.
doi: http://dx.doi.org/10.17645/mac.v3i2.281, accessed 20 April 2020.
F. Manjoo, 2017. “Tech’s frightful five: They’ve got us,” New York Times (10 May), at https://www.nytimes.com/2017/05/10/technology/techs-frightful-five-theyve-got-us.html, accessed 5 June 2018.
S. Mann, 2005. “Sousveillance and cyborglogs: A 30-year empirical voyage through ethical, legal and policy issues,” Presence: Teleoperators and Virtual Environments, volume 14, number 6, pp. 625–646.
doi: http://dx.doi.org/10.1162/105474605775196571, accessed 20 April 2020.
S. Mann, J. Fung and R. Lo, 2006. “Cyborglogging with camera phones: Steps toward equiveillance,” MM ’06: Proceedings of the 14th Annual ACM International Conference on Multimedia, pp. 177–180.
doi: https://doi.org/10.1145/1180639.1180690, accessed 20 April 2020.
T. Mathiesen, 1997. “The viewer society: Michel Foucault’s ‘panopticon’ revisited,” Theoretical Criminology, volume 1, number 2, pp. 215–234.
doi: https://doi.org/10.1177/1362480697001002003, accessed 20 April 2020.
Maxigas, 2017. “Hackers against technology: Critique and recuperation in technological cycles,” Social Studies of Science, volume 47, number 6, pp. 841–860.
doi: https://doi.org/10.1177/0306312717736387, accessed 20 April 2020.
E. Morozov, 2017. “So you want to switch off digitally? I’m afraid that will cost you ...” Guardian (18 February), at https://www.theguardian.com/commentisfree/2017/feb/19/right-to-disconnect-digital-gig-economy-evgeny-morozov, accessed 15 April 2019.
New Scientist, 2018. “Taming the big tech beasts,” New Scientist, volume 237, number 3164 (10 February), p. 5.
doi: https://doi.org/10.1016/S0262-4079(18)30229-X, accessed 20 April 2020.
H. Nissenbaum, 2004. “Hackers and the ontology of cyberspace,” New Media & Society, volume 6, number 2, pp. 195–217.
doi: https://doi.org/10.1177/1461444804041445, accessed 20 April 2020.
A. Ohlheiser, 2017. “No, the shark picture isn’t real: A running list of Harvey’s viral hoaxes,” Washington Post (29 August), at https://www.washingtonpost.com/news/the-intersect/wp/2017/08/28/no-the-shark-picture-isnt-real-a-running-list-of-harveys-viral-hoaxes/?hpid=hp_no-name_hp-in-the-news%3Apage%2Fin-the-news&utm_term=.1e1106cc96be, accessed 17 November 2017.
G. Orwell, 2000. Nineteen eighty-four. New York: Penguin.
C.C. Palmer, 2001. “Ethical hacking,” IBM Systems Journal, volume 40, number 3, pp. 769–780.
doi: https://doi.org/10.1147/sj.403.0769, accessed 20 April 2020.
E. Pariser, 2011. The filter bubble: What the Internet is hiding from you. New York: Penguin Press.
A. Powell, 2016. “Hacking in the public interest: Authority, legitimacy, means, and ends,” New Media & Society, volume 18, number 4, pp. 600–616.
doi: https://doi.org/10.1177/1461444816629470, accessed 20 April 2020.
L.M. Segarra, 2018. “It’s not just you: Amazon admitted that Alexa has been laughing at people,” Time (7 March), at http://time.com/5190044/amazon-alexa-echo-laughing/, accessed 5 June 2018.
M. Sicart, 2014. Play matters. Cambridge, Mass.: MIT Press.
J. Söderberg, 2017. “Inquiring hacking as politics. A new departure in hacker studies?” Science, Technology, & Human Values, volume 42, number 5, pp. 969–980.
doi: https://doi.org/10.1177/0162243916688094, accessed 20 April 2020.
J. Söderberg and A. Delfanti, 2015. “Hacking hacked! The life cycles of digital innovation,” Science, Technology, & Human Values, volume 40, number 5, pp. 793–798.
doi: https://doi.org/10.1177/0162243915595091, accessed 20 April 2020.
K. Steinmetz and J. Gerber, 2015. “‘It doesn’t have to be this way’: Hacker perspectives on privacy,” Social Justice, volume 41, number 3, pp. 29–51.
SSL Nagbot, 2016. “Feminist hacking/making: Exploring new gender horizons of possibility,” Journal of Peer Production, number 8, at http://peerproduction.net/issues/issue-8-feminism-and-unhacking-2/feminist-hackingmaking-exploring-new-gender-horizons-of-possibility/, accessed 16 April 2019.
C. Uhle, 2017. “Momo in Zeiten der Digitalisierung,” Transform (12 July), at https://www.transform-magazin.de/momo-in-zeiten-der-digitalisierung/, accessed 11 July 2018.
R. Zhong, 2018. “Worried about big tech? Chinese giants make America’s look tame,” New York Times (31 May), at https://www.nytimes.com/2018/05/31/technology/china-tencent-alibaba.html, accessed 5 June 2018.
S. Zuboff, 2019. The age of surveillance capitalism: The fight for the future at the new frontier of power. London: Profile Books.
Received 29 April 2019; revised 8 April 2020; accepted 17 April 2020.
This paper is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
by Mareile Kaufmann.
First Monday, Volume 25, Number 5 - 4 May 2020