First Monday

Pirates of Silicon Valley: State of exception and dispossession in Web 2.0 by Peter Jakobsson and Fredrik Stiernstedt



Abstract
This article investigates a paradox in the reception of Web 2.0. While some of its services are seen as creators of a new informational economy and are hence publicly legitimized, other features are increasingly under surveillance and policed, although in reality the differences between these services are far from obvious. Our thesis is that we are currently experiencing a temporary postponement of the law in the context of Web 2.0. Agamben’s work on the state of exception is used here to theorize the informational economy as an ongoing dispossession under the guise of ‘networked production’. This dispossession is seen as a parallel to the concept of ‘primitive accumulation’, as a means of moving things from the exterior to the interior of the capitalist economy. This theory lets us problematize the concept of free labor and the metaphor of the enclosure, and puts into question the dichotomy between copyright and cultural commons.

Contents

Introduction
State of exception and accumulation by dispossession in Web 2.0
Three cases of dispossession 2.0
Administration of the absence of order
Conclusion

 


 

Introduction

In this article we investigate what seems to be a paradox in the reception of Web 2.0. The services that exemplify the Web 2.0 paradigm are, among others, Google AdSense, Flickr, Wikipedia, BitTorrent and Napster (O’Reilly, 2005). To this list one can also add social networking sites such as Facebook and MySpace. Policy–makers, economists and the public have welcomed some of these services almost unanimously. Together they are taken to represent how a new class of cultural producers (or “prosumers”) relates to cultural production and creates opportunities for business on the Internet. On the other hand, some of the practices and technologies of Web 2.0 have been met with scorn and contempt because they instigate so–called piracy and are considered to lack respect for the work of others. These practices have, accordingly, not been seen as exemplary of a new class of cultural producers but rather as a problem that should be handled by the police as well as juridical and penal institutions.

However, on second thought, the differences between the two are not as clear–cut as they might seem. Depending on dress code and choice of logo, some actors need not fear the long arm of the law; under the cover of a strong brand name and a catchy corporate motto, piracy is excused. We will argue that this is, however, not due to rhetoric or appearance, but dependent on the social power that comes with the promise of economic growth.

Common to all of the services (‘legal’ as well as ‘illegal’) of Web 2.0 is that they depend on content created outside the Internet companies themselves. As noted by a long list of commentators, users and cultural industries alike are faced with the fact that their work is being appropriated in the name of collective intelligence and social networking. In contrast to p2p networks and (illegal) file–sharing, this appropriation takes place without any severe prosecution from the state. Interventions have been made, of which the court cases against Google Books are among the most well known, but on the whole, services engaged in ‘legitimate piracy’ are not only welcomed, but even utilized as tools for organizations and individuals representing state interests (e.g., the European Union’s and the U.S. Army’s use of YouTube).

How then shall one understand the economics of legitimate piracy? We propose that value creation in Web 2.0 takes place through a process of dispossession, which is at least partly sponsored by the power of the state and the legal system. In disregard of copyright and privacy rights, rather than with the help of copyright, Web companies have been given free rein to extract and dispossess the value of the cultural commons and, in effect, lay the ground for a new economic order on the Internet. Our claim is that the forceful upholding of property rights in relation to informational goods is not the only reason that the commons are threatened or that a truly participatory culture is held back. Rather than getting caught up in Web 2.0’s entanglement in the dichotomy of either upholding strong intellectual property rights or relaxing the constraints of copyright, the cardinal matter is who is allowed the rights to his or her products of cultural creativity and intellectual labour, and under what circumstances these rights are upheld. Right now, the harbingers of the new economy are given some leeway while their business models are tested and established. Simultaneously, the so–called ‘users’ are increasingly surveilled in their media use (motivated by the need to uphold copyright), while being legally dispossessed of their own creative productivity at sites such as Google, YouTube and Facebook. On the one hand intellectual property rights are strengthened and reinforced, while on the other, they are annulled [1]. These seem to be not two differing or conflicting trends, but rather two sides of the same coin. How can we conceptualise this seeming paradox?

This article contributes to a growing critique of Web 2.0 (Petersen, 2008; Scholz, 2008; Dijck, 2009; Dijck and Nieborg, 2009). A common theme in this critique is the lack of control that users have over which aspects of their participation, production and self–disclosure are used to generate economic value for the companies. This article aims to contribute to this growing body of critique by re–thinking the role of copyright in these processes, since ownership of immaterial goods is central to the question of control. The main part of the existing academic work on copyright has concerned its overuse — the excesses of the media industry in upholding, defending and protecting its intellectual property, and the ways in which copyright law maintains and makes possible exploitation and asymmetrical power relations (not only in the sphere of the media) (see for example Hardt and Negri, 2009; Žižek, 2009). The absence of copyright has therefore not been interpreted as a possible problem — rather it has been univocally understood as something almost inherently positive or progressive. The fact that the meaning of copyright, as well as the relationship between copyright and the cultural commons, can change depending on context has been somewhat of a blind spot for critical Internet studies. This article is not a defense of strong protections of copyrighted material, but a needed revision of the relative one–sidedness of this discussion, in order to contribute to a “critical theory of information” (Fuchs, 2009): a theory that also needs to consider the legal framework of a developing informational economy.

In order to make our argument, we present three cases of dispossession in Web 2.0. We contend that these should not be seen as legitimate market transactions, but should — at least in some instances — be considered and debated as state–sponsored piracy. The first of our cases is YouTube’s appropriation of cultural material from users and cultural industries, disregarding editorial control and copyright. Secondly, we analyze Google’s language processing of private e–mail communications, search queries and Web texts in order to serve users with context–sensitive advertisement. The third case is Facebook, which uses the capital of social trust to generate economic surplus. We argue that in none of these cases are copyright and technical measures of control (i.e., DRM) the gravest threat to the cultural commons. On the contrary — and somewhat contrary to commonplace conceptions (Benkler, 2006; Boyle, 2008; Lessig, 2001) — it seems as if some form of rights to the products of our cultural expressiveness is needed to protect these very commons. In the next part of the article we present the theoretical tools needed to further develop this argument.

 

++++++++++

State of exception and accumulation by dispossession in Web 2.0

State of exception

We would like to suggest that the shifting environment of Web 2.0 is at the moment in a state of exception, a temporary suspension of the rule of law in order to bring forward the rule of the new. We take our cue here from Giorgio Agamben (1998; 2005), who has theorized the state of exception as a form of governance in modern societies. In Agamben’s reading, which proceeds through Carl Schmitt and ancient Roman law, the state of exception emerges in our societies not as an exception but as an ongoing governance through decrees and the suspension of the regular rule of law. It is also through such suspension that the rule of law as such is constituted, as a way of internalizing new domains into it. According to Agamben, the state of exception is “society’s attempt to confine the outside”. His main example is bio–politics, which comes to the fore in the wake of modernity and through which a political interest is created in handling and ordering dimensions of human life previously conceived of as un–political: “placing biological life at the center of calculations.” [2] The state of exception is the mechanism by which such a confinement of “bare life” into the realm of the political is managed; it is the “politicization of life as such.” [3] Agamben, then, understands the state of exception as that which creates and defines the “very space in which the juridico–political order can have validity.” [4]

The state of exception, for Agamben, has been a feature of political life at least since Aristotle. However, with the rise of modernity, the state of exception is transformed from a marginal feature of political life into a central one, if not one indistinguishable from the rule of law as such. According to Agamben, “[w]e are experiencing the triumph of the management, the administration of the absence of order.” [5] Today it is not the case that a state of exception is actually, objectively present (in a legal sense), but rather that governance through administrative measures, which are only pragmatically legitimized, has become paradigmatic for contemporary government. Another important point is that governing through administration is not antithetical to the rule of law; rather, the two coexist and are simultaneously driven to their extremes (Agamben, 2004). The state of exception, understood as an anomic space — a space without norms — needs the rule of law to create the fiction that the state of exception can in fact be understood from within the law (Agamben, 2005). There is therefore no contradiction between the all–round strengthening of copyright and the institution of new privacy laws, on the one hand, and the developments we describe here, on the other. They are integral parts of what we try to bring forth in this article. For our purposes, then, the state of exception sets up a perspective through which to interrogate the ongoing construction of a new economy on the Web. The deprivation of established rights for some, to the benefit of strong economic actors, and the policing of the same rights in other contexts, can be adequately captured through the state of exception as an analytical lens.

In this article we do not have the space, or the means, to pursue this interpretation in full. The state of exception is offered as a means for analyzing ongoing developments. With the help of this perspective we are, however, in a position to rethink some common assumptions and throw new light on the phenomenon of Web 2.0. We argue, first, that if we accept the suggestion of an ongoing state of exception, the Web economy can fruitfully be theorized as accumulation by dispossession (Harvey, 2003): a state–sponsored project with the goal of achieving a new, digital economic order. Secondly, our analysis leads us to reconsider the role of copyright in the dispossession of the commons. If we are in fact in a state of exception, then it is not copyright, but the disregard of copyright and dispossession through other means, that threatens the commons. What these other means are will be described below. A third assumption that needs to be rethought is how the metaphor of the enclosure can be productively used to understand the digital economy. In Web 2.0 everything is free and available, so how can we then speak of enclosures? We argue that in the state of exception enclosures are constructed not to shut people out, but rather to let them take part in the ongoing dispossession of their rights and their autonomy.

Accumulation by dispossession

Historically, the question of the commons is a long–standing topic in leftist thinking. The first article that Karl Marx ever wrote as a journalist concerned the ongoing enclosure of the commons in Germany in the first half of the nineteenth century (Wheen, 1999). He later elaborated the processes of dispossession and privatization in Capital under the rubric of “primitive accumulation” (Marx, 1976), and this theorization has continued to be a recurrent theme and topic of discussion in Marxist and leftist thinking in general. The received notion of primitive accumulation is that it happened once — the first enclosure of the commons — an event that privatized both access to and use of land and created a surplus population of labourers who had no choice but to seek employment in the urban factories or work the now privately–owned land. This appropriation of land owned in common instigated the move from a feudal to a capitalist economy, as it created the conditions for capitalist accumulation through the surplus value of labour power. This is also what is implied in James Boyle’s analysis of the “second enclosure movement”, in which the digital cultural commons are enclosed with the help of strengthened copyrights (Boyle, 2003).

There is however another way of understanding this concept that is prevalent in Marxist and critical political economic thinking, an interpretation that downplays its singularity, and its connection to specific enclosure movements. It has been argued that all accumulation rests upon one common feature: the separation between human beings and their means of production (De Angelis, 2001). Sometimes the processes of labour are necessary to complete this separation, sometimes not. For that reason the term primitive or original accumulation has been abandoned for other concepts, such as advanced accumulation (Perelman, 1998) or accumulation by dispossession (Harvey, 2003).

For David Harvey, accumulation by dispossession is accumulation by capital through means other than those of the market, whether “through force, fraud, oppression” [6] or, as in our case, technological dominance, abandonment of the law, and so on. In his writings Harvey focuses on things such as the privatization of water, housing, and land in developing countries. The dispossession in Web 2.0 does, however, demand a somewhat different take on the concept, not least because of the reproducibility of the digital commons, in comparison to the economies of land and water. Web 2.0 is part of the informational economy; its goods are immaterial and intangible, consisting of information, attention, knowledge, and sentiments. Can we really be dispossessed of these things? Naturally we cannot be robbed of language in the same sense that we can be robbed of an apple or a car, but perhaps in a more subtle sense. The argument we will develop in this article is that the instrumentalized social production of Web 2.0 does not so much dispossess us of the means of production as such, but of our sense of ownership of ourselves and the autonomy of our imagination. Our collective intelligence is not being sucked up and stowed away — how could it be? It is rather that we are dispossessed of the meaning and purpose of this collectivity.

The first enclosure of the commons, and the move from a feudal to a capitalist economy, meant for Marx (1976) a dialectics of freedom and exploitation. Subjects were set free from the oppressive structure of feudalism and gained the freedom to sell their labour on the market. But at the same time this meant that they found themselves in a new relationship of exploitation in the factories of early industrial capitalism. In a similar vein, we (users of the Internet) are, in the informational economy, given the means to freely express ourselves, communicate, and form relationships outside of the structures of the traditional cultural industries. Long–standing folk traditions of appropriation, remixing, and playful interpretation of texts, with a sound disrespect for the figure of the author, are awarded an infrastructure and a public platform in Web 2.0. We are free to work as cultural entrepreneurs, writing texts on blogs, producing videos for online services, etc., with relatively cheap means of production and distribution, and with some chance of receiving part of the money that flows through the Web 2.0 networks. Furthermore, we are witnessing a renegotiation of traditional copyright, which opens a space for these popular practices and entrepreneurial spirits.

The dialectics of Web 2.0 means, however, that in the informational economy our participation plays an essential role in the accumulation of capital. For that reason participation goes from being restricted and privileged to becoming mandatory. Increasingly, in the informational economy, to gain access to the networks of culture, knowledge and information, some kind of production — active, conscious and participatory (e.g., uploading videos to YouTube), or passive, automatic and even unconscious or involuntary (e.g., in the case of language processing and social profiling) — becomes obligatory. This is fundamental to how dispossession works in the Web 2.0 economy. The enclosure in the informational economy, in contrast to the first enclosure movement, becomes more metaphorical since the preconditions for dispossession in Web 2.0 are not based on locks or barriers that restrict access. Somewhat paradoxically, the essence of the informational enclosure is openness. Although the enclosure is not without its restrictions, it strives for informational and communicative excess through promoting freedom and openness, and enhancing certain types of communication.

This is also in line with how the concept of “primitive accumulation” has been interpreted by other authors, namely as an inherent conflict within capitalism in which what is at stake is the “natural” ownership proceeding from the act of producing something. The mechanism of irregular accumulation, or accumulation by irregular means (e.g., accumulation without labour), is something that capitalist forces activate in moments in which the principles of accumulation are in some way contested. Human beings make things in order to survive (food, clothing, etc.) but also for the sheer joy of making things — and making them well (Sennett, 2008). Such an enterprise — doing things for their own sake — does not necessarily coincide with the interests of Capital, whose main driving force is accumulation. Yet this driving force, of doing things for their own sake, underlies much of the creative and communal activity within Web 2.0 (as commented upon by, for example, Benkler, 2006; Uricchio, 2004). The conflict between these two principles, or what has been labelled the “circle of struggle” (Dyer–Witheford, 1999), is what spurs “primitive accumulation” or the forces of dispossession. It is a process and a mechanism that carries out the perpetual incorporation of new realms of human productivity into capitalism. In a way somewhat analogous to how the state of exception is framed by Agamben, it constitutes a threshold between inside and outside, between capitalism and its Other (Harvey, 2003). Our three case studies are elaborations on the two faces of Web 2.0, as both an extension and an evaporation of the commons.

 

++++++++++

Three cases of dispossession 2.0

YouTube

YouTube is the flagship of Web 2.0. Perhaps not because of its enormous financial successes, as these have yet to materialize, but because of its enormous popularity as the audio–visual medium of choice for the young generation (Nielsen, 2008), reaching one billion unique viewers in September 2009 (Nielsen, 2009). Because of YouTube’s instant appeal and enormous popularity, the old organizations of mass media have had no choice but to accept it, which has led not only television and film production companies to upload content to YouTube, but also print journalism to compile YouTube top lists and the like [7]. YouTube has famously been used in political campaigns as well as by respectable cultural organizations such as UNESCO [8]. If not in financial terms, then at least in cultural terms, its success is unrivalled (cf., Snickars and Vonderau, 2009; Lovink and Niederer, 2008). At the same time, however, it should be fairly obvious to anyone that only a very fine line separates the respectability of YouTube from the pariahs of the digital economy: pirates, file–sharers and p2p networks.

YouTube was sued by Viacom in 2007, with the court most recently deciding in favor of YouTube (Letzing, 2010; Sandoval, 2009). Viacom claimed that YouTube had knowingly violated Viacom’s copyrights by not removing protected content even though it had been given notice of its appearance on the site. YouTube countered by claiming that Viacom itself had uploaded content to the site and that YouTube had no way of knowing which clips were authorized and which were not. There is no mistaking that the YouTube platform creates value. Although YouTube has installed filters and practices in order to remove copyrighted material from the site, a quick search on almost any artist reveals the obvious truth that YouTube is propelled forward only by ignoring regulations and avoiding technical protection. As Snickars (2009) argues, it is the lack of editorial control that has been behind the triumph of YouTube, something of which both the company and its users are fully aware. The platform’s ‘interactive audience’ seems at times not to be creating new content, but rather ripping and re–posting material from other sources. From a rights–holders’ perspective — and this has eventually been accepted by some members of the content industry — the challenge of YouTube, and of Web 2.0 generally, is consequently to secure a share of the advertising revenues that these economies generate (Farchy, 2009; McDonald, 2009).

However, it is not only the cultural industries that are implicated in the Web 2.0 video economy. In this economy anyone might find him– or herself broadcast to a worldwide audience of scopophiles. Celebrities might find their private sex tapes on less than savoury streaming Web sites, while mere mortals might never even know that their life’s most embarrassing moment has suddenly become the number one hit on YouTube. These moments, arguably examples of the release of our ‘animal spirits’ into the supposedly innocent Eden of the cultural commons (Pasquinelli, 2008), are also part of the lifeblood of Web 2.0. The video economy relies not only on the desire to be seen — ‘to broadcast yourself’, as the slogan goes — but on the desire to broadcast the Other. In Web 2.0, celebrities as well as ordinary people join Hollywood and the recording industry in having to accept the role of the Other to the “collective intelligence” and “social networks” of the new economy. The democratic potential of YouTube needs to be considered in relation to the site’s dispossession of control over the audiovisual means of constructing narratives of our own lives, as well as in relation to the lack of control over the products of the orchestrated imagination of cultural industries and amateur video producers.

Gmail

Our next example of dispossession in Web 2.0 — the technological appropriation of language — is perhaps less obvious, and might also require some rather technical discussions to explain. Nevertheless, we argue that it is as emblematic of Web 2.0 as the avoidance of copyright regulation by video and music mashup sites.

In 2003 Google acquired Applied Semantics, a small company specializing in software solutions for semantic text processing (Google, 2003). The product that attracted Google’s attention was an application of the company’s technology geared towards the advertising market. This product, AdSense, is an attempt to simulate human understanding by processing the textual output of humans and applying this understanding to the search economy. More specifically, it is an attempt to technically resolve the ambiguity of language by analysing very large amounts of text [9]. All this in order to solve the problem of placing so–called ‘context–sensitive’ advertisements in front of Internet users.

For Google as a company the usefulness of the AdSense technology should have been obvious, as it relies on the same principle as its search engine. It uses the collective efforts of Web users to produce relevant interpretations of the Web’s content. As is well known, Google’s search engine partly relies on counting links between Web pages to calculate the relative relevance of sites containing specific search terms. In a similar way, albeit supported by extensive scientific research in linguistics and computer science, AdSense uses our linguistic practices on the Web to create a model of our collective cultural understanding of the world.
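To make the underlying principle concrete, the link–counting idea can be illustrated with a toy calculation. The following sketch is ours, not Google’s code; the three–page graph and all numbers are invented for illustration, and real ranking systems are vastly more elaborate:

    # Toy, PageRank-style ranking over an invented three-page link graph.
    # Each page repeatedly passes its score along its outgoing links, so pages
    # that many others point to accumulate the most 'votes'.
    links = {
        'a.example': ['b.example', 'c.example'],
        'b.example': ['c.example'],
        'c.example': ['a.example'],
    }
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    damping = 0.85

    for _ in range(50):  # iterate until the scores settle
        new_rank = {}
        for p in pages:
            # a page's score is built from the scores of the pages linking to it
            inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / len(pages) + damping * inbound
        rank = new_rank

    print(rank)  # 'c.example' ends up highest: it receives the most links

The point of the illustration is simply that the ‘judgments’ aggregated by such a calculation are the accumulated linking activity of Web users, which is the sense in which the search economy rests on collective effort.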

It seems that Google has almost made this practice into a general strategy for the company. In research papers published by Google engineers it is shown, for example, how search strings in themselves can be turned into a source of knowledge. As explained by Google engineer Marius Paşca, by taking advantage of the “wisdom of the (search) crowds”, something to which millions of Web users contribute daily, relations between words, things, and “real–world interest[s]” [10] of Web users can be used to capture not only human knowledge, but the very process of knowledge creation, through users searching and scanning the Web.

With the right infrastructure (i.e., large Web indexes), companies such as Google can also take advantage of ‘knowledge’ created and gathered outside of their own services. Google engineers exemplify this with the possibility of taking “advantage of some of the human knowledge available in Wikipedia, a free online encyclopedia created through decentralized, collective efforts of thousands of users.” [11] The enormous sets of (free) data that this gives Google are evident in the company’s research papers: one paper describes the Internet as facilitating an ‘experimental setting’ containing “50 million unique, fully–anonymized queries in English submitted by Web users to the Google search engine in 2006.” [12] In another case the researcher had used “the unstructured text available within a collection of approximately 100 million Web documents in English as available in a Web repository snapshot from 2006 maintained by the Google search engine.” [13]

Google’s claim for this practice is that it is a win–win situation. By exploiting the collective work of users, the company is able to serve these same users. Without its machines reading and interpreting our e–mail messages, we would, for example, be bombarded with totally irrelevant advertisements. Nothing is taken or stolen in this process. Information is duplicated and processed; it is multiplied rather than dispossessed and enclosed. May we conclude, then, that we cannot be dispossessed in Web 2.0, only enriched and elevated? Have the forces of production finally realized the socialist utopia?

As you might already have guessed, our answer is not unequivocally affirmative. According to Google’s privacy policy, which states the terms for the interactions between users and the company, Google only uses personal information for things such as: “Auditing, research and analysis in order to maintain, protect and improve our services” (Google, 2009). In the Gmail service this includes “relevant advertising and related links based on the IP address, content of messages and other information related to your use of Gmail” (Google, 2008). Given this, Google could claim that users, by using its services and accepting the terms of use, are entering a legitimate relationship of exchange. Users receive access to the company’s services, and Google receives permission to use the content of users’ e–mail messages as research material in the development of its software and in order to enhance its algorithms.

However, we contend that what is actually happening amounts to somewhat more than that. Since our use of language — on Web pages, in e–mail messages, and in search strings — is anonymized, circulated and processed in order to produce better representations of human language and to realise the goal of simulating human understanding, Google goes beyond the intent stated in the terms of service. In addition, for Wikipedians and other producers of content on the Web, opting out is a procedure that requires knowledge about how search engines collect information and about writing code that tells search engines not to index a particular site [14]. This of course makes not being part of the ‘mass intelligence’ a choice that only a rather limited number of individuals can make. Google is thus dependent on the collective knowledge of the Web, represented by language use and language skills on the Web, in the creation of value for the company. But the process of harvesting this value is hardly visible to most Web users. Consequently, it could be argued that the company is engaged in an accumulation by dispossession of one of the fundamental characteristics of being human: the ability to communicate through symbols, signs, and other means of representation.
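As an indication of the kind of knowledge such opting out presupposes: under the widely used robots exclusion convention, a site owner would have to publish a file along the following lines at the root of the site (a generic example of the convention, not a prescription for any particular service):

    # robots.txt: asks compliant crawlers, such as Googlebot, not to fetch
    # any pages on this site; it relies on crawlers choosing to honour it
    User-agent: *
    Disallow: /

Even this minimal step presumes that one knows the convention exists, which underlines how narrow the opt–out is in practice.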

Facebook

Our third example concerns online social networking — yet another hallmark of Web 2.0. The dominant actor in this field is without question Facebook, with over 400 million users worldwide. What should be understood about social networking sites is that they are not, like dating sites or open chat rooms, places to meet new people. What Facebook essentially does is digitize already existing relationships in the social realm, giving them the shape of a network.

So what is dispossessed in online social networking? For one thing, it has to be pointed out that Facebook is different from many other online communities; since authenticity is required, the user, on signing up, is obliged to “not provide any false personal information” (facebook.com, 2009b). The ‘networking’ taking place on the site, and the organised and institutionalized practices of self–disclosure that Facebook promotes and encourages, are consequently always connected to personal information. Social networking sites in general, and Facebook par excellence, are hence “profiling machines” (Elmer, 2004), facilitating customized and targeted advertising through their access to extensive and detailed information about users/customers, which they can disclose to third parties (i.e., sell to advertisers). From one perspective, then, what is gathered in online social networking is private information about human users.

This, however, is only one part of Facebook’s business model. Another dimension of Facebook, which it shares with many other services within Web 2.0, is that it works as a platform onto which users themselves are able to upload content in various forms. Much as in the example of YouTube, Facebook allows for a harvesting or usurping of resources, such as, for example, photographs. This is a massive part of Facebook’s operations; users upload more than two billion photos and about 14 million videos each month according to the company’s own statistics (as of November 2009; Facebook.com, 2009a). This information (photographs, videos, text, etc.) does not, however, directly generate value for Facebook, since the company does not sell it and the people who uploaded it — at least theoretically — “control how it is shared” (Facebook.com, 2009b) [15].

Nevertheless, online social networks add yet another dimension to dispossession in Web 2.0. A distinctive feature of online social networks is the very ‘socialness’ inherent in their software and interfaces, as well as in the ways in which they are used (Papacharissi, 2009; Livingstone, 2008). It is not mainly, as discussed above, information of different kinds (self–disclosed information about oneself, uploaded materials) that comprises the basis for economic value, surplus value and profit in relation to Facebook. As of September 2009, for example, Facebook had cancelled the infamous advertising service Beacon, which operated according to logics of profiling and patterning. The possibilities for uploading and sharing pictures, text and film, as well as the profiling and patterning on the site, can rather be seen as means for producing or maintaining the more important quality of social trust, which in turn can be dispossessed and monetized. Two examples of such attempts at monetizing the social were launched in 2009: Facebook Connect and Facebook Pages. The spirit of both of these services is that they let companies and brands interact with other (human) agents within the networks of Facebook, and that companies and brands thereby pay Facebook for the possibility of exploiting the main feature of the site: that it is a venue for sociability and social trust. The most successful and talked–about example is Starbucks’ ongoing campaign, in which the coffee company has succeeded in creating a network of more than 4.5 million “friends.”

This is how Facebook Pages are described in the company’s own product information:

“When your fans interact with your Facebook Page, stories linking to your Page can go to their friends via News Feed. As these friends interact with your Page, News Feed keeps driving word–of–mouth to a wider circle of friends”.

Hence Facebook materializes what have been buzzwords in the advertising industry during the first decade of the new millennium: viral advertising, affective bonds and emotional capital. Media scholar Henry Jenkins, with reference to such developments, has shown how the industry has claimed that brands are increasingly transformed into “lovemarks”, which are held to be “more powerful than traditional ‘brands’, because they command the ‘love’ as well as the ‘respect’ of consumers.” [16] The relationships between people, brands and products are, according to this business philosophy, meant to take on the form of stories or narratives about those new, powerful relations. Jenkins (2006) gives several examples of how corporations try to harvest consumers’ stories about their relations to products, for example via interactive parts of corporate Web sites. In a similar vein, a central (if not the central) feature of Facebook Pages is the stories or narratives they generate within the network through the News Feed feature, which is also the backbone of the sociability of Facebook as such, keeping and binding networks between people (as well as brands and products) together. To interact with Facebook Pages is hence to give shape or narrative form to the love story between commodities, brands and their consumers.

What is dispossessed in social networking sites is then not only personal information and intellectual property rights but also sociability as such. This is not to be confused with the privatization of the infrastructures of social or interpersonal communication, which is a much more commonplace phenomenon (i.e., telephone, mail). In social networking sites, what takes place is an inclusion into capitalist relations of the very quality of social relations as such — the sense of community and social trust — and, not to forget, of the means of staging, upholding and making community and sociability, through the dispossession and instrumentalization inherent in ‘networks’ and ‘news feeds’.

 

++++++++++

Administration of the absence of order

The argument that Web 2.0 is somehow appropriating the work of users is (at the very least) on its way to becoming established (e.g., Lovink and Scholz, 2007). The cases outlined above corroborate this view empirically in the realms of cultural products, language and sociability. In order to take the analysis further, to establish the juridico–political framework under which this appropriation is taking place, and to develop the argument of a state of exception, we now turn to how the social Web is handled by relevant European institutions, discursively and in practice.

There are of course already established laws, rules and regulations concerning both intellectual property rights and personal integrity, and amongst these, intellectual property rights are increasingly being strengthened. Web 2.0 as an economic model for cultural production has, however, called for a distribution of rights that goes against established interpretations of the law. Economic value in Web 2.0, as we have tried to show above, is generated through a disregard of users’ copyrights: copyright is not treated as a means of creating economic value through the creation of immaterial commodities; instead, the technical platform is used as the means of realising this value. The state of exception thus becomes visible in the way these business models are supported in emerging policy discussions, in the application of the law in favour of these companies, and in the general embrace of Web 2.0 as a new model for economic growth — in spite of these business models’ break with established notions of intellectual property and personal integrity. As we will show, this insight can be gleaned from a comparison between The Pirate Bay (TPB, thepiratebay.org/) and Google, from the dominant framing of these issues in terms of privacy and self–regulation, and from ongoing and recent policy discussions such as the ACTA agreement, the E.U. Telecom Package, and the initiative for Creative Content Online in the Single Market. These policy discussions are all in favour of strong protection of intellectual property, but view this only from the perspective of the cultural industries.

The existing juridical framework for Web 2.0 services is found in the safe harbour provisions of the 1998 Digital Millennium Copyright Act (in the U.S.) and in the European Union’s directive on electronic commerce (Directive 2000/31/EC). These laws are meant to limit the responsibility of service providers with regard to how their platforms are used. In the E.U. there is also a directive on data protection and privacy, adopted in 1995, which states, among other things, that the processing of personal data requires the consent of users and that data may not be processed for purposes other than those agreed upon by the two parties (Directive 95/46/EC). How distinctions are made in the Web 2.0 economy between legitimate and illegitimate services can be approached through a comparison between Google and the torrent site TPB. Both of these services can be used to find and download copyrighted content, and both also generate revenue from advertisements. In the trial against TPB, the defendants claimed that their site was protected under the ‘safe harbours’ of electronic commerce, since what they provided was merely a service along the lines of a search company’s. The claim of TPB was, of course, that they were just as legitimate a business as Google. The court, however, found that they were not. The perhaps more correct conclusion, in light of existing laws and regulations, is that both models are equally illegitimate. In Sweden there was some debate about the fact that TPB had made money from its service, and that this somehow compromised the idealistic appeal of the site’s owners; the fact that they earned money was also a major issue in the court hearings. However, TPB’s problem was perhaps not that they made too much money, but that they made too little. Politically it is hardly possible to close down Google, but small search providers such as TPB, which do not generate substantial revenues and do not employ thousands of workers worldwide, are a much easier target.

In itself the TPB case can only amount to a demonstrative example that does not carry much empirical weight. From the perspective of ongoing policy discussions it can, however, be elevated to make a case in point. In these policy discussions it is clearly evident how differently the issues are framed in the case of Google and other established social networking sites on the one hand, and in the case of TPB on the other. TPB was handled as a question of property and ownership by the cultural industries, whereas in the case of other Web 2.0 services the issue is always framed as one of users’ privacy. It is almost never acknowledged that users should have some right of ownership. For example, Viviane Reding (2008), the E.U. commissioner for issues of information society and telecommunications, stated in a public address at the Safe Internet Forum that social networking sites:

“… have turned us into active users of the Internet and shown us that we do not need special skills in order to create new forms of art online […] creating and sharing content is now easy and gives users the power to shape information and create new forms of art.”

For her, the social Web creates new “opportunities for business” (Reding, 2008), but it also engenders problems that have to be solved politically. We will investigate how these problems are perceived in order to form an understanding of how this part of the new economy is envisioned in ongoing policy discussions. First, we will look at the rather straightforward fact that the operations of Web 2.0 companies, as an area of regulation, are predominantly framed in terms of privacy (e.g., Reding, 2008), and at the consequences this has for questions of regulation. Secondly, we will try to say something about a much trickier issue, namely what is not mentioned in these discussions: the protection of users’ rights to their intellectual property.

Privacy

A cluster of institutions and NGOs that engage in policy debates about social networking has emerged. Among them are organizations such as the Social Networking Task Force; the INSAFE network (Safer Internet Day); the Safer Internet Forum; the Article 29 working group (E.U.); the International Working Group on Data Protection in Telecommunications; the Institute for Prospective Technological Studies; Good Practice Guidance for Providers of Social Networking and Other User Interactive Services (U.K.); the Electronic Privacy Information Center (EPIC); the World Privacy Forum and the Privacy Rights Clearinghouse. From the nature and activities of the organizations on this in no way all–encompassing list, it is clear how the problem in need of policy is framed: as one of privacy. There is nothing wrong with that — except for the fact that privacy is a very loose concept that can be, and is, embraced by all: liberals, conservatives and radicals; the European Union and the U.S. Federal Trade Commission; NGOs and the industry. For example, in September 2009, Facebook donated US$9.5 million for the creation of a foundation with the sole purpose of “securing online privacy” (Cashmore, 2009). For the industry and for the E.U., as exemplified in Reding’s speech, the privacy frame also implies that the means of regulation are not those of state intervention or regulation by law, but a question of self–regulation. The E.U.–funded IPTS (Institute for Prospective Technological Studies), designed with the purpose of providing “customer–driven support to the E.U. policy–making process”, delivered a report to the Commission in 2009 in which, conveniently, it was shown that the users of social networking sites:

“… do not attribute responsibility for the protection of personal data to governments or police and courts. Instead they are asking for tools that give them more direct control over their own identity data.” (Lusoli and Miltgen, 2009)

Consequently, as directed by the European Commission, companies such as Google, Facebook, Microsoft Europe and Yahoo! Europe have signed self–declarations for safer social networking. This seems to imply that as long as the industry promises to handle personal information discreetly and correctly, the problem goes away.

There is also a strong push for self–regulation from the Web industry, as can be seen in the launch of the Facebook Principles in February 2009. In what was described as a reaction to the heated privacy debate surrounding the company, Facebook invited all of its users to become co–designers, participants in the development of new codes of privacy to govern the site. The founder and CEO of Facebook, Mark Zuckerberg, stated that Facebook invites users to “review, comment and vote” in order to create the new “set of values that will guide the development of the service, and Statement of Rights and Responsibilities that make clear Facebook’s and users’ commitments related to the service” (Spring, 2009). This move was advertised as the era of “Facebook Democracy”. From the perspective of this article, however, it could be interpreted rather as a way of fencing off further regulation and state–governed policy in this area, under the seal of increased influence on the part of the public. Hence this framing could be understood as an ideological one: framing problems as concerns of privacy lets governments withdraw under the guise of doing something (enhancing self–regulation, setting up guidelines and best practices, etc.). This withdrawal — regulation as non–regulation — opens the space for a smooth allocation of resources, from the producers themselves (i.e., the users of Facebook) to the owners of the means of production. In the words of Agamben, it is an example of an administration of the absence of order.

The (un)canny silence

As pointed out earlier, the state of exception is best understood in conjunction with the intensification of the reach and power of the law in other areas. These other areas, in our case, include the increased measures for surveillance and policing of intellectual property rights that are being rolled out simultaneously with the withdrawal of state regulation from the ‘social Web’. The interesting paradox here is how little attention is paid to the intellectual property rights of users. In what we know of current negotiations, and from recently passed laws, this question is hardly raised at all. Although it is always hard to theorize the absence of something, we should nevertheless acknowledge this omission. There are a couple of cases of policy–making in which the issue of users’ intellectual property rights could have been addressed: the ongoing discussion about the ACTA agreement, the European Telecom Package, and the European Union’s initiative for “Content Online”.

First, although there is little public information available on the ACTA agreement, the information that is accessible gives no reason to assume that users’ copyrights are under consideration. The stated goal of the agreement is “to fight more efficiently the growing problem of counterfeiting” [17], and the “intended focus is on counterfeiting and piracy activities that significantly affect commercial interests” [18]. And rather than investigating Web 2.0 services’ role in copyright infringement, the goal is to discuss the “limitations on the application of those remedies to online service providers” [19]. The continued stress on “the rise of easy–to–copy digital storage mediums” [20] also seems to refer to personal hard drives and media players rather than to the large server farms of Web companies.

Secondly, the European Telecom Package, which is part of the E.U.’s Lisbon strategy for increased growth and jobs and of the i2010 initiative for the creation of a “Single European Information Space” [21], was first proposed in 2007 and adopted in November 2009. This package, being one of the major initiatives in this area of law, might be expected to address some of the issues discussed in this article. Once again, however, the privacy frame prevails: the package includes regulations that hold telecom operators responsible for communicating to their users any breach of security and any leak of personal data to third parties. In cases where such ‘leaks’ are direct effects of a company’s own operations (i.e., ‘crowdsourcing’), the directive has nothing to say. The great debate over copyright enforcement instead only concerned whether so–called ‘end–users’ can be disconnected from the Internet if they act in violation of intellectual property rights.

Thirdly, the E.U. initiative most relevant to the questions discussed in this article is the initiative for “Creative Content in a European Digital Single Market”. The overall goals of this initiative are to encourage growth and innovation by “ensuring that European content achieves its full potential” (European Commission, 2008a), by “fostering users’ active role in content selection, distribution and creation” and by updating the “legal provisions that unnecessarily hinder online distribution of creative content in the EU, while acknowledging the importance of copyright for creation”. An accompanying working paper acknowledges the economic importance of what it calls user–generated content, and also the potentially problematic question of property rights, “though the large majority of users will be satisfied with credits and exposure of their content”, as the paper puts it (European Commission, 2008b). The paper argues that “User–created [sic] should get proper room and protection to allow for this type of content to develop”. It is, however, not entirely clear who or what needs to be protected for these business models to develop — the platform operators, or the users? The need and purpose of regulation is specified in a later document from the initiative: “user–created content is playing a new and important role, alongside professionally produced content. The co–existence of these two types of content needs a framework designed to guarantee both freedom of expression and an appropriate remuneration for professional creators” (European Commission, 2009). In this paper, then, a difference between professionally produced material and content produced by amateurs is stipulated. This lets the paper handle user–generated content as a matter of free speech (‘the right to remix’) and professional material as a question of ownership. The same pattern is found in a somewhat earlier paper, the Final Report on the Content Online Platform (2009), where copyright issues are handled separately from questions of user–generated content. The concerns raised in the earlier papers and drafts have accordingly vanished in the later versions, where a simplified dichotomy between amateur and professional content prevails. Furthermore, the report states that the Commission’s aim is: “in the short term, to promote pragmatic solutions enhancing the availability of creative content online and ensuring additional revenues for all players in the value chain; in the medium term, to look at the need for regulatory intervention”. What the Commission is proposing is consequently a period of non–regulation in order to promote the emergence of new business models, with regulatory intervention only at a later stage. At that later stage it is reasonable to believe that policy and regulation will be adapted to and modeled upon the economic practices already in place, developed by the industry during the unregulated period.

 

++++++++++

Conclusion

Our conclusion, then, is that the definitions of and attitudes towards legitimate piracy are very much in the air right now. The public and official support of the social Web by institutions and governments could be interpreted as evidence that these sites are at least partly sponsored by the political system. But more to the point is the anomie of the situation surrounding the social Web. We have seen efforts from policy–makers to shut down what are perceived as illegitimate Web 2.0 services, but there is a considerable silence and neglect around similar business models originating from the U.S. and Silicon Valley. Agamben (2005) writes that if we want to give a name to the actions undertaken in a state of anomie, it is that they “inexecute” [22] the rule of law; the state of exception works through a postponement of the law. Is this not how the line between legitimate and non–legitimate Web 2.0 services is drawn at the moment? Through vigilance on the one hand, and acknowledged neglect on the other. Since these questions have been on the table for quite some time — Alvin Toffler (1980) coined the term “prosumer” decades ago — this is not only a matter of the slow progression of the law. Rather, services that transform user labour into economic value are given the status of working in between the categories of law — user rights, intellectual property rights, etc. — in effect a generalized exception for Web 2.0 services that have proven themselves financially viable. Web 2.0, with all its symbolic and economic power, occupies a privileged space at the moment — representing the class of the new (Barbrook, 2006) — and is expected to deliver new models for economic growth. Only if Web 2.0 fails to deliver what it promises might we see regulation in this area.

Taken together, our analysis suggests that the popular and widespread opposition between copyright on the one hand, and the freedom of the cultural commons on the other, is a much too simplified dichotomy. As we have tried to show, a condition for many of the Web 2.0 services is a weakened conception of intellectual property rights, weakened notions of authorship and a disregard for the rights to the fruits of intellectual or immaterial labour. Our argument has been that it is questionable whether this really leads — in the case under inspection here — to a strengthening of the commons and a truly egalitarian and participatory production of culture. We need to think more about how rights to participation, and rights to a share of the value we produce together, are distributed and regulated, than about how we can achieve less regulation. A just regulation of the commons cannot take place if we forget the differences in power that accompany technologies and capital, or the differences in interests between those who own the means of production and those who do not.

Since in this article we have tried to capture aspects of an ongoing development, our work should be seen as a plea for further research in this area. A lot will be decided in the near future: in the practice of law and in the construction of new laws. For the reasons outlined in the article, we want to see increased vigilance from the academic community, not only in relation to the strengthening of copyright, but also in the opposite case, in relation to the free labourers of the Web.

 

About the authors

Peter Jakobsson is a PhD candidate at Södertörn University, Sweden. His thesis work tracks the development of copyright in relation to new media in a Swedish context. He has previously published articles on game studies in, for example, the European Journal of Cultural Studies and Interactions.

Fredrik Stiernstedt is a PhD candidate and Junior Lecturer at Södertörn University. His dissertation research is on the political economy of Web 2.0. He has previously worked in the field of radio research and published in, for example, the Radio Journal.

Together they are currently working on a project about the material aspects of informational culture, including work on data centres and the role of digital archives in the information economy.

 

Notes

1. For example, in a Swedish context, the right–wing government states on the one hand that its policy is that creators should have better opportunities to waive their rights (Dagens Nyheter, 2009), while on the other, it issues legislation that reinforces and increases the possibilities for rights–holders (presumably the cultural industries) to monitor and supervise their rights, as well as strengthening copyright generally.

2. Agamben, 1998, p. 6.

3. Ibid., p. 4.

4. Ibid., p. 19.

5. Agamben, 2004, p. 611.

6. Harvey, 2003, p. 137.

7. E.g., a leading Swedish newspaper, Dagens Nyheter, has published a list of the 100 greatest moments in rock history as seen on YouTube, at http://www.dn.se/kultur-noje/musik/dns-fredrik-strage-blir-youtube-ciceron-1.632918, accessed 5 November 2009.

8. http://www.youtube.com/user/UNESCOVideos, accessed 5 November 2009.

9. The technology is explained in a white paper published on Applied Semantics’ Web site: CIRCA technology: Applying meaning to information management. The white paper, along with most of the material on the site, was, however, removed when Google acquired the company. The paper can be retrieved via an e–mail request to the authors of this paper.

10. Paşca, 2007, p. 109.

11. Bunescu and Paşca, 2006, p. 10.

12. Paşca, 2007, p. 103.

13. Durme and Paşca, 2008, p. 1,244.

14. See, for example, online guides such as http://www.buildwebsite4u.com/building/web-crawlers.shtml.

15. Even if Facebook, by way of contract, secures a “non–exclusive, transferable, sub–licensable, royalty–free, worldwide license to use any IP content that you post on or in connection with Facebook (‘IP License’).” (Facebook.com, 2009b).

16. Jenkins, 2006, p. 70.

17. “The Anti–counterfeiting Trade Agreement — Summary of key elements under discussion,” p. 2.

18. Ibid.

19. Ibid., p. 4.

20. “Anti–counterfeiting: Participants meet in Tokyo to discuss ACTA,” p. 1.

21. http://ec.europa.eu/information_society/eeurope/i2010/pc_post-i2010/index_en.htm.

22. Agamben, 2005, p. 50.

 

References

Giorgio Agamben, 2005. State of exception. Translated by Kevin Attell. Chicago: University of Chicago Press.

Giorgio Agamben, 2004. “Interview with Giorgio Agamben — Life, A work of art without an author: The state of exception, the administration of disorder and private life,” Interviewed by Ulrich Raulff, German Law Journal, at http://www.germanlawjournal.com/article.php?id=437, accessed 5 July 2010.

Giorgio Agamben, 1998. Homo sacer: Sovereign power and bare life. Translated by Daniel Heller–Roazen. Stanford, Calif.: Stanford University Press.

“Anti–counterfeiting: Participants meet in Tokyo to discuss ACTA — Tokyo, 9 October 2008,” at http://trade.ec.europa.eu/doclib/docs/2008/october/tradoc_141203.pdf, accessed 18 December 2009.

“The Anti–counterfeiting Trade Agreement — Summary of key elements under discussion,” at http://trade.ec.europa.eu/doclib/docs/2009/november/tradoc_145271.pdf, accessed 18 December 2009.

Applied Semantics, 2002. CIRCA technology: Applying meaning to information management. White paper that is presently unavailable; can be retrieved via an e-mail request to the authors of this paper.

“Article 29 Data protection working party,” at http://ec.europa.eu/justice_home/fsj/privacy/news/docs/pr_google_16_05_07_en.pdf, accessed 18 December 2009.

Richard Barbrook, 2006. The class of the new. [s.l.]: Openmute.

Yochai Benkler, 2006. The wealth of networks: How social production transforms markets and freedom. New Haven, Conn.: Yale University Press.

James Boyle, 2008. The public domain: Enclosing the commons of the mind. New Haven, Conn.: Yale University Press.

James Boyle, 2003. “The second enclosure movement and the construction of the public domain,” Law and Contemporary Problems, volume 66, pp. 33–74.

Razvan Bunescu and Marius Paşca, 2006. “Using encyclopedic knowledge for named entity disambiguation,” Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL–06), Trento, Italy (April), pp. 9–16.

Pete Cashmore, 2009. “RIP Facebook Beacon,” Mashable (19 September), at http://mashable.com/2009/09/19/facebook-beacon-rip/, accessed 18 December 2009.

Dagens Nyheter, 2009. “Moderaterna ska jobba för en dynamisk upphovsrätt” [“The Moderate Party will work for a dynamic copyright law”] (17 December), at http://www.dn.se/opinion/debatt/moderaterna-ska-jobba-for-en-dynamisk-upphovsratt-1.1015204, accessed 18 December 2009.

Massimo De Angelis, 2001. “Marx and primitive accumulation: The continuous character of Capital’s ‘enclosures’,” Commoner, number 2, pp. 1–22.

José van Dijck, 2009. “Users like you? Theorizing agency in user–generated content,” Media, Culture & Society, volume 31, number 1, pp. 41–58. http://dx.doi.org/10.1177/0163443708098245

José van Dijck and David Nieborg, 2009. “Wikinomics and its discontents: A critical analysis of Web 2.0 business manifestos,” New Media & Society, volume 11, number 5, pp. 855–874. http://dx.doi.org/10.1177/1461444809105356

Directive 2000/31/EC, 2000. “On certain legal aspects of information society services, in particular electronic commerce, in the Internal Market,” Official Journal L 178, pp. 1–16.

Directive 95/46/EC, 1995. “On the protection of individuals with regard to the processing of personal data and on the free movement of such data,” Official Journal L 281, pp. 31–50.

Benjamin van Durme and Marius Paşca, 2008. “Finding cars, goddesses and enzymes: Parametrizable acquisition of labeled instances for open–domain information extraction,” Proceedings of the 23rd AAAI Conference on Artificial Intelligence, volume 2, pp. 1,243–1,248.

Nick Dyer–Witheford, 1999. Cyber–Marx: Cycles and circuits of struggle in high–technology capitalism. Urbana: University of Illinois Press.

European Commission, 2009. “Creative Content in a European Digital Single Market: Challenges for the Future. A Reflection Document of DG INFSO and DG MARKT” (22 October), at http://ec.europa.eu/avpolicy/docs/other_actions/col_2009/reflection_paper.pdf, accessed 18 March 2010.

European Commission, 2008a. “Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on Creative Content Online in the Single Market,” COM(2007) 836 final (3 January), Brussels: European Commission, at http://eur-lex.europa.eu/, accessed 5 July 2010.

European Commission, 2008b. “Commission staff working document: Document accompanying the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on Creative Content Online in the Single Market,” COM(2007) 836 final (3 January), Brussels: European Commission, at http://eur-lex.europa.eu/, accessed 5 July 2010.

Greg Elmer, 2004. Profiling machines: Mapping the personal information economy. Cambridge, Mass.: MIT Press.

Facebook.com, 2009a. “Statistics,” at http://www.facebook.com/press/info.php?statistics, accessed 18 December 2009.

Facebook.com, 2009b. “Statement of rights and responsibilities,” at http://www.facebook.com/terms.php?ref=pf, accessed 18 December 2009.

Joëlle Farchy, 2009. “Economics of sharing platforms: What’s wrong with the cultural industries,” In: Pelle Snickars and Patrick Vonderau (editors). The YouTube reader. Stockholm: National Library of Sweden, pp. 360–371.

Christian Fuchs, 2009. “Towards a critical theory of information,” TripleC, volume 7, number 2, pp. 243–292.

Google, 2009. “Privacy policy” (11 March), at http://www.google.com/privacypolicy.html, accessed 18 December 2009.

Google, 2008. “Gmail privacy notice,” at http://mail.google.com/mail/help/privacy.html, accessed 18 December 2009.

Google, 2003. “Google acquires Applied Semantics” (23 April), at http://www.google.com/press/pressrel/applied.html, accessed 18 December 2009.

Michael Hardt and Antonio Negri, 2009. Commonwealth. Cambridge, Mass.: Belknap Press of Harvard University Press.

David Harvey, 2003. The new imperialism. Oxford: Oxford University Press.

Henry Jenkins, 2006. Convergence culture: Where old and new media collide. New York: New York University Press.

Lawrence Lessig, 2001. The future of ideas: The fate of the commons in a connected world. New York: Random House.

John Letzing, 2010. “Google victorious in Viacom’s YouTube lawsuit,” MarketWatch (23 June), at http://www.marketwatch.com/story/google-victorious-in-viacoms-youtube-lawsuit-2010-06-23, accessed 5 July 2010.

Sonia Livingstone, 2008. “Taking risky opportunities in youthful content creation: Teenagers’ use of social networking sites for intimacy, privacy and self–expression,” New Media & Society, volume 10, number 3, pp. 393–411. http://dx.doi.org/10.1177/1461444808089415

Geert Lovink and Sabine Niederer, 2008. Video Vortex reader: Responses to YouTube. Amsterdam: Institute of Network Cultures.

Geert Lovink and Trebor Scholz, 2007. The art of free cooperation. New York: Autonomedia.

Wainer Lusoli and Caroline Miltgen, 2009. “Young people and emerging digital services: An exploratory survey on motivations, perceptions and acceptance of risks,” JRC Scientific and Technical Reports, EUR 23765 EN, at http://ipts.jrc.ec.europa.eu/publications/pub.cfm?id=2119.

Karl Marx, 1976. Capital: A critique of political economy. Volume 1. Translated by Ben Fowkes. London: Penguin.

Paul McDonald, 2009. “Digital discords in the online media economy: Advertising versus content versus copyright,” In: Pelle Snickars and Patrick Vonderau (editors). The YouTube reader. Stockholm: National Library of Sweden, pp. 387–405.

Nielsen, 2009. “Nielsen announces September U.S. online video usage data” (12 October), at http://en-us.nielsen.com/main/news/news_releases/2009/october/nielsen_announces, accessed 18 December 2009.

Nielsen, 2008. “The video generation: Kids and teens consuming more online video content than adults at home, according to Nielsen Online,” at http://en-us.nielsen.com/etc/medialib/nielsen_dotcom/en_us/documents/pdf/press_releases/2008/june.Par.92328.File.pdf, accessed 18 December 2009.

Tim O’Reilly, 2005. “What is Web 2.0: Design patterns and business models for the next generation of software,” at http://oreilly.com/web2/archive/what-is-web-20.html, accessed 18 December 2009.

Zizi Papacharissi, 2009. “The virtual geographies of social networks: A comparative analysis of Facebook, LinkedIn and ASmallWorld,” New Media & Society, volume 11, numbers 1–2, pp. 199–220.

Marius Paşca, 2007. “Organizing and searching the World Wide Web of facts — step two: Harnessing the wisdom of the crowds,” Proceedings of the 16th International Conference on World Wide Web (Banff, Alberta, Canada), pp. 101–110.

Matteo Pasquinelli, 2008. Animal spirits: A bestiary of the commons. Rotterdam: NAi Publishers.

Michael Perelman, 1998. Class warfare in the information age. New York: St. Martin’s Press.

Søren Mørk Petersen, 2008. “Loser generated content: From participation to exploitation,” First Monday, volume 13, number 3, at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2141/1948, accessed 5 July 2010.

Viviane Reding, 2008. “Social networking in Europe: Success and challenges,” speech presented at the Safer Internet Forum, Luxembourg (26 September), at http://europa.eu/rapid/pressReleasesAction.do?reference=SPEECH/08/465, accessed 4 January 2010.

Greg Sandoval, 2009. “Did Viacom find smoking gun in YouTube case?” at http://news.cnet.com/8301-31001_3-10365329-261.html, accessed 18 December 2009.

Trebor Scholz, 2008. “Market ideology and the myths of Web 2.0,” First Monday, volume 13, number 3, at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2138/1945, accessed 5 July 2010.

Richard Sennett, 2008. The craftsman. New Haven, Conn.: Yale University Press.

Pelle Snickars, 2009. “The archival cloud,” In: Pelle Snickars and Patrick Vonderau (editors). The YouTube reader. Stockholm: National Library of Sweden, pp. 292–313.

Pelle Snickars and Patrick Vonderau (editors), 2009. The YouTube reader. Stockholm: National Library of Sweden.

Tom Spring, 2009. “A Facebook democracy? Users invited to shape site’s policies,” at http://www.macworld.com/article/139070/2009/02/facebook_policies.html, accessed 18 December 2009.

Alvin Toffler, 1980. The third wave. London: Collins.

William Uricchio, 2004. “Beyond the great divide: Collaborative networks and the challenge to dominant conceptions of creative industries,” International Journal of Cultural Studies, volume 7, number 1, pp. 79–90. http://dx.doi.org/10.1177/1367877904040607

Francis Wheen, 1999. Karl Marx. London: Fourth Estate.

Slavoj Žižek, 2009. First as tragedy, then as farce. New York: Verso.

 


Editorial history

Paper received 4 January 2010; revised 18 March 2010; accepted 2 June 2010.


Copyright © 2010, First Monday.
Copyright © 2010, Peter Jakobsson and Fredrik Stiernstedt.

Pirates of Silicon Valley: State of exception and dispossession in Web 2.0
by Peter Jakobsson and Fredrik Stiernstedt.
First Monday, Volume 15, Number 7 - 5 July 2010
https://firstmonday.org/ojs/index.php/fm/article/download/2799/2577