Web 2.0 user knowledge and the limits of individual and collective power
First Monday

by Nicholas Proferes



Abstract
This paper revisits and builds upon a prominent theme in First Monday’s 2008 “Critical perspectives on Web 2.0” special issue: exploration and critique of the claim that Web 2.0 “... provides novel opportunities for the articulation of individual and collective social power by enhancing participation in media production and cultural expression” (Zimmer, 2008). This article refreshes the critique by examining how impediments to users’ technical knowledge about how Web 2.0 platforms function affect user power.

After reviewing how the scholarly literature on Web 2.0 accounts for the user knowledge/user power connection, this paper suggests Braman’s (2006) concept of “informational power” as a useful heuristic for exploring how access to information about how Web 2.0 platforms function operates as a “genetic” facet of user power. Then, using Braman’s framework, the article reviews two cases of struggle over informational power in regards to the Twitter platform: controversy over the Library of Congress Twitter archive and Occupy Wall Street protesters inaccurately accusing Twitter of censorship because of a misunderstanding about “Trending Topics.” Through the application of Braman’s framework to these cases, it becomes clear how users’ prospects for developing technical knowledge about the platforms shape the limits and horizons of individual and collective power.

Contents

Introduction
Literature review: Dis/empowerment
User knowledge of information flow in Web 2.0 as informational power
Applying the informational power framework
Conclusion

 


 

Introduction

In 2004, Tim O’Reilly began promoting the term “Web 2.0” to describe a new generation of Web-based technologies. He defined Web 2.0 as:

... the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others, creating network effects through an ‘architecture of participation,’ and going beyond the page metaphor of Web 1.0 to deliver rich user experiences. (O’Reilly, 2005)

Since O’Reilly’s definition and the subsequent proliferation of Web 2.0 services, there has been ongoing discussion about the impact of this “architecture of participation” on individual and collective social power.

Some have argued Web 2.0 technologies enhance user power. For example, Andersen (2007) argues that within Web 2.0 environments, users move from being “passive” consumers of content to having capabilities for “participatory” and “active” engagement [1]. Through Web 2.0, users have the potential to gain or maintain cultural and social capital (Ellison, et al., 2007); undermine the authority of traditional media hierarchies and engage in participatory culture (Jenkins and Deuze, 2008); construct new identities, meet friends and colleagues, and engage with strangers (Albrechtslund, 2008); and even become more active participants in governance (Shirky, 2011). Web 2.0 technologies have been hailed as engendering group coordination and action (Shirky, 2010; Tapscott and Williams, 2008), which, in turn, can facilitate users becoming part of what Castells (2009) refers to as networked counter-power. Neologisms such as “produser” (Bruns, 2008a) and “prosumer” [2] (Quan-Haase and Young, 2010; Ritzer and Jurgenson, 2010) have been used to demarcate the shift in user capabilities.

Often, user empowerment narratives rest on O’Reilly’s logic that Web 2.0 technologies inherently serve as “architectures of participation.” However, these perspectives frequently conflate participation with empowerment. By highlighting the sometimes worrisome political-economic and ideological foundations of Web 2.0 technologies, many have critiqued these optimistic narratives. For example, authors in First Monday’s 2008 “Critical perspectives on Web 2.0” special issue, such as Jarrett (2008), Petersen (2008), and Scholz (2008), challenged the narrative of unbridled user empowerment through ideological and political-economic critique. They offered counters to the claim that Web 2.0 “... provides novel opportunities for the articulation of individual and collective social power by enhancing participation in media production and cultural expression” (Zimmer, 2008). More recent work by Andrejevic (2009), Cohen (2008), Fenton and Barassi (2011), Fuchs (2014, 2011), Gehl (2011), Sandoval (2012), and van Dijck (2013, 2012, 2009) shows how user empowerment narratives neglect how growth in user power is proportionally less than the growth in power of other institutional actors; how users are left with little true choice or control over how information about them is collected and shared; how user labor is commoditized; and how users are alienated from the information they produce.

Despite 10 years of ongoing back and forth about whether Web 2.0 empowers or disempowers, there remain outstanding gaps in the conversation. Notably, this debate often only tacitly addresses the connection between user knowledge of Web 2.0 technologies — how data, algorithms, protocols, defaults, policies, platform business practices, and information flows are implemented and arranged — and user power. This misses the important role understandings of technology and its antecedents play in shaping users’ relative power.

This paper suggests reframing the current approach to user power by leveraging Braman’s (2006) “informational power” framework. Braman’s framework facilitates identifying how users’ access to technical knowledge of Web 2.0 platforms operates as a genetic factor in their expressions of power. This article then illustrates two cases of struggles over informational power in the context of Twitter: controversy over the Library of Congress Twitter archive and Occupy Wall Street protesters inaccurately accusing Twitter of censorship because of a misunderstanding about “Trending Topics.” Through the application of Braman’s framework to these cases, it becomes clear how impediments to users developing knowledge about this platform shape the limits and horizons of individual and collective power.

 

++++++++++

Literature review: Dis/empowerment

Contextualizing the literature

Revisiting “Web 2.0” and the literature about it a decade later presents something of a challenge. The phrase “Web 2.0” seems to have fallen out of favor, and some academics reflect on the early optimistic responses to Web 2.0 as, perhaps, joyful naivety. In the contemporary academic literature, there is greater emphasis on the booming outgrowth of Web 2.0 technologies: social media sites. Yet, revisiting the early promises of Web 2.0 alongside the more recent scholarship on social media sites reveals an evolution (perhaps a dawning disappointment) as these so-called technologies of empowerment were slowly realized as data-mined, mobile, always-on, surveillant social media [3]. When viewed this way, the debate over dis/empowerment appears as an evolution in understanding: from participatory emancipation as empowerment to the (re)imposition of labor alienation, exploitation, and ultimately domination as disempowerment. However, such a narrative paints an incomplete picture, removing from consideration how users’ access to knowledge about how Web 2.0 technologies function impacts users beyond the participation/exploitation storyline.

Locating user knowledge in early Web 2.0 empowerment literature

O’Reilly’s Web 2.0

In 2007, O’Reilly expanded his previous definition of Web 2.0, citing additional undergirding principles: the Web as platform, the harnessing of collective intelligence and the wisdom of the crowd, the importance of data (calling it the next “Intel inside”), and the end of the software release cycle, among others. In detailing these principles, O’Reilly paints an optimistic view of what Web 2.0 can enable. However, he ignores the role user knowledge plays in making the realization of these principles possible. In doing so, O’Reilly’s vision glosses over the contingent nature of the emancipatory potential of Web 2.0.

Reflecting on O’Reilly’s claims in light of the 2016 technological milieu surfaces some of these shortcomings. For example, in his discussion of how Web 2.0 technologies can help harness collective intelligence, O’Reilly states:

Wikipedia, an online encyclopedia based on the unlikely notion that an entry can be added by any Web user, and edited by any other, is a radical experiment in trust, applying Eric Raymond’s dictum (originally coined in the context of open source software) that “with enough eyeballs, all bugs are shallow,” to content creation. Wikipedia is already in the top 100 Web sites, and many think it will be in the top ten before long. This is a profound change in the dynamics of content creation! [4]

Today, the claim that Wikipedia can truly be added to by any Web user — and is therefore a shining example of the promise of Web 2.0’s ability to harness collective wisdom — has fallen short. Many have pointed out Wikipedia has only a small number of elite editors who contribute a significant amount of content to the platform (Anderson, 2012; Kittur, et al., 2007; Ortega, et al., 2008); that over 90 percent of Wikipedia’s editors are male (Khanna, 2012); that the global north has far greater representation on Wikipedia than the global south (in addition to many other discrepancies in the equality of representation in content [Simonite, 2013]); and that the number of active editors has been in decline since 2007 (Simonite, 2013). In fact, getting new Wikipedia users to contribute has been a significant challenge for the platform. Doing so requires educating them on the many policies of the platform in addition to training them on how the technical interface works. While Wikipedia has made several attempts to lower the knowledge barrier for new users, these attempts have been met with setbacks and failures (Simonite, 2013). In their study of the role of people’s background attributes and Internet skills in participation on Wikipedia, Hargittai and Shaw (2015) find skills are “an extraordinarily robust predictor of contributing to Wikipedia and also help explain a key dimension of the gender gap among Wikipedia editors.” [5]

O’Reilly also ignores many of the potential consequences of some of Web 2.0’s principles for users’ abilities to develop technical understandings of these platforms. For example, as part of his discussion regarding the end of the software release cycle, O’Reilly argues platforms such as Google Mail, Flickr, and others can now essentially stay in perpetual beta, undergoing continuous tweaking, changing, and updating to provide new functionalities. However, in this paradigm, users may be less aware of when changes to a Web site or service occur (as users do not have to install a “new release”). Further, they may not know what has changed when updates do occur. These changes contribute to a situation where “platforms” are constantly shifting under users’ feet. This can lead to problems for users and, depending on the change, has the potential to undermine users’ earlier decisions. For example, some users were upset when, in 2015, Facebook made all public posts searchable (Bhutia, 2015). Previously, those posts were still “public,” but did not have the same level of discoverability. Users may have made decisions about posting publicly based on their understanding of the platform. Users today may still be making decisions to post publicly based on inaccurate and out-of-date understandings of the platform’s information flows and the discoverability of content. The end of the product rollout means a higher demand on users’ cognitive abilities to continuously parse opaque interfaces and detect change.

Finally, in discussing how the Web 2.0 paradigm shifts software above the level of a single device, O’Reilly argues these technologies “leverage the power of the Web platform, making it a seamless, almost invisible part of their infrastructure” [6]. While O’Reilly celebrates the invisibility of infrastructure, it is exactly this invisibility that carries consequences for users’ abilities to develop technical knowledge.

Produsers

In light of changes to the traditional production/consumption model engendered by Web 2.0 technologies, Bruns (2008a, 2008b, 2007, 2006) introduces the term “produsage” to describe “the collaborative, iterative, and user-led production of content by participants in a hybrid user-producer, or produser, role” [7]. In describing the kinds of education members of society will need to be effective produsers, Bruns identifies four target capacities: creative, collaborative, communicative, and critical. He acknowledges the need for education to “address and problematize the processes and practice of user-led content creation itself, in order to help participants develop a more informed, self-reflexive, and critical perspective on their own practices as information seekers, users, and providers ... .” [8] However, in his review of each of the capacities, the purpose of knowledge about these spaces is framed as increasing users’ produsive capacities; facilitating co-creation and the ability to work in a group; and fostering users’ abilities to make decisions about when, where, and with whom to produse.

Bruns’ reasoning about the necessity of critical capacities is that they are needed “to discern whether a particular piece of information is to be trusted, to look beyond the survey to examine the sources for that information, and the process of its produsage (such as, for example, the edit history of a Wikipedia entry) ... .” [9] However, critically evaluating information on a Wikipedia page also requires that users have policy knowledge and technical knowledge of the platform: first, knowing that every user edit is saved within the edit history; and second, knowing how to pull up the revision history page within the Wikipedia interface, how to make sense of (often quite long) edit histories, and the conventions used to convey information about the edits. Hence, the ability to evaluate information critically is highly dependent on understanding the technology itself.
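The layered technical knowledge this critical capacity presupposes can be made concrete. Wikipedia’s revision histories are exposed not only through the interface but also through the public MediaWiki API, yet making use of either channel requires first knowing that the history exists and how queries for it are parameterized. The following minimal sketch builds a revision-history query against the English Wikipedia API; the article title and revision limit are illustrative:

```python
from urllib.parse import urlencode

# Public endpoint of the English Wikipedia's MediaWiki API.
API_ENDPOINT = "https://en.wikipedia.org/w/api.php"

def revision_history_url(title, limit=10):
    """Build a MediaWiki API query URL for an article's recent revisions.

    The JSON response lists, for each revision, the editing user, the
    timestamp, and the edit summary: the raw material a reader needs
    in order to evaluate how a page was prodused.
    """
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "user|timestamp|comment",  # fields to return per revision
        "rvlimit": limit,                    # how many revisions to fetch
        "format": "json",
    }
    return API_ENDPOINT + "?" + urlencode(params)

# Example: the query URL for the "Web 2.0" article's ten most recent edits
print(revision_history_url("Web 2.0"))
```

Even this small example presumes several layers of platform knowledge: that every edit is retained, that revisions carry attributed metadata, and that the API’s parameter conventions (`rvprop`, `rvlimit`) must be learned before the history can be read at all.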

Locating user knowledge in Web 2.0 critique

Many scholars have criticized rosy depictions of Web 2.0 user empowerment as neglecting the technology’s exploitative political-economic foundations. For example, Jarrett (2008) argues “Techniques of power which construct and promote this subject position serve to negate the hierarchy of traditional producer/consumer relations. Yet, this strategy can only function in relation to a producer/consumer power relation which remains ... ultimately, unchanged.” [10] Critics argue that while users may gain access to expanded communication capabilities or the possibility of engaging in “participatory culture” through Web 2.0 platforms, this power is only gained through the imposition of a laborer/owner power dynamic. As van Dijck and Nieborg (2009) put it, while peer-production models appear to be replacing older top-down approaches, these models exist “entirely inside commodity culture.” [11]

Despite arguments made by scholars such as Jenkins and Deuze (2008) that user empowerment undermines the authority of traditional media hierarchies, users rarely have any measure of control over the information flows within these spaces and generally do not share in any of the profit extracted from their labor (Terranova, 2004, 2000). Ritzer and Jurgenson (2010) argue that the relationship between user and Web 2.0 technologies represents a kind of “prosumer capitalism” in which “control and exploitation take on a different character than in the other forms of capitalism ... there is a trend toward unpaid rather than paid labor and toward offering products at no cost.” [12]

Scholars hailing from critical Marxist traditions argue that this is not inherently a new phenomenon, but is essentially old wine in a “2.0” bottle. They argue the problem stems from age-old alienation and the exploitation of the laborer: the laboring class does not have power or control over the means of production (Petersen, 2008) and is exploited so the capitalist can gain surplus value (Fuchs, 2014, 2011). The fact users do not own or control the means of production of most Web 2.0 platforms means they have little influence over what happens to the information they produce. In this disempowerment discourse, power relations between users and Web 2.0 technologies seem built on a foundation of labor alienation, exploitation, and ultimately domination (Cohen, 2008; Coté and Pybus, 2007; Fuchs, 2011, 2008; Petersen, 2008).

Several authors suggest the exploitative nature of the relationship between user and Web 2.0 purveyor is furthered by impediments to the construction of user knowledge. Social media platforms in particular exacerbate exploitation because they depend so heavily on the surveillance of users in combination with selling user data. Critical accounts of many popular social media platforms reveal how user knowledge can be shaped and influenced by technological design and technological discourse.

Code politics and algorithms

Langlois, et al. (2009) argue a common characteristic of commercial Web 2.0 platforms is that the ways user-generated content is commodified are often kept invisible to users, specifically through the technical structuring and design of the sites. Referring to this as “code politics,” the authors state many Web 2.0 purveyors make strategic design decisions to reduce user resistance, purposefully hiding how information flows through these systems and subsequently becomes commodified. To illustrate their argument, Langlois, et al. discuss Facebook’s Beacon program, a controversial system Facebook implemented that facilitated greater commodification of user-generated information. The authors argue that Beacon became controversial not simply because it involved the commodification of user-generated information, but because it became visible and known to users. Suddenly, through their use of the site, users were confronted with how the information they generated flowed through Facebook’s platform to third parties and vice versa. Users petitioned Facebook for an immediate halt to Beacon, and the program eventually became the basis of a lawsuit brought by a small number of users. Despite Facebook eventually putting a stop to that particular program, the “processes of commercialization ... are still taking place on the Facebook platform ... these processes, however, increasingly take place at the back-end level and because they are invisible to users, they meet with less resistance” [13]. In engaging in “code politics,” platform purveyors can thus inhibit the development of users’ knowledge in order to cultivate particular information production and consumption practices, potentially reducing “resistance” from users.

Langlois, et al.’s example highlights some of the tensions between the development of user knowledge about how Web 2.0 technologies function and their commercial success. Individuals make use decisions based on their understandings of the platform, and it is often in the economic interest of these companies to promote forms of user knowledge that are conducive to users participating/laboring. While user resistance may not often result in technological change, non-use is a pressing and ongoing concern for the businesses that run many of today’s social media sites (Shah, 2016). As a result, platform purveyors have strong economic incentives for structuring their material technologies in ways that promote only certain kinds of understanding about these platforms.

Gillespie’s (2014, 2010) work on algorithms and platforms expands beyond the code politics argument. In his analysis of the increasingly important role algorithms play in selecting what information to show users, he demonstrates how previous modes of scholarly thinking fail to fully account for the reality of invisible change taking place within many of these platforms. While algorithms are not strictly limited to Web 2.0 technologies and social media platforms, their design and implementation in these spaces fits into the ongoing conversation about disempowerment. Gillespie argues that while algorithms may be “black boxed” (Pinch and Bijker, 1984) by designers, the black box metaphor, as traditionally used within the science and technology studies literature, falls short in this space by failing to adequately capture the reality of how algorithms are both obscured and continuously modified. To integrate O’Reilly’s vision of Web 2.0 with Gillespie’s observations, the end of the software release cycle means that algorithms can be “easily, instantly, radically, and invisibly changed” [14]; tweaked endlessly without the changes being observable to users.

Technological discourse

The work of van Dijck and Nieborg (2009) suggests users’ difficulty in developing knowledge about information flows on Web 2.0 platforms may be perpetuated not only by the structuring/code politics of a site, but also by the technological discourse surrounding it. In analyzing a number of Web 2.0 business manifestos, the two observe:

Web 2.0 manifestos ... typically do not provide any technological details about how various sites render profitable business models ... they focus on the emancipation of consumers into users and co-creators, rather than on the technical details concerning how these sites turn a profit. [15]

Gillespie (2010) similarly argues, “Online content providers such as YouTube are carefully positioning themselves to users ... making strategic claims as to what they do and do not do, and how their place in the information landscape should be understood” [16]. The language a Web 2.0 purveyor chooses to describe itself to users is critically important as a factor facilitating users’ knowledge of a platform. As Gillespie (2010) notes, even the decision to use the term “platform” is a strategic choice, rife with implications for the way these technologies are understood.

Privacy policies and governing documents are another way that Web sites communicate how these platforms work and what happens to information collected from users. According to Jensen and Potts (2004), privacy policies “are meant to inform consumers about business and privacy practices and serve as a basis for decision making for consumers” [17]. However, like the material technologies, these documents are often opaque. Cranor (2003) argues “readability experts have found that comprehending privacy policies typically requires college-level reading skills” [18] and Fernback and Papacharissi (2007) suggest that while the documents may describe the general kinds of information a company collects, the language is often purposefully broad or vague.

While Jensen and Potts (2004) suggest that privacy policies are meant to inform consumers about what businesses collect and do with individuals’ information, Fernback and Papacharissi (2007) instead argue, “Privacy statements, crafted by staff attorneys, are written to coincide with business models so that firms may maximize the ability to profit from information that they capture” [19]. Rather than being technological discourse designed for the benefit of users, these documents are constructed so that, in the event of a complaint, the company can be absolved of legal responsibility.

Taken together with the work on code politics and ever-changing algorithms, a rather bleak picture of the transparency of Web 2.0 technologies emerges. When the technical details of these spaces are not perceivable through the structure of the sites and are not easily discoverable through the technological discourse around these sites, users are put at a significant disadvantage in their abilities to build knowledge of how these systems function. This paper next argues that, as a result, users are at a de facto disadvantage in their ability to develop power.

Alternative reasons why knowledge of how Web 2.0 sites work is important for users

This review of the Web 2.0 dis/empowerment literature has surfaced several strains of thought as to why there may be cause for concern about user knowledge. First, a lack of user knowledge is a concern if it inhibits users from participating/produsing within the platforms, or if users apply incomplete or inaccurate knowledge as part of their technology use and produse incorrectly. User knowledge may also be a concern as barriers to its development further a situation where users are alienated from the information they create and are left with little recourse. Importantly, and of particular focus for the remainder of this article, impediments to the development of user knowledge inhibit the possibility of informed choice and the ability to participate in the larger social systems these platforms exist within. This can include (but is by no means limited to): the ability to participate in broader social conversations and democratic processes regarding the governance of these spaces; the ability to participate in conversations about Web 2.0 platforms outside of the platforms themselves; the ability to understand information from Web 2.0 platforms when it is encountered elsewhere (such as when a news organization discusses a “Trending Topic”); and the ability to make informed decisions about adoption/non-adoption and use/non-use.

Regardless of which consequence is most compelling, a lack of information regarding how these technologies function is a problem for users. Not only do users often have incomplete, inaccurate, or otherwise incorrect understandings of information flows in Web 2.0 spaces [20], but their development of more robust understanding can be hindered by the sites’ structures, code politics, and algorithmic opacity, and by the discourse surrounding them. This article now turns to argue that by adopting a different view of user power, one that explicitly accounts for access to information about how Web 2.0 technologies work as part of understandings of power, we can better trace how these impediments create horizons and limits for individual and collective power in Web 2.0.

 

++++++++++

User knowledge of information flow in Web 2.0 as informational power

Authors such as Weber (2009), French and Raven (1959), Lukes (1974), Wrong (1995), Keohane and Nye (1998), and Nye (2008) have approached conceptualizing power relations by classifying what different exercises of power look like, and on what basis those exercises of power are successful. The taxonomies these authors provide focus on how power operates in relation to states, organizations, and institutions [21]. At first glance, this makes them well suited for application to understanding how power operates through centralized platforms, such as Web 2.0 technologies. However, these taxonomies are not built with an explicit consideration for how information technologies affect power relations between organizations and constituents. For that, this article instead turns to the taxonomy of power offered by Braman (2006).

Braman (2006) identifies four forms of power: instrumental power, structural power, symbolic power, and informational power. Instrumental power is power that “shapes human behaviors by manipulating the material world via physical force,” structural power “shapes human behaviors by manipulating the world via rules and institutions,” symbolic power “shapes human behaviors by manipulating the material, social, and symbolic world via ideas, words, and images,” and informational power “shapes human behaviors by manipulating the informational bases of instrumental, structural, and symbolic power” [22]. While each of these forms of power is distinct, Braman observes three important properties of these forms of power. First, they are often co-located. For example, in conflicts among state actors, a state may exercise both kinetic warfare (instrumental power) and propaganda campaigns (symbolic power) simultaneously towards the same end. Second, the forms of power are often layered on and build on each other. For example, “smart weapons” layer informational power on top of conventional instrumental power, as they can target specific individuals based on informational data (such as cellphone locations, GPS information) and can operate more independently of human intervention. Lastly, informational power can often be a precondition for the exercise of other forms of power. For example, a base of informational power can be a precondition for the exercise of instrumental power, such as when having information on how to build a weapon, in the form of blueprints, is a precondition for creating that weapon and subsequently exercising the instrumental power inherent in its use [23]. Thus, Braman refers to informational power as “genetic” [24].

Braman’s observations about how informational power acts as a precondition for the exercise of other forms of power offer an entry point for considering how users’ access to information about how Web 2.0 technologies function can shape users’ later expressions of power. This can occur in two ways. First, access to information about how Web 2.0 technologies function can help individuals build knowledge that helps them realize affordances with that artifact. The realization of affordances can be in produsage, or in other forms of instrumental, structural, or symbolic power, such as protesting user-information commodification through non-use. Second, information about how Web 2.0 technologies function can also enable certain forms of action or exercises of power outside of the material use/non-use of the technology. For example, access to information about how a Web 2.0 technology operates would better enable an individual to petition the system’s designers for specific changes to a material artifact (an expression of symbolic power), or to participate in a democratic process regarding the regulation of a technology (an expression of structural power). Thus, access to information about how a Web 2.0 technology works impacts an individual’s relative informational power.

To ground the application of Braman’s framework in an explicit Web 2.0 example, consider a study by Eslami, et al. (2015) that explored users’ perceptions of Facebook’s “News Feed” algorithm. The authors found more than half of their participants were totally unaware of the algorithm’s presence. After being shown a version of their News Feed unaltered by the algorithm, many respondents indicated they had not realized they did not see all of their friends’ activities and were frustrated by this discovery. The authors found “participants often attributed missing stories to their friends’ decisions to exclude them rather than to the Facebook News Feed algorithm” [25]. Participants who were unaware of the algorithm’s presence also reported using News Feed to make inferences about their relationships, “wrongly attributing the composition of their feeds to the habits or intent of their friends and family” [26].

The findings of the Eslami, et al. study help explain why user knowledge, and access to the information from which user knowledge about how Web 2.0 platforms work is built, carries implications beyond produsive capacity or political-economic relations. With a broader base of informational power, users are better equipped not just to make sense of the information they consume from these platforms, but also to understand how and why that information is presented to them. It fosters an understanding of the context of the informational experience, the ability to respond to that context, and the ability to apply the information in new contexts and in other exercises of power. Thus, access to information about how technologies function creates a genetic base for other expressions of power, both within the Web 2.0 environment and outside of it. As the cases discussed next show, informational power — and a lack thereof — helps shape the horizons of individual and collective power.

 

++++++++++

Applying the informational power framework

Library of Congress archive

In 2010, the Library of Congress announced it had struck a deal with Twitter. In a blog post entitled, “How tweet it is!,” the Library declared that, “Every public tweet, ever, since Twitter’s inception in March 2006, will be archived digitally at the Library of Congress” [27]. The deal was celebrated by those who saw tweets as “a history of commentary that can provide valuable insights into what’s happened and how people have reacted” [28]. With, at the time, more than 100 million users tweeting 55 million times a day, Twitter’s archive of user-generated content had seemingly become of important cultural and historical value.

However, some Twitter users were not pleased with the newly instilled sense of permanence tweets were now being given (Zimmer, 2015), with some remarking they had not realized that Twitter had been keeping old tweets. Previously, there had been a technical limitation preventing users from pulling up more than 3,200 old tweets from within the timeline interface (Owens, 2010). Because of this technical structuring, some users may have developed incomplete or inaccurate understandings of what happened to older messages. Further, Baker (2011) describes Twitter’s documentation of the existence of the Library of Congress archive as “not reassuring” [29]. In her 2011 examination of Twitter’s terms of service and privacy policies, Baker found not a single mention of this archive, and highlighted the fact that Twitter’s messaging to users does not actively disclose that this information flow to the Library of Congress exists. She concludes:

Although it is not providing users with incorrect or false information, Twitter is capable of disclosing the Library of Congress Twitter archive in a more straightforward way. Explicit references to the archive institution and the restrictions placed on the archive would educate users and enable them to make more informed decisions about what they post. [30]

It should be noted that Twitter began mentioning the Library of Congress archive in a mid-2012 privacy policy update; in other words, Twitter went roughly two years without disclosing this information flow as part of its governance documents. In more recent research, Proferes (2015) found just shy of 80 percent of the Twitter users who participated in his study did not know about the Library’s archiving of tweets.

One might ask: what would a user with a more robustly developed base of informational power in regards to the archive do differently in this situation? They might seek out a service like TweetDeleter [31] or TweetEraser [32], which allow a user to delete tweets based on the year they were written, the content they contain, or their age. In a 2015 article on the tech news site Fusion, Roose interviewed a number of individuals (including a former Twitter employee) who have chosen to use scripts or other programs that allow them to control the lifecycle of tweets through timed deletion. Roose writes of one such interview:

Josh Miller, a product manager at Facebook, wrote a piece of code that deleted his tweets after seven days. He frames his tweet-deleting as a decision to make Twitter more like other forms of conversation. “My opinions aren’t permanent in my head (I often change my mind over time), and they’re not permanent when shared around the dinner table (nobody is recording our conversations),” Miller wrote in an e-mail. “So it just doesn’t make sense to me that they would be permanent online.” [33]
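Scripts like Miller’s are straightforward to write. The sketch below is a hypothetical illustration of the age-based selection step such a script performs; the commented-out helper names (`fetch_my_tweets`, `delete_tweet`) are stand-ins for whatever API client a user might have, not real Twitter API calls:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=7)  # Miller's seven-day window

def select_expired(tweets, now=None):
    """Return the tweets older than MAX_AGE.

    Each tweet is assumed to be a dict with 'id' and 'created_at'
    (a timezone-aware datetime) -- a stand-in for whatever shape
    a user's API client actually returns.
    """
    now = now or datetime.now(timezone.utc)
    return [t for t in tweets if now - t["created_at"] > MAX_AGE]

# A real script would then call its client's delete endpoint, e.g.:
# for t in select_expired(fetch_my_tweets()):
#     delete_tweet(t["id"])
```

The substantive point is that writing even this much requires knowing the information flow exists in the first place: the deletion logic is trivial, but only a user with the relevant informational power would think to run it.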

Hence, informational power can foster a greater degree of control (although by no means complete control) over the long-term fate of information flows, and a kind of renegotiation of power between the user, Twitter, and the Library of Congress.

With greater informational power in relation to long-term information flows on the platform, an individual may want to protest such an information flow by writing to their Congressperson, asking them to intervene and stop the archiving. They may be individuals in positions of structural power that could take advantage of the archive. They may want to contact the Library of Congress to secure access to the archive for historical research, to gain a better understanding of how Twitter users responded to various world events. They may want to stop using Twitter altogether to protest such archiving. Conversely, they may decide they are entirely comfortable with the archiving and may continue using the service just as they had been before. What is important is that access to information about how this part of the platform works creates the possibility for the individual to make a choice. Choice creates the possibility for the expression of informational power. These possibilities are closed off when users do not have the basis of informational power from which to enter these fields of action.

Changes in the information flows of the platform (as part of what O’Reilly lauds as the end of the software development lifecycle) affect users’ abilities to self-direct relative to their use of the technology. These changes can undermine the decisions that users made previously. Both technological design and discourse affect access to information from which a user can develop knowledge, thus shaping horizons for users’ informational power.

#Occupy

In an interview with NBC News, Philip Howard, the head of the Digital Activism Research Project, defined hashtag activism as, “what happens when someone tries to raise public awareness of a political issue using some clever or biting keyword on social media ... If your idea — linked to a good hashtag — gains traction you’ve started a kind of awareness campaign” (Brewster, 2014). Twitter has become a popular venue for activists attempting to raise awareness about specific issues, causes, or events (Gerbaudo, 2012). One particular way this is done is by getting a hashtag around the cause to show up as part of the “Trending Topics.” To accomplish this, users interested in an issue will generate tweets about a topic or event using a single hashtag in a focused manner, such as #KONY2012 (Carr, 2012). When the trending topic algorithm determines a specific topic has enough traffic to be counted as ‘trending,’ the hashtag or commonly used term becomes visible to other users in the “Trending Topics” section of the interface. Thus, the issue gains prominence within the platform and potentially beyond.

One of the difficulties of making something “trend” is that Twitter does not disclose exactly how the Trending Topic algorithm works (Gillespie, 2012). This lack of algorithmic transparency has led to some confusion about why some topics trend but others do not. For example, during the Occupy Wall Street protests, despite the fact that thousands of users were tweeting with the hashtag #OccupyWallStreet, the term never hit the New York area list of trending topics (Lotan, 2011). Some accused Twitter of censoring the trend (Jeffries, 2011). However, in response to an inquiry from the New York Observer, Twitter representatives pointed to an older company blog post, which states:

Twitter Trends are automatically generated by an algorithm that attempts to identify topics that are being talked about more right now than they were previously. The Trends list is designed to help people discover the ‘most breaking’ breaking news from across the world, in real-time. The Trends list captures the hottest emerging topics, not just what’s most popular ... Sometimes a topic doesn’t break into the Trends list because its popularity isn’t as widespread as people believe. And, sometimes, popular terms don’t make the Trends list because the velocity of conversation isn’t increasing quickly enough, relative to the baseline level of conversation happening on an average day ... [emphasis added] (Twitter.com, 2010)

Commenting on the matter in a blog post, Gilad Lotan, chief data scientist at the Internet start-up studio betaworks, states, “what seems to be happening [in #OccupyWallStreet not showing up as a trending topic] is purely algorithmic” (Lotan, 2011). Lotan argues that, for activists trying to raise the visibility of their cause through an appearance on the Trending Topics list, understanding how the algorithm determines what becomes visible is crucial for choosing appropriate messaging. After analyzing a massive dataset of tweets mentioning OWS-related terms and OWS-related Twitter trending topics during the Occupy protests, Lotan concludes:

[T]here’s much to be lost from not using the same hashtag or term. OWS members have been criticized for using multiple hashtags such as #ows, #OccupyWallStreet and #TakeBackWallStreet instead of sending out a clear message to just use one. The joint volume might not have sustained the trend, but would have definitely given it more visibility at the start ... if one seeks to keep a topic trending, it is important to change it every few days. ... a gradual rise in velocity only makes it harder and harder for the term to trend. (Lotan, 2011)
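The dynamic Lotan describes — trending rewards a spike in velocity relative to a topic’s own baseline, rather than raw popularity — can be made concrete with a toy model. This is a deliberately simplified sketch with invented parameters, not Twitter’s actual (undisclosed) algorithm:

```python
def is_trending(counts, window=3, spike_ratio=3.0, min_count=50):
    """Toy velocity-based trend check (illustrative only).

    counts: per-interval mention counts for one hashtag, oldest first.
    A tag 'trends' when its most recent interval is a large multiple
    of its own recent baseline -- so steady high volume never trends,
    no matter how popular the tag is overall.
    """
    if len(counts) < window + 1:
        return False
    current = counts[-1]
    baseline = sum(counts[-window - 1:-1]) / window
    return current >= min_count and current >= spike_ratio * max(baseline, 1)

# A sudden burst trends; a steadily popular tag does not.
burst = [5, 8, 6, 120]          # sharp velocity spike over a low baseline
steady = [900, 950, 980, 1000]  # high volume, but velocity barely rises
```

Under this model, `burst` trends while `steady` does not — which mirrors why #OccupyWallStreet, tweeted heavily but steadily, could fail to trend even as lower-volume bursts did.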

Lotan’s conclusions can be translated back into the concern over informational power and user knowledge. Imagine a hypothetical activist named Zelda. Zelda wants to raise public awareness about a particular issue important to her and her group. She has access to information about how Twitter’s Trending Topic algorithm works and knows that it functions by measuring the relative velocity of hashtag use in tweets in the short-term, rather than long-term sustained use. Zelda’s base of informational power thus fosters decision-making about how best to construct the content of her messages (an exercise of symbolic power). Because Zelda has this base of informational power, when she and her fellow activists seek to make a topic trend, she suggests they try to generate a short-term burst of tweets with the hashtag #ZeldasCampaign, but also that the group change the hashtag they use every few days to better their chances of appearing in the “Trending Topics.” Zelda and her group’s exercise of symbolic power is informed by her informational power vis-à-vis her access to information about how Twitter’s Trending Topic algorithm functions. In turn, the individual’s and group’s ability to effect change in the world more broadly through the exercise of symbolic power is shaped in part by her relative informational power.

Twitter’s descriptions of how the Trending Topics algorithms work, and how well the site’s design communicates the algorithm’s functioning, are important in this situation because they contribute to Zelda’s informational power. There are certainly other sources Zelda might consult to learn about how Twitter’s Trending Topics algorithms work that also impact her relative informational power (such as other activists, friends or family, newspaper articles about protest movements, Lotan’s blog posts, experience watching other trending topics on the site, etc.), but the Twitter.com site itself is an influential point for this knowledge to be built or for inaccurate beliefs to be dispelled. For example, if Twitter prominently described in its new user orientation how Trending Topics are selected based on their short-term velocity and that topics that have been popular for a longer period of time may not show up as trending, and Zelda read this and internalized it, then Zelda would have a wider base of informational power than someone who had not. However, if Twitter does not prominently describe how its Trending Topic algorithm works, then Zelda’s informational power is entirely dependent on the accuracy of other sources she may (or may not) have used to build her understanding of the information flows on Twitter.

Further, other users’ access to information about how information flows on Twitter are technically constructed impacts their ability to make sense of and understand the context of #ZeldasCampaign if and when they see it as part of the Trending Topics; to know that this is a topic that is popular in the short-term rather than necessarily in the long-term. Their understanding of the technical reasons why this topic is trending may impact their impressions of the informational content conveyed by the campaign. This informational power might also inform their own exercise of symbolic power in choosing to contribute to posts around #ZeldasCampaign or choosing a different hashtag to use as part of a response.

 

++++++++++

Conclusion

The application of Braman’s informational power framework demonstrates the benefits of foregrounding access to information about how Web 2.0 platforms function as part of conversations about the horizons and limits of individual and collective social power. The cases above reveal the vast structural and symbolic power that Web 2.0 purveyors have in influencing how users can learn about how these spaces function technically. At the same time, these cases demonstrate how attention to informational power can take us beyond a conversation about empowerment/disempowerment that is tied merely to produsive capacity or political-economic relations. In application, then, studies showing that users frequently maintain inaccurate, incomplete, or incorrect understandings of Web 2.0 platforms cast a long shadow over deterministic narratives of unbridled user empowerment, instead revealing the boundaries and limits on the possibilities of user power.

Users with diminished states of informational power will face difficulty in exercising power in relation to the wider ecosystem that the information they create becomes a part of. After all, it is difficult to object, protest, or consciously consent to that which you do not know about or cannot learn about. Without knowledge of how these platforms function, users may have difficulty in produsage, in their political-economic relationship with the Web 2.0 world, and in making informed use decisions. A lack of knowledge about how Web 2.0 platforms function limits understanding of how the technical environment shapes individual informational experience, thus impacting information sense-making; it inhibits understanding the context of the environment others experience and how others may interpret information from Web 2.0 platforms; and it limits the expression of forms of power within the social, political, cultural, and economic world surrounding the Web 2.0 environment.

 

About the author

Nicholas Proferes is a postdoctoral scholar at the University of Maryland’s College of Information Studies. He studies how discourse, policy, and technological design impact the relationship between users and platforms.
E-mail: proferes [at] umd [dot] edu

 

Notes

1. There is, of course, a large body of earlier scholarly work arguing audiences are never actually “passive.” Instead, many media theorists have suggested different media facilitate different levels of engagement, sense-making, and meaning-making; that audiences’ ability to engage varies by individuals; and that engagement occurs even when an audience member cannot directly “talk back” through the media. See, for example, the differentiation between hot and cool media made by Marshall McLuhan (1994) and work on differentiating audiences by John Fiske (1987).

2. The term “prosumer” is generally originally credited to Toffler (1980).

3. Thank you to an anonymous reviewer for this point.

4. O’Reilly, 2007, p. 23.

5. Hargittai and Shaw, 2015, p. 437.

6. O’Reilly, 2007, p. 34.

7. Bruns, 2006, p. 1, emphasis original to text.

8. Bruns, 2007, p. 1.

9. Bruns, 2007, p. 7.

10. Jarrett, 2008, paragraph 28.

11. van Dijck and Nieborg, 2009, p. 855.

12. Ritzer and Jurgenson, 2010, p. 13. There is critique that this particular notion is not new; see Comor (2011) and Fuchs (2014).

13. Langlois, et al., 2009, paragraph 17.

14. Gillespie, 2014, p. 178.

15. van Dijck and Nieborg, 2009, p. 866, emphasis added.

16. Gillespie, 2010, paragraph 1.

17. Jensen and Potts, 2004, p. 471.

18. Cranor, 2003, p. 50.

19. Fernback and Papacharissi, 2007, p. 719.

20. See, for example, Acquisti and Gross (2006); Fuchs (2009); Park (2013); Eslami, et al. (2015); and, Wang, et al. (2011).

21. I again would like to thank an anonymous reviewer for this point.

22. Braman, 2006, p. 25.

23. Informational power in this exact form has been a driver in some recent international conflicts, particularly ones involving the knowledge and informational basis associated with the development of nuclear weaponry.

24. Braman, 2006, p. 26.

25. Eslami, et al., 2015, p. 153.

26. Eslami, et al., 2015, p. 161.

27. Raymond, 2010, paragraph 2.

28. Singel, 2010, paragraph 10.

29. Baker, 2011, p. 10.

30. Baker, 2011, p. 11.

31. http://www.tweetdeleter.com/en.

32. http://www.tweeteraser.com/.

33. Roose, 2015, paragraph 9.

 

References

Alessandro Acquisti and Ralph Gross, 2006. “Imagined communities: Awareness, information sharing, and privacy on the Facebook,” In: George Danezis and Philippe Golle (editors). Privacy enhancing technologies. Lecture Notes in Computer Science, volume 4258. Berlin: Springer, pp. 36–58.
doi: http://doi.org/10.1007/11957454_3, accessed 31 May 2016.

Anders Albrechtslund, 2008. “Online social networking as participatory surveillance,” First Monday, volume 13, number 3, at http://firstmonday.org/article/view/2142/1949, accessed 31 May 2016.

Paul Anderson, 2012. Web 2.0 and beyond: Principles and technologies. Boca Raton, Fla.: CRC Press.

Mark Andrejevic, 2009. “Critical media studies 2.0: An interactive upgrade,” Interactions: Studies in Communication & Culture, volume 1, number 1, pp. 35–51.
doi: http://doi.org/10.1386/iscc.1.1.35/1, accessed 31 May 2016.

Antoinette Baker, 2011. “Ethical considerations in Web 2.0 archives,” San José State University School of Information Student Research Journal, volume 1, issue 1, at http://scholarworks.sjsu.edu/slissrj/vol1/iss1/4/, accessed 31 May 2016.

Jigmey Bhutia, 2015. “Facebook makes all public posts searchable,” International Business Times (23 October), at http://www.ibtimes.co.uk/facebook-makes-all-public-posts-searchable-1525304, accessed 8 January 2016.

Sandra Braman, 2006. Change of state: Information, policy, and power. Cambridge, Mass.: MIT Press.

Shaquille Brewster, 2014. “After Ferguson: Is ‘hashtag activism’ spurring policy changes?” NBC News (12 December), at http://www.nbcnews.com/politics/first-read/after-ferguson-hashtag-activism-spurring-policy-changes-n267436, accessed 10 February 2015.

Axel Bruns, 2008a. Blogs, Wikipedia, Second Life, and beyond: From production to produsage. New York: Peter Lang.

Axel Bruns, 2008b. “The Future is user-led: The path towards widespread produsage,” Fibreculture Journal, issue 11, at http://eleven.fibreculturejournal.org/fcj-066-the-future-is-user-led-the-path-towards-widespread-produsage/, accessed 31 May 2016.

Axel Bruns, 2007. “Produsage: Towards a broader framework for user-led content creation,” C&C '07: Proceedings of the Sixth ACM SIGCHI Conference on Creativity & Cognition, pp. 99–106.
doi: http://doi.org/10.1145/1254960.1254975, accessed 31 May 2016.

Axel Bruns, 2006. “Towards produsage: Futures for user-led content production,” In: Fay Sudweeks, Herbert Hrachovec, and Charles Ess (editors). Cultural attitudes towards communication and technology 2006; version at http://eprints.qut.edu.au/4863/, accessed 31 May 2016.

David Carr, 2012. “Hashtag activism, and its limits,” New York Times (25 March), at http://www.nytimes.com/2012/03/26/business/media/hashtag-activism-and-its-limits.html, accessed 10 February 2015.

Manuel Castells, 2009. Communication power. New York: Oxford University Press.

Nicole Cohen, 2008. “The valorization of surveillance: Towards a political economy of Facebook,” Democratic Communiqué, volume 22, number 1, at http://journals.fcla.edu/demcom/article/view/76495, accessed 31 May 2016.

Edward Comor, 2011. “Contextualizing and critiquing the fantastic prosumer: Power, alienation and hegemony,” Critical Sociology, volume 37, number 3, pp. 309–327.
doi: http://doi.org/10.1177/0896920510378767, accessed 31 May 2016.

Mark Coté and Jennifer Pybus, 2007. “Learning to immaterial labour 2.0: MySpace and social networks,” Ephemera, volume 7, number 1, pp. 88–106, and at http://instruct.uwo.ca/mit/3771-001/Immaterial_Labour2_0.pdf, accessed 31 May 2016.

Lorrie Cranor, 2003. “P3P: Making privacy policies more useful,” IEEE Security & Privacy, volume 1, number 6, pp. 50–55.
doi: http://doi.org/10.1109/MSECP.2003.1253568, accessed 31 May 2016.

José van Dijck, 2013. The culture of connectivity: A critical history of social media. New York: Oxford University Press.

José van Dijck, 2012. “Facebook as a tool for producing sociality and connectivity,” Television & New Media, volume 13, number 2, pp. 160–176.
doi: http://dx.doi.org/10.1177/1527476411415291, accessed 31 May 2016.

José van Dijck, 2009. “Users like you? Theorizing agency in user-generated content,” Media, Culture & Society, volume 31, number 1, pp. 41–58.
doi: http://dx.doi.org/10.1177/0163443708098245, accessed 31 May 2016.

José van Dijck and David Nieborg, 2009. “Wikinomics and its discontents: A critical analysis of Web 2.0 business manifestos,” Media, Culture & Society, volume 11, number 5, pp. 855–874.
doi: http://dx.doi.org/10.1177/1461444809105356, accessed 31 May 2016.

Nicole B. Ellison, Charles Steinfield and Cliff Lampe, 2007. “The benefits of Facebook ‘friends:’ Social capital and college students’ use of online social network sites,” Journal of Computer-Mediated Communication, volume 12, number 4, pp. 1,143–1,168.
doi: http://dx.doi.org/10.1111/j.1083-6101.2007.00367.x, accessed 31 May 2016.

Motahhare Eslami, Aimee Rickman, Kristen Vaccaro, Amirhossein Aleyasen, Andy Vuong, Karrie Karahalios, Kevin Hamilton, and Christian Sandvig, 2015. “‘I always assumed that I wasn’t really that close To [her]’: Reasoning about invisible algorithms in the News Feed,” CHI ’15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 153–162.
doi: http://dx.doi.org/10.1145/2702123.2702556, accessed 31 May 2016.

Natalie Fenton and Veronica Barassi, 2011. “Alternative media and social networking sites: The politics of individuation and political participation,” Communication Review, volume 14, number 3, pp. 179–196.
doi: http://dx.doi.org/10.1080/10714421.2011.597245, accessed 31 May 2016.

Jan Fernback and Zizi Papacharissi, 2007. “Online privacy as legal safeguard: The relationship among consumer, online portal, and privacy policies,” New Media & Society, volume 9, number 5, pp. 715–734.
doi: http://dx.doi.org/10.1177/1461444807080336, accessed 31 May 2016.

John Fiske, 1987. Television culture. New York: Methuen.

John French and Bertram Raven, 1959. “The bases of social power,” In: Dorwin Cartwright (editor). Studies in social power. Research Center for Group Dynamics Series, publication number 6. Ann Arbor: Research Center for Group Dynamics, Institute for Social Research, University of Michigan, pp. 150–167.

Christian Fuchs, 2014. Social media: A critical introduction. Los Angeles: Sage.

Christian Fuchs, 2011. “Web 2.0, prosumption, and surveillance,” Surveillance & Society, volume 8, number 3, at http://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/4165, accessed 31 May 2016.

Christian Fuchs, 2009. “Social networking sites and the surveillance society: A critical case study of the usage of studiVZ, Facebook, and MySpace by students in Salzburg in the context of electronic surveillance,” ICT&S Center (University of Salzburg) Research Report, at http://fuchs.uti.at/wp-content/uploads/SNS_Surveillance_Fuchs.pdf, accessed 31 May 2016.

Christian Fuchs, 2008. Internet and society: Social theory in the information age. New York: Routledge.

Robert Gehl, 2011. “The archive and the processor: The internal logic of Web 2.0,” New Media & Society, volume 13, number 8, pp. 1,228–1,244.
doi: http://dx.doi.org/10.1177/1461444811401735, accessed 31 May 2016.

Paolo Gerbaudo, 2012. Tweets and the streets: Social media and contemporary activism. London: Pluto Press.

Tarleton Gillespie, 2014. “The relevance of algorithms,” In: Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot (editors). Media technologies: Essays on communication, materiality, and society. Cambridge, Mass.: MIT Press, pp. 167–194.
doi: http://dx.doi.org/10.7551/mitpress/9780262525374.003.0009, accessed 31 May 2016.

Tarleton Gillespie, 2012. “Can an algorithm be wrong?” Limn, at http://limn.it/can-an-algorithm-be-wrong/, accessed 6 May 2016.

Tarleton Gillespie, 2010. “The politics of ‘platforms’,” New Media & Society, volume 12, number 3, pp. 347–364.
doi: http://dx.doi.org/10.1177/1461444809342738, accessed 31 May 2016.

Eszter Hargittai and Aaron Shaw, 2015. “Mind the skills gap: The role of Internet know-how and gender in differentiated contributions to Wikipedia,” Information, Communication & Society, volume 18, number 4, pp. 424–442.
doi: http://dx.doi.org/10.1080/1369118X.2014.957711, accessed 31 May 2016.

Kylie Jarrett, 2008. “Interactivity is evil! A critical investigation of Web 2.0,” First Monday, volume 13, number 3, at http://firstmonday.org/article/view/2140/1947, accessed 1 September 2013.

Adrianne Jeffries, 2011. “Twitter says it’s not censoring Occupy Wall Street — People really are more concerned with Doritos right now,” Observer (26 September), at http://observer.com/2011/09/twitter-says-its-not-censoring-occupy-wall-street-people-really-are-talking-more-doritos/, accessed 10 February 2015.

Henry Jenkins and Mark Deuze, 2008. “Convergence culture,” Convergence, volume 14, number 1, pp. 5–12.
doi: http://dx.doi.org/10.1177/1354856507084415, accessed 31 May 2016.

Carlos Jensen and Colin Potts, 2004. “Privacy policies as decision-making tools: An evaluation of online privacy notices,” CHI ’04: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 471–478.
doi: http://dx.doi.org/10.1145/985692.985752, accessed 31 May 2016.

Robert Keohane and Joseph Nye, 1998. “Power and interdependence in the information age,” Foreign Affairs, volume 77, number 5, pp. 81–94, and at https://www.foreignaffairs.com/articles/1998-09-01/power-and-interdependence-information-age, accessed 31 May 2016.

Ayush Khanna, 2012. “Nine out of ten Wikipedians continue to be men: Editor survey,” Wikimedia blog (27 April), at https://blog.wikimedia.org/2012/04/27/nine-out-of-ten-wikipedians-continue-to-be-men/, accessed 8 January 2016.

Aniket Kittur, Ed Chi, Bryan Pendleton, Bongwon Suh, and Todd Mytkowicz, 2007. “Power of the few vs. wisdom of the crowd: Wikipedia and the rise of the bourgeoisie,” at http://www-users.cs.umn.edu/~echi/papers/2007-CHI/2007-05-altCHI-Power-Wikipedia.pdf, accessed 31 May 2016.

Ganaele Langlois, Fenwick McKelvey, Greg Elmer, and Kenneth Werbin, 2009. “Mapping commercial Web 2.0 worlds: Towards a new critical ontogenesis,” Fibreculture Journal, number 14, at http://fourteen.fibreculturejournal.org/fcj-095-mapping-commercial-web-2-0-worlds-towards-a-new-critical-ontogenesis/, accessed 31 May 2016.

Gilad Lotan, 2011. “Data reveals that ‘occupying’ Twitter trending topics is harder than it looks!” (12 October), at http://giladlotan.com/2011/10/data-reveals-that-occupying-twitter-trending-topics-is-harder-than-it-looks/, accessed 10 February 2015.

Steven Lukes, 1974. Power: A radical view. London: Macmillan.

Marshall McLuhan, 1994. Understanding media: The extensions of man. Cambridge, Mass.: MIT Press.

Joseph Nye, 2008. “Public diplomacy and soft power,” Annals of the American Academy of Political and Social Science, volume 616, number 1, pp. 94–109.
doi: http://dx.doi.org/10.1177/0002716207311699, accessed 31 May 2016.

Matt Raymond, 2010. “How tweet it is! Library acquires entire Twitter archive,” Library of Congress Blog (14 April), at http://blogs.loc.gov/loc/2010/04/how-tweet-it-is-library-acquires-entire-twitter-archive/, accessed 31 May 2016.

Tim O’Reilly, 2007. “What is Web 2.0: Design patterns and business models for the next generation of software,” Communications & Strategies, number 1, pp. 17–37.

Tim O’Reilly, 2005. “Web 2.0: Compact definition?” (1 October), at http://radar.oreilly.com/2005/10/web-20-compact-definition.html, accessed 31 May 2016.

Felipe Ortega, Jesus M. Gonzalez-Barahona, and Gregorio Robles, 2008. “On the inequality of contributions to Wikipedia,” HICSS ’08: Proceedings of the 41st Annual Hawaii International Conference on System Sciences, p. 304.
doi: http://dx.doi.org/10.1109/HICSS.2008.333, accessed 31 May 2016.

Michael Owens, 2010. “Why the 3,200 tweet user_timeline limit, and will it ever change?” Twitter Developers Forum (11 July), at https://dev.twitter.com/discussions/276, accessed 4 June 2012.

Yong Jin Park, 2013. “Digital literacy and privacy behavior online,” Communication Research, volume 40, number 2, pp. 215–236.
doi: http://dx.doi.org/10.1177/0093650211418338, accessed 31 May 2016.

Søren Mørk Petersen, 2008. “Loser generated content: From participation to exploitation,” First Monday, volume 13, number 3, at http://firstmonday.org/article/view/2141/1948, accessed 30 July 2012.
doi: http://dx.doi.org/10.5210/fm.v13i3.2141, accessed 31 May 2016.

Trevor Pinch and Wiebe Bijker, 1984. “The social construction of facts and artefacts: Or how the sociology of science and the sociology of technology might benefit each other,” Social Studies of Science, volume 14, number 3, pp. 399–441.
doi: http://dx.doi.org/10.1177/030631284014003004, accessed 31 May 2016.

Nicholas Proferes, 2015. “Informational power on Twitter: A mixed-methods exploration of user knowledge and technological discourse about information flows,” Ph.D. dissertation, University of Wisconsin-Milwaukee, at http://dc.uwm.edu/etd/909/, accessed 31 May 2016.

Anabel Quan-Haase and Alyson Young, 2010. “Uses and gratifications of social media: A comparison of Facebook and instant messaging,” Bulletin of Science, Technology & Society, volume 30, number 5, pp. 350–361.
doi: http://dx.doi.org/10.1177/0270467610380009, accessed 31 May 2016.

George Ritzer and Nathan Jurgenson, 2010. “Production, consumption, prosumption: The nature of capitalism in the age of the digital ‘prosumer’,” Journal of Consumer Culture, volume 10, number 1, pp. 13–36.
doi: http://dx.doi.org/10.1177/1469540509354673, accessed 31 May 2016.

Kevin Roose, 2015. “Meet the tweet-deleters: People who are making their Twitter histories self-destruct,” Fusion (19 February), at http://fusion.net/story/50322/meet-the-tweet-deleters-people-who-are-making-their-twitter-histories-self-destruct/, accessed 23 February 2015.

Marisol Sandoval, 2012. “A critical empirical case study of consumer surveillance on Web 2.0,” In: Christian Fuchs, Kees Boersma, Anders Albrechtslund, and Marisol Sandoval (editors). Internet and surveillance: The challenges of Web 2.0 and social media. New York: Routledge, pp. 147–169.

Trebor Scholz, 2008. “Market ideology and the myths of Web 2.0,” First Monday, volume 13, number 3, at http://firstmonday.org/article/view/2138/1945, accessed 31 May 2016.

Saqib Shah, 2016. “You’re not sharing enough personal info, Facebook worries,” Digital Trends (9 April), at http://www.digitaltrends.com/social-media/facebook-is-worried-youre-not-sharing-enough-personal-posts/, accessed 2 May 2016.

Clay Shirky, 2011. “Political power of social media: Technology, the public sphere, and political change,” Foreign Affairs, volume 90, at https://www.foreignaffairs.com/articles/2010-12-20/political-power-social-media, accessed 31 May 2016.

Clay Shirky, 2010. Cognitive surplus: Creativity and generosity in a connected age. New York: Penguin Press.

Tom Simonite, 2013. “The decline of Wikipedia,” MIT Technology Review (22 October), at http://www.technologyreview.com/featuredstory/520446/the-decline-of-wikipedia/, accessed 8 January 2016.

Ryan Singel, 2010. “Library of Congress archives Twitter history, while Google searches it,” Wired (14 April), at http://www.wired.com/epicenter/2010/04/loc-google-twitter/, accessed 31 May 2016.

Don Tapscott and Anthony Williams, 2008. Wikinomics: How mass collaboration changes everything. New York: Portfolio.

Tiziana Terranova, 2004. Network culture: Politics for the information age. London: Pluto Press.

Tiziana Terranova, 2000. “Free labor: Producing culture for the digital economy,” Social Text, volume 18, number 2, pp. 33–58, and at http://muse.jhu.edu/article/31873, accessed 31 May 2016.

Alvin Toffler, 1980. The third wave. New York: Morrow.

Twitter.com, 2010. “To trend or not to trend ...,” Twitter blogs (8 December), at https://blog.twitter.com/2010/trend-or-not-trend, accessed 10 February 2015.

Yang Wang, Gregory Norcie, Saranga Komanduri, Alessandro Acquisti, Pedro Giovanni Leon, and Lorrie Cranor, 2011. “‘I regretted the minute I pressed share’: A qualitative study of regrets on Facebook,” SOUPS ’11: Proceedings of the Seventh Symposium on Usable Privacy and Security, article number 10.
doi: http://dx.doi.org/10.1145/2078827.2078841, accessed 31 May 2016.

Max Weber, 2009. From Max Weber: Essays in sociology. Translated, edited, with an introduction by H.H. Gerth and C. Wright Mills with a new preface by Bryan S. Turner. New York: Routledge.

Dennis Wrong, 1995. Power: Its forms, bases, and uses. New Brunswick, N.J.: Transaction Publishers.

Michael Zimmer, 2015. “The Twitter Archive at the Library of Congress: Challenges for information practice and information policy,” First Monday, volume 20, number 7, at http://firstmonday.org/article/view/5619/4653, accessed 26 August 2015.
doi: http://dx.doi.org/10.5210/fm.v20i7.5619, accessed 31 May 2016.

Michael Zimmer, 2008. “Preface: Critical perspectives on Web 2.0,” First Monday, volume 13, number 3, at http://firstmonday.org/article/view/2137/1943, accessed 31 May 2016.

 


Editorial history

Received 27 May 2016; accepted 28 May 2016.


This paper is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Web 2.0 user knowledge and the limits of individual and collective power
by Nicholas Proferes.
First Monday, volume 21, number 6.
http://firstmonday.org/ojs/index.php/fm/article/view/6793/5523
doi: http://dx.doi.org/10.5210/fm.v21i6.6793






© First Monday, 1995-2017. ISSN 1396-0466.