Pricey privacy: Framing the economy of information in the digital age
First Monday

by Federica Fornaciari



Abstract
As new information technologies become ubiquitous, individuals are often prompted to rethink disclosure. Available media narratives may influence one’s understanding of the benefits and costs related to sharing personal information. This study, guided by frame theory, undertakes a Critical Discourse Analysis (CDA) of media discourse developed to discuss the privacy concerns related to the corporate collection and trade of personal information. The aim is to investigate the frames — the central organizing ideas — used in the media to discuss such an important aspect of the economics of personal data. The CDA explored 130 articles published in the New York Times between 2000 and 2012. Findings reveal that the articles utilized four frames: confusion and lack of transparency, justification and private interests, law and self-regulation, and commodification of information. Articles often used episodic framing, discussing specific instances of infringement rather than offering broader thematic accounts. Media coverage tended to frame personal information as a commodity that may be traded, rather than as a fundamental value.

Contents

1. Introduction
2. Literature review
3. Method
4. Results and discussion
5. Conclusions

 


 

1. Introduction

In modern Western societies, profiling has become an invaluable source of profit for corporations that collect, aggregate, and trade users’ data. Moreover, with the wide spread of information technologies, marketing techniques such as price discrimination (the inference of one’s ability and willingness to pay a specific price, and the consequent adjustment of prices) and targeted advertising (the personalization of ads based upon one’s interests) have become more sophisticated, often attracting the attention of researchers and journalists.

From an academic perspective, scholars have progressively investigated the evolution of privacy in modern technological environments in an attempt to reveal the relationships between the affordances of technology, the possibilities for surveillance, and the risks of privacy losses (Nissenbaum, 2010). From a journalistic perspective, the news has often reported on instances of privacy infringement due to marketing practices. In February 2012, for example, the magazine Forbes denounced Target for collecting and analyzing data about its customers and, as a result, for sending coupons for baby items to a teenage girl based on her “pregnancy scores” (Hill, 2012). In reaction to increasingly nosy marketing strategies enabled by modern technologies, “privacy is dead” has become a common catchphrase in the news.

Currently, privacy losses often emerge as infringements of situational expectations of flow (Nissenbaum, 2010). In other words, when individuals share information they do so with specific expectations as to what the appropriate flow of that information should be. These expectations are inherently contextual, as they depend upon the context of delivery, and they are often infringed upon for monetary purposes. The Target story is an example of this context migration: information revealed in one context is collected, decontextualized, and re-used in unexpected ways. Nissenbaum (1998; 2010) developed the framework of contextual integrity to explain that social and cultural norms guide the proper use of information and influence how individuals develop expectations of flow. Informational norms are also connected to overarching frames, individual and collective, that inform how one understands the appropriate flow of personal data and suitable disclosure. However, technological developments often encourage the renegotiation of disclosure, as they redeploy the features of the communication environment. In addition, new technologies allow data to be handled in different ways. Information, in computerized societies, is greased and easily slides from context to context (Moor, 1997). In fact, computers, the Internet, and social media modified the infosphere by altering the architecture of interaction and the features of the communication environment. Modern information technologies, in other words, influenced the frictions that hamper or facilitate the flow of information (Floridi, 2005). In doing so, they re-ontologized the space of information, challenging existing management strategies and norms of flow (Floridi, 2005; Nissenbaum, 2010). Resulting tendencies and tensions may emerge in a variety of social forums, including mainstream media.

Lately, online corporate managers have often claimed the death of privacy (Sprenger, 1999). Facebook CEO Mark Zuckerberg has insistently argued that informational norms have changed. He has emphasized that social media are platforms for sharing and for social connection, not for privacy. Yet, research has revealed that people still value privacy as a fundamental aspect of their everyday lives (Nippert-Eng, 2010). Research has also suggested that some companies may have remarkable monetary interests in collecting and aggregating information to implement practices of customized marketing (Odlyzko, 2003; 2007; Solove, 2001; 2005). In this context, mainstream media may have provided frames to discuss the corporate collection and trade of users’ data, thereby informing how individuals approach disclosure. The current study, guided by frame theory (Goffman, 1974; Entman, 1993), implemented a Critical Discourse Analysis (CDA) of recent media discourse to investigate the frames used to depict the corporate collection and use of information, the actors involved, and the values at stake.

Possibly, mainstream media crafted frames to discuss how private companies collect and use data, thereby providing readers with “information processing schemata” (Goffman, 1974) to understand an important aspect of the economics of personal information. By adopting and renegotiating available frames, individuals could further understand and discuss the complex relationship between privacy, disclosure, and the economic interests of the sharing industry. Even though media coverage of the economics of privacy could help explore the power relationships and particular interests of the sharing industry (Fuchs, 2012), CDA has not yet been used to investigate media framing of privacy. The current research project addresses this gap by exploring how media coverage has framed the complex relationship between information sharing and corporate (ab)use of personal data.

The current study is relevant to recent directions in communication research for a number of reasons. First, scholarship has paid increasing attention to privacy, investigating it through a number of methods and theoretical perspectives. Even though media discourse is an important platform that informs and reflects the formation of public opinion and culture, research has not yet addressed media framing of privacy. In addition, privacy is arguably a fundamental value of modern democracies. Its relationship with profit, though, may reduce its intrinsic worth to a more instrumental one (Moor, 1997); such a tendency may emerge from media narratives of privacy as well. Exploring the economics of personal information may help reveal the political economy of private interests that take part in the database industry. When new technologies emerge, individuals often lack the vocabulary to describe, approach, and understand them. This study provides a stepping stone toward exploring how mainstream media have created or adapted their language to frame and reframe the boundaries of a concept, privacy, whose value is often negotiated against other values.

 

++++++++++

2. Literature review

2.1. Privacy, norms, and media discourse

The “right to privacy”, in modern Western societies, was brought into focus by Warren and Brandeis’ (1890) landmark article, published in the Harvard Law Review to denounce the unacceptable intrusion of photography into the private sphere and the increasing tendency of the sensationalist press to broadcast “sacred precincts of private and domestic life” [1]. In that paper, they argued that political, social, and economic changes often stimulate the need for new laws to protect fundamental rights. These are necessary to address new risks of infringement that may emerge and thereby meet the requirements of an evolving society. In the late nineteenth century, the development of photography increasingly allowed information to flow in unexpected ways, making it easier to broadcast private events and often infringing upon one’s expectations of privacy. Needless to say, technology has evolved at a fast pace, often reshaping the features of the communication environment. Political, social, cultural, and economic systems have also changed, perhaps encouraging the development of new norms for sharing information. These evolutions may also emerge in media discourse of privacy and economy across time.

Privacy is an elastic concept (Allen, 1988). Its boundaries constantly evolve, often reflecting changes of the communication environments. In modern democracies, discussions around privacy have become ubiquitous, partly as a consequence of the evolution of technology that made it increasingly easy to decontextualize, collect, aggregate, and spread information (Nissenbaum, 2010; Solove, 2005). Revealing an increasing attention to privacy, scholars developed theoretical and empirical approaches tackling its complexity from a variety of perspectives. Some focused on the instrumental values of privacy considering it as a necessary condition to achieve other benefits or avoid practical harm (Dwyer, et al., 2007; Waldo, et al., 2007). Others recognized the intrinsic value of privacy as a necessary condition for human wellbeing, autonomy, liberty, identity, dignity, freedom, and psychological integrity (e.g., Altman, 1977; Floridi, 2005; Moor, 1997; Nissenbaum, 2010; Rachels, 1975).

Focusing on the role of informational norms of flow for privacy management, Nissenbaum (2004; 2010) suggested the use of a new framework of contextual integrity based on the idea that contexts of delivery and situational cues influence one’s expectations of privacy. In particular, Nissenbaum (2010) explained that one might disclose information in a specific circle, but desire to keep it private from other circles. In other words, information is not either secret or overt. There are, instead, many nuances of secrecy and disclosure. And there are norms that help one to understand and manage these nuances. Contextual norms depend on shared culture and social experiences rather than on a written set of rules. These norms evolve as the features and the architectures of the communication environments change.

The evolution of norms is often reflected in, and perhaps influenced by, personal and collective narratives that contribute to framing norms, roles, expectations, actions, and practices. By deeply investigating and explaining meanings, and by generating in-depth interpretations of the norms that emerge in media coverage of privacy, CDA contributes to understanding how informational norms of sharing develop in today’s society (e.g., Fairclough, 2000; van Dijk, 1997). CDA of media narratives, informed by frame analysis, may thus contribute to furthering our understanding of privacy in modern communication environments. Media outlets, in fact, have the potential to inform individuals by providing them with “models of conversational interactions” [2] that one may adopt to understand and discuss privacy. By offering interpretive lenses through which to address the norms of sharing and their evolution, media frames may prompt readers to rethink their understanding of flow, their social expectations, their attitudes, and their sharing behaviors. Media narratives, in other words, may have “considerable cultural salience” [3]. CDA investigates these narratives with the thoughtfulness and depth necessary to provide a conceptual map that describes them. Discourse analysis of media texts, informed by framing theory (Entman, 1991), is therefore an important approach for understanding the complexity of privacy in modern Western societies.

2.2. Frame theory

Frame theory presents an important and interdisciplinary model for research. And yet, literature on framing is scattered and fractured, and there is not a unifying agreement on its methodological paradigm (Entman, 1993). In the attempt to define or apply framing analysis, scholars have adopted a variety of theoretical perspectives including critical, constructionist, and cognitivist approaches (Matthes, 2009). Goffman (1974) introduced framing theory by suggesting that individuals tend to perceive complex events and issues using primary frameworks, or schemata, that inform how one understands and describes the complexity of reality. Frames, in other words, are “principles of selection” (Gitlin, 2003) that manufacture the understanding of reality and structure one’s experience, often encouraging emphases and omissions. Different interests, as well as underlying ideologies, may encourage different framing of the same object, giving salience to certain features and neglecting others. Entman (1991, 1993) further described frames as internalized guides, necessary principles for information processing. He argued that repetition, placement, and reinforcement allow some ideas to become more readily available and discernible than others. Frames, obviously, do not invent reality. However, they offer lenses and perspectives that magnify or shrink selected existing objects suggesting their importance and political relevance.

Frame theory explains that any entity, including the media, has the ability to frame reality, emphasizing one explanation and focusing on one characteristic over others. As a consequence, frames become the synthesis of words, images, and thoughts that individuals rely upon when making sense of complex events, concepts, and processes (Fairhurst and Sarr, 1996; Gitlin, 2003). Frames thus provide conceptual maps and contexts for the classification and understanding of messages. Thereby, frames build upon existing knowledge and contribute to informing individuals’ comprehension of their environments.

As this study investigates frames of privacy that emerge in media discourse, Entman’s (1993) definition of framing was considered the most suitable. Entman’s definition is also appropriate for the scope of this study because it connects media frames with causal interpretation, value judgments, and policy recommendation. These are fundamental aspects to investigate when exploring media coverage of the economics of personal information and of privacy.

Informed by Entman’s (1993) definition of framing, the current study implemented CDA to address the following research question (RQ):

RQ: How did mainstream media discourse frame the economics of personal data in the digital age?

 

++++++++++

3. Method

CDA adopts a critical approach, suggesting that discourse may be a signifier for power relations, ideologies, and hegemony (van Dijk, 1997); it is thus suitable for a study of media discourse that implements Entman’s perspective on framing. In particular, CDA adopts a skeptical attitude toward analyzing narratives, in an attempt to locate discourse within overarching ideological backgrounds (Fairclough, 2000). CDA studies texts to identify dynamics of power, which may also emerge from mediated discourses of privacy that tackle the economic interests of personal information. In doing so, CDA starts from the belief that a number of interest groups may be more or less involved in an issue, and that their relationships go beyond the simple dominator/oppressed dichotomy (Hammersley, 1992). Using CDA to explore how the media discussed the corporate practice of collecting and using personal data thus also helped identify the interest groups that play a role in the database industry. The connection between the use of language and the exercise of power may be cryptic. Therefore, when analyzing frames of the economics of privacy, CDA encouraged questioning their neutrality and legitimacy. A critical approach also motivated a focus on omissions, latent content, and connotatively charged words that contributed to revealing practices of power. The ability to detect power relationships, in fact, is crucial to implementing successful and meaningful CDA (Fairclough, 2000).

To address the research question, the current CDA explored 130 New York Times editorials and articles published between 1 January 2000 and 31 December 2012. Articles were retrieved from the database LexisNexis using the following string of keywords: “privacy AND price OR econom! OR marketing OR advertis! OR target! OR behavioral.” The initial selection resulted in 522 articles for the 13-year period. Then, the researcher randomly selected 10 articles per year. After a preliminary screening, all articles were considered appropriate for the study, thus yielding a sample of 130 articles for the analysis.

The New York Times is a nationally and internationally influential newspaper (Clark and Illman, 2003). It has been considered a leader in a number of areas, including technology, and it has often influenced other media outlets (Weiss, 1974). The New York Times’ writers are prominent thinkers who convey important ideas and discuss the practices and implications of emerging technologies (Nelkin, 1995). Due to its large readership, its national and international reach, and its influential power, the New York Times was considered a fundamental outlet with which to begin studying media frames of the economics of privacy. Even though it is understood that limiting the sample to articles collected from a single source may have partly limited the scope of this study, a manageable sample also allowed a deeper investigation of the frames implemented to discuss the economics of privacy. Such depth made it possible to explore and further understand the public discussion around privacy and economics, thus providing a valuable contribution to existing privacy scholarship.

The timeframe 2000–2012 was selected to capture the narratives of the economics of privacy that developed during years of considerable growth of the Internet, intensified by the introduction and wide spread of social media. Even though the Internet began spreading publicly in the early 1990s, its potential for commercial use became more tangible in the early 2000s (Evans, 2009). In addition, social media became increasingly popular during the 2000s: SixDegrees was launched in 1997, Friendster in 2002, MySpace in 2003, Facebook in 2004, and Twitter in 2006, increasingly encouraging users to generate content and share information (boyd and Ellison, 2007).

For the purposes of the current CDA, the selected articles were read and reread carefully, several times. During each reading, the researcher focused on language use and tone, recurring keywords, main themes, silences, and absences (Fairclough, 2000; van Dijk, 1997), also paying particular attention to how problems were constructed, connected, and framed (Tonkiss, 2004). In addition, the researcher investigated what metaphors and catchphrases mainstream media used when talking about the economy of information, and whether specific dynamics of power emerged. To do so, the researcher looked for variations within the text to spot conflicting ideas, internal hesitations, or inconsistencies (Tonkiss, 2004). Entman’s (1993) definition of frame theory informed this study in three ways. It was prescriptive, as it guided the conceptualization and the design of the project. It was analytical, as it informed the interpretation of results. And it was methodological, as it helped implement the theoretical approach. In addition, since it is fundamental to contextualize texts when implementing CDA (van Dijk, 1997), the passages were also interpreted by placing them into the context of the political economy of capitalism, keeping in mind that “critical discourse analysis challenges taken-for-granted assumptions by putting them into the context of power structures in society” [4].

The researcher attempted to control for subjectivity in a number of ways: first, by providing extensive examples of how the themes emerged from the text; second, by clearly defining the standards of the project, grounding it in theory and data, and using rigor in data collection and analysis. And yet, it is understood that self-reflexivity is still a component that needs to be accounted for in qualitative research. This project, however, does not claim generalizability. Rather, it focuses on the qualitative potential for discovery that results from its in-depth approach (Fairclough, 2000; van Dijk, 1997). The interpretive frames that emerged were contextualized, when possible, within existing research in an attempt to better explain the scope and relevance of the findings. In addition, the researcher problematized the core concepts investigated and accounted for examples that did not fit the pattern (Hammersley, 1997; Silverman, 2005), trying to assume the viewpoints of different audiences.

 

++++++++++

4. Results and discussion

After several careful readings of the selected articles, four main frames emerged from the CDA: confusion and lack of transparency, justification and private interests, law and self-regulation, and commodification of information. In the following sections, each frame is discussed in turn. Textual examples are provided to show how the frames emerged and to explain their relationship — or lack thereof — with existing literature.

4.1. Frame one: Confusion and lack of transparency

In an attempt to find a balance between behavioral marketing and individual privacy, Berger (2011) suggested that consumers must be given the possibility to evaluate the tradeoffs of disclosure and to opt out of collection. The implications of their sharing behaviors should be clear, user-friendly, and concise (Berger, 2011). And yet, transparency may become an issue, as it risks damaging the economic interests of private companies (Fuchs, 2012). The main obstacle users encounter in making informed decisions, in fact, is that privacy policies are often complex and obscure, and that some companies modify them without notice. Nissenbaum (2004) further explained that the scarce usability of privacy policies might reflect the existence of different economic interests related to privacy. She added that, “outside the legal arena, norms of decency, etiquette, sociability, convention, and morality frequently address appropriateness and distribution of information” [5]. Mainstream media represent one of the social planes where a discussion about “appropriateness” may take place. For example, the media describe instances in which personal information is distributed more or less appropriately, and also debate whether privacy policies reflect or contradict the norms of flow.

When discussing the economy of privacy, most of the articles in the sample highlighted the lack of control that individuals have over their personal data. These articles suggested that confusion about how information is used is the main reason individuals lack control. Generally, such confusion was explained as a consequence of the complex wording of existing privacy policies, as has also emerged from previous research (Fernback and Papacharissi, 2007; Fuchs, 2011; Järvinen, 2005). In particular, the articles analyzed in the current study suggested that most users would be concerned about their personal information if they had a clear understanding of how private companies might use personal data, but “most Internet users have no way to know the (advertising) networks exist and what they do.” [6] Lack of transparency is a recurring critique against online corporations. This frame was also evident in statements such as,

Most consumers have little idea that unseen advertising networks on the Internet track their movements across multiple Web sites. Most do not know that Web sites can collect and sell data about them. [7]

Even though confusion was mostly related to lack of transparency, some articles also hinted that users could better understand privacy risks, but that most choose not to inform themselves and do not read the privacy policies. Some articles throughout the sample, for example, suggested that,

Many Internet users have no idea that records of their actions are being collected and used. They might find out about these practices only if they read the fine print of Web site privacy policies. [8]

Interestingly, articles that developed the frame “confusion and lack of transparency” often discussed particular instances of privacy loss by adopting an episodic approach, as Iyengar (1991) would define it. Iyengar, more specifically, suggested that the news frames events using episodic or thematic approaches. Episodic frames focus on specific instances and “tend to elicit individualistic rather than societal attributions of responsibility” [9]. Thematic frames present a larger perspective, providing context and background to the issue discussed and encouraging a societal attribution of responsibility. For example, an episodic story would highlight a specific instance in which an individual’s privacy was violated; a thematic one would adopt a broader perspective, contextualizing and discussing the issue of privacy. Within the current frame, Iyengar’s distinction becomes especially relevant. In particular, the use of episodic frames to discuss privacy loss within the “confusion and lack of transparency” frame may reflect a focus on individual rather than societal responsibility. In other words, it suggests that users may be held accountable for privacy loss because, most of the time, they fail to read available privacy policies.

Further, when the frame of confusion and lack of transparency surfaced, the tone of the articles was often negative, or pessimistic. An example of the negative tone implemented in media discourse, combined with an over-emphasis on users’ helplessness, emerges in excerpts such as the following:

And then, without your knowledge or consent, this information is often delivered to companies that market lists, who then sell them to anyone willing to pay the price. [10]

Previous research suggested that the use of a negative tone is particularly effective in capturing readers’ attention and prompting them to consider the issue a crucial problem (Sheafer, 2007). The lack of transparency of privacy policies is not a novel finding. Yet, statements such as the one above almost suggest that individuals are powerless against unexpected flows of personal information, as if privacy no longer existed. This becomes particularly interesting when compared to research revealing that negative framing has negative consequences on how one perceives the issue being discussed (Sheafer, 2007) and, overall, has a negative impact on public opinion (Schuck and de Vreese, 2006). Even though these findings are consistent with common expectations, the effects of news tone upon public opinion have not been tested for media discourse of privacy. Assuming that findings on the role of negative framing in public opinion (Sheafer, 2007; Schuck and de Vreese, 2006) extend to media discourse of privacy, one may hypothesize that over-exposure to negative frames of privacy primes individuals to lower their expectations thereof and to accept that “privacy is dead”. [11] These considerations open up important directions and questions for future research. In particular, it is crucial to further investigate whether and how existing frames, individual and collective, discuss or challenge the death of privacy. One may also speculate whether claiming the death of privacy is a hegemonic attempt to encourage users to give up on their privacy and keep disclosing — thus benefiting companies that profit from personal data.

4.2. Frame two: Justification and private interests

Existing privacy scholarship has revealed that private companies increasingly collect and aggregate data, more or less openly, to implement behavioral marketing practices such as targeted advertising and price discrimination. Research has often emphasized that companies tend to obfuscate these practices (Fernback and Papacharissi, 2007; Fuchs, 2011; Järvinen, 2005) and adopt a variety of strategies to collect and use information for profit (Berger, 2011; Fuchs, 2012; Odlyzko, 2003). Among these practices, price discrimination has emerged as the more controversial topic, as it entails various degrees of discrimination based on the inference of one’s willingness and ability to pay a specific price.

As behavioral marketing practices become ubiquitous, it is not surprising that these practices find their place in media discourse of privacy. Many articles, in fact, discussed behavioral practices claiming that companies “harvest” and “use” personal information for commercial purposes. The word “harvest,” in particular, suggests a very interesting metaphor that transforms information into something that is almost tangible. This process will be further discussed in the “commodification of information” frame.

Most of the time, articles in the frame “justification and private interests” discussed the implementation of targeted advertising and barely mentioned price discrimination. This is a considerable omission, especially considering that price discrimination has been identified as the more controversial and worrisome process (Odlyzko, 2003; 2007). By shifting the focus away from discriminatory practices and discussing less controversial — yet still unbalanced — practices such as targeted advertising, the articles missed a fundamental piece of the puzzle.

When discussing behavioral strategies, the articles tended to frame them as practices that foster the private interests of online companies. Yet, they only marginally mentioned the unbalanced distribution of interests between users and private companies. In particular, the articles often used direct quotes and paraphrases, in keeping with journalistic standards of objectivity (Hackett, 1984), to report on how advertising companies justify their behavioral practices. One excerpt, for example, reported,

Many executives in the advertising industry do not see anything wrong with online targeting. They argue that the practice benefits consumers, who see more relevant ads. And they contend that for consumers, relinquishing some innocuous personal data is a small tradeoff for free access to the rich content of the Internet, much of which is ad-supported. [12]

Articles developing the frame “justification and private interests” discussed the role of “tradeoffs” in information disclosure and collection. They also mentioned the role of “deliberate choice,” as information is “freely” and “willingly” provided by Internet users. Even though the “tradeoff” and “deliberate choice” arguments have often been discussed in connection with the disclosure of personal information, research has also tended to highlight the unbalanced relationships that emerge between users who disclose and private companies who collect, aggregate, and sell information (Berger, 2011; Odlyzko, 2003, 2007). Discussion of such unbalanced relationships emerged only marginally in the articles analyzed.

Also, research has highlighted that a deliberate choice should be an informed one. Lack of transparency, unfortunately, often prevents users from making informed decisions. The analyzed articles made it clear that arguments such as “tradeoffs” and “deliberate choice” were mostly quoted or paraphrased from advertising companies. And yet, journalists rarely provided counterarguments to challenge them and to facilitate a more comprehensive understanding of the problem. The New York Times articles developed the frames of “lack of transparency” and “justification” separately; the two hardly co-occurred within the same article, making it more difficult for readers to develop a complete map. Readers, in other words, have to connect the dots — actively engaging with the issue — to understand the big picture. Combining content and discourse analysis of a larger sample of articles, future research could further investigate this finding, for example by exploring the co-occurrence of different frames.

4.3. Frame three: Law and self-regulation

Before 1960, the right to privacy, widely recognized across American courts, evolved to include protection from four main torts: unreasonable intrusion upon one’s seclusion, public disclosure of private facts, appropriation of one’s name or likeness, and placing someone in a false light before the public. Tort law, though, punished violators after privacy infringements instead of preventing intrusions (Regan, 1995) and was recognized as insufficient to protect privacy in the era of modern technologies (McClurg, 2007). Admittedly, tort laws were created to protect privacy under an older paradigm that emphasized the importance of secrecy over that of contextual integrity. Therefore, these regulations were more suitable to protect concealed information than to foster appropriate information flows (Nissenbaum, 2010). Since the 1960s, the U.S. Congress has approved a number of laws and regulations for privacy protection, in particular to keep up with the evolution of technology (Regan, 1995). The challenge for legislators has been issuing laws sufficiently detailed to warrant the protection of a fundamental right, extensive enough to embrace the possibilities of technological development, and adequately sensitive to respond to culture-specific differences (McClurg, 2007) and to contextual expectations (Nissenbaum, 2010).

Consistent with existing research, the majority of the articles explained how existing laws and regulations are often not suitable to effectively regulate the use of personal information online. Even though some articles attempted to explain the intricacy of the issue, few offered an in-depth explanation that could fully inform readers and help them understand the complexity of contextual privacy protection. Coverage of the economics of privacy, in the sample analyzed, tended to flatten the discourse around laws. Articles implementing the law frame limited the discussion to only one or two specific areas at a time. This allowed journalists to provide more specific explanations of certain aspects of the law; yet it also limited the scope of the discussion, thereby providing a partial image of the issue. The following excerpt exemplifies how the legal aspects of privacy were typically covered in the New York Times,

The founders wrote the Fourth Amendment — guaranteeing protection against illegal search and seizure — at a time when people were most concerned about protecting the privacy of their homes and bodies. The amendment, and more recent federal laws, have been extended to cover telephone communications. Now work has to be done to give Internet activities the same level of privacy protection. [13]

Statements such as this framed the Internet as a specific, flat context for which privacy protection laws often do not exist, or still need to adapt. Similarly, most articles tended to frame the Internet as a homogeneous dimension, or a sum of (mostly undistinguished) activities. Yet research has consistently revealed that the Internet is a rather multidimensional reality that includes a number of distinct contexts. In other words, articles implementing the current frame often failed to depict the layered nature of the economics of online privacy, and did not adopt the perspective of contextual integrity (Nissenbaum, 2010). Few articles brought up contextual cues to problematize privacy protection. They suggested that some information, such as medical conditions and religious affiliation, is more sensitive than other information and deserves higher protection. In particular, one article argued that, “the legislation also would require consumers to opt-in to the collection of sensitive information like their medical condition or religious affiliation.” [14] This article mentioned the role of HIPAA in granting the protection of medical privacy and emphasized that sensitive information always deserves legal enforcement. However, these are limited distinctions that fail to represent the complexity of contextual integrity (Nissenbaum, 2010).

Among the articles implementing legal or regulatory frames, many suggested that the best alternative to guarantee the safety of personal information would be the implementation of strategic self-regulation. And yet, these articles failed to acknowledge that self-regulation often means non-participation, as most Web sites force users to accept their privacy policy before creating an account. Moreover, none of the articles analyzed discussed possible strategies to protect personal information online, leaving readers with yet one more responsibility.

4.4. Frame four: Commodification of information

Attributing an economic value to personal information is not a new phenomenon, as forms of behavioral marketing have been around since the 1970s (Solove, 2005). The Internet and social media, though, have facilitated these practices, often encouraging individuals to disclose information online where the tradeoffs are unclear. Most early studies addressing privacy in the digital environment concentrated on Internet consumers (Malhotra, et al., 2004; Udo, 2001), probably because e-commerce was the area that generated most privacy concerns then. A growing, more recent scholarship focuses on Internet users instead (Acquisti and Grossklags, 2005). Later research also suggests that information itself has become a valuable product that may be exchanged on the Internet for other — real or perceived — benefits (Campbell and Carlson, 2002). This implies that disclosure has increasingly turned into a commodity that Internet users, more or less willingly, trade online. Some criticize the modern commodification of information, claiming that online companies, such as Facebook, have clearly capitalized on personal data (Fuchs, 2012).

These trends partly emerged from the media discourse analyzed in this study as well. In earlier articles, for example, victims of privacy infringements were mostly addressed as consumers (as in Malhotra, et al., 2004; Udo, 2001). In later articles, instead, they were more often addressed as users (as in Acquisti and Grossklags, 2005; Marwick, et al., 2010). This finding is consistent with existing research that suggested a shift in terminology. Framing individuals as consumers possibly contributed to focusing one’s attention on practices of e-commerce. Labeling them as users, instead, shifted the focus from the economic value of purchasing to the economic value of disclosing. It would be interesting to further investigate whether the word “user” — as opposed to “consumer” — was consistently adopted in more recent news. Considering the small scale of the sample analyzed, this study only begins to reveal such a change.
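The suggested follow-up, tracking whether the word “user” displaced “consumer” over time, could be operationalized with a simple longitudinal word count. The sketch below is hypothetical: the corpus entries are invented placeholders standing in for dated article texts, not data from this study.

```python
import re
from collections import defaultdict

# Hypothetical (year, text) pairs standing in for a dated news corpus.
corpus = [
    (2001, "Consumers worry about how sites track consumer purchases."),
    (2005, "The consumer rarely reads the privacy policy."),
    (2010, "Users trade personal data, and each user pays with disclosure."),
    (2012, "Users rarely know who buys their data."),
]

# Count mentions of "consumer(s)" versus "user(s)" per year.
counts = defaultdict(lambda: {"consumer": 0, "user": 0})
for year, text in corpus:
    counts[year]["consumer"] += len(re.findall(r"\bconsumers?\b", text, re.I))
    counts[year]["user"] += len(re.findall(r"\busers?\b", text, re.I))

for year in sorted(counts):
    print(year, counts[year])
```

Plotting the two counts by year over a full corpus would make any terminological shift from “consumer” to “user” directly visible.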

The articles that implemented the current frame defined personal information as a commodity that can be sold and traded. The following excerpt provides a typical exemplification of such a process,

Your information can then be stored, analyzed, indexed and sold as a commodity to data brokers who in turn might sell it to advertisers, employers, health insurers or credit rating agencies. [15]

Terms such as “transactions”, “exchange value”, “price”, and “tradeoff” were often used in relation to personal information online in such a way that might have contributed to the commodification of information in the minds of readers. Articles, for example, stated that,

People have been willing to give away their data while the companies make money. But there is some momentum for the idea that personal data could function as a kind of online currency, to be cashed in directly or exchanged for other items of value. [16]

The perception of personal data as a commodity may contribute to shifting one’s attention away from the intrinsic value of privacy and to directing it towards a more instrumental understanding of privacy (Moor, 1997).

 

++++++++++

5. Conclusions

This study investigated the frames implemented in articles published in the New York Times between 2000 and 2012 that discuss the economics of personal information. The goal of the study was to provide an in-depth investigation of how frames emerged in news coverage of privacy. Informed by frame theory, this study held that media frames reflect and/or contribute to shaping public opinion. Thus, it also aimed to suggest the possible consequences of existing media frames for how our culture understands the intersection of privacy, disclosure, and economic interests, and how individuals are likely to discuss it. In fact, “media discourse dominates the larger issue culture, both reflecting it and contributing to its creation” [17].

As this study showed, the articles analyzed tended to simplify the complexity of online privacy, often providing only a few facets of the issue. They suggested that, most of the time, lack of control and inappropriate flows happen because of obscure privacy policies. At times, articles also emphasized users’ responsibilities, suggesting that one has agency and, thus, may be held accountable for failing to read the privacy policy of a Web site before accessing it. Although it is understood that newspaper articles may not have the length necessary to tackle the complexity of online privacy, this study revealed that many articles failed to present valid counterarguments to inform readers, with the necessary details, as to how their information may be collected and used, and how they may act to detect and prevent inappropriate flows of personal data.

In addition, many articles tended to assume a dystopian perspective, suggesting that “privacy is dead,” that existing laws are not adequate to protect one’s privacy online, and that self-regulation is often ineffective because of lack of transparency (as policies are obscure) and lack of user engagement (as most do not read policies). Often, the articles warned Internet users of the risks of targeted advertising, but failed to discuss the risks of more problematic practices such as price discrimination. Articles also failed to provide readers with strategies that might help them develop informed self-regulation.

Finally, the frame of personal information as a commodity that emerged in a number of articles seems to reflect a modern trend of Western societies, that is, “the reconceptualization of privacy in the consumer’s mind from a right or civil liberty to a commodity that can be exchanged for perceived benefits” [18]. This is particularly interesting. In fact, it seems to suggest that, in America, the attention is shifting from the protection of privacy for its own sake to the instrumental protection of personal data. End of article

 

About the author

Federica Fornaciari received a Ph.D. in communication with a concentration in electronic security and privacy from the University of Illinois at Chicago, where she was also an IGERT fellow. She has an M.A. in journalism and mass communication from Marshall University. Her background is in mass communication theory and research, with a concentration in social media, new technologies, privacy issues, and political communication. She has focused her research on online privacy, social network sites, and the use of social media for political movements. Her current research interests involve self-presentation online, media framing of technologies, the evolution of privacy issues, and the political use of new media.
E-mail: fforna3 [at] uic [dot] edu

 

Notes

1. Warren and Brandeis, 1890, p. 195.

2. Fairclough, 2000, p. 316.

3. Ibid.

4. Fuchs, 2011, p. 10.

5. Nissenbaum, 2004, p. 157.

6. Saul Hansell, 2008. “Privacy rights and diseases,” New York Times (22 December).

7. “Privacy on the Internet,” New York Times (22 February 2000), at http://www.nytimes.com/2000/02/22/opinion/privacy-on-the-internet.html, accessed 30 November 2014.

8. Saul Hansell, 2006. “Marketers trace paths users leave on Internet,” New York Times (15 August), at http://www.nytimes.com/2006/08/15/technology/15search.html, accessed 30 November 2014.

9. Iyengar, 1991, pp. 15-16.

10. Denise Caruso, 1996. “As privacy grows scarcer on the Internet, people finally start to take notice,” New York Times (3 June), at http://www.nytimes.com/1996/06/03/business/technology-digital-commerce-privacy-grows-scarcer-internet-people-finally-start.html, accessed 30 November 2014.

11. Quote from Sun Microsystems Inc. CEO Scott McNealy; see Matt Hamblen, 2001. “McNealy calls for smart cards to help security,” Computerworld (12 October), at http://www.computerworld.com/article/2585627/security0/mcnealy-calls-for-smart-cards-to-help-security.html, accessed 15 March 2013.

12. Louise Story, 2007. “F.T.C. to review online ads and privacy,” New York Times (1 November), at http://www.nytimes.com/2007/11/01/technology/01Privacy.html, accessed 30 November 2014.

13. Adam Cohen, 2008. “The already big thing on the Internet: Spying on users,” New York Times (5 April), at http://www.nytimes.com/2008/04/05/opinion/05sat4.html, accessed 30 November 2014.

14. Tanzina Vega, 2011. “Senators propose new online privacy law,” New York Times (12 April), at http://mediadecoder.blogs.nytimes.com/2011/04/12/senators-propose-new-online-privacy-law/, accessed 30 November 2014.

15. Kate Murphy, 2012. “How to muddy your tracks on the Internet,” New York Times (3 May), at http://www.nytimes.com/2012/05/03/technology/personaltech/how-to-muddy-your-tracks-on-the-internet.html, accessed 30 November 2014.

16. Joshua Brustein, 2012. “Start-ups seek to help users put a price on their personal data,” New York Times (13 February), at http://www.nytimes.com/2012/02/13/technology/start-ups-aim-to-help-users-put-a-price-on-their-personal-data.html, accessed 30 November 2014.

17. Gamson and Modigliani, 1989, p. 3.

18. Campbell and Carlson, 2002, p. 588.

 

References

A. Acquisti and J. Grossklags, 2005. “Privacy and rationality in individual decision making,” IEEE Security & Privacy, volume 3, number 1, pp. 26–33.
doi: http://dx.doi.org/10.1109/MSP.2005.22, accessed 30 November 2014.

A. Allen, 1988. Uneasy access: Privacy for women in a free society. Totowa, N.J.: Rowman & Littlefield.

I. Altman, 1977. “Privacy regulation: Culturally universal or culturally specific?” Journal of Social Issues, volume 33, number 3, pp. 66–84.
doi: http://dx.doi.org/10.1111/j.1540-4560.1977.tb01883.x, accessed 30 November 2014.

D. Berger, 2011. “Balancing consumer privacy with behavioral targeting,” Santa Clara High Technology Law Journal, volume 27, number 1, at http://digitalcommons.law.scu.edu/chtlj/vol27/iss1/2, accessed 30 November 2014.

d. boyd and N. Ellison, 2007. “Social network sites: Definition, history, and scholarship,” Journal of Computer-Mediated Communication, volume 13, number 1, pp. 210–230.
doi: http://dx.doi.org/10.1111/j.1083-6101.2007.00393.x, accessed 30 November 2014.

J. Campbell and M. Carlson, 2002. “Panopticon.com: Online surveillance and the commodification of privacy,” Journal of Broadcasting & Electronic Media, volume 46, number 4, pp. 586–606.
doi: http://dx.doi.org/10.1207/s15506878jobem4604_6, accessed 30 November 2014.

F. Clark and D. Illman, 2003. “Content analysis of New York Times coverage of space issues for the year 2000,” Science Communication, volume 25, number 1, pp. 14–38.
doi: http://dx.doi.org/10.1177/1075547003255300, accessed 30 November 2014.

C. Dwyer, S. Hiltz, and K. Passerini, 2007. “Trust and privacy concern within social networking sites: A comparison of Facebook and MySpace,” AMCIS 2007 Proceedings, paper 339, at http://aisel.aisnet.org/amcis2007/339, accessed 30 November 2014.

R. Entman, 1993. “Framing: Toward clarification of a fractured paradigm,” Journal of Communication, volume 43, number 4, pp. 51-58.
doi: http://dx.doi.org/10.1111/j.1460-2466.1993.tb01304.x, accessed 30 November 2014.

R. Entman, 1991. “Framing U.S. coverage of international news: Contrasts in narratives of the KAL and Iran Air incidents,” Journal of Communication, volume 41, number 4, pp. 6-27.
doi: http://dx.doi.org/10.1111/j.1460-2466.1991.tb02328.x, accessed 30 November 2014.

D. Evans, 2009. “The online advertising industry: Economics, evolution, and privacy,” Journal of Economic Perspectives, volume 23, number 3, pp. 37-60.
doi: http://dx.doi.org/10.1257/jep.23.3.37, accessed 30 November 2014.

N. Fairclough, 2000. “Critical analysis of media discourse,” In: P. Marris and S. Thornham (editors). Media studies: A reader. New York: New York University Press, pp. 308–325.

G. Fairhurst and R. Sarr, 1996. The art of framing: Managing the language of leadership. San Francisco: Jossey-Bass.

J. Fernback and Z. Papacharissi, 2007. “Online privacy as legal safeguard: The relationship among consumer, online portal, and privacy policies,” New Media & Society, volume 9, number 5, pp. 715–734.
doi: http://dx.doi.org/10.1177/1461444807080336, accessed 30 November 2014.

L. Floridi, 2005. “The ontological interpretation of informational privacy,” Ethics and Information Technology, volume 7, number 4, pp. 185–200.
doi: http://dx.doi.org/10.1007/s10676-006-0001-7, accessed 30 November 2014.

C. Fuchs, 2012. “The political economy of privacy on Facebook,” Television & New Media, volume 13, number 2, pp. 139–159.
doi: http://dx.doi.org/10.1177/1527476411415699, accessed 30 November 2014.

C. Fuchs, 2011. “The Internet and surveillance,” at http://fuchs.uti.at/, accessed 30 November 2014.

W. Gamson and A. Modigliani, 1989. “Media discourse and public opinion on nuclear power: A constructionist approach,” American Journal of Sociology, volume 95, number 1, pp. 1-37.
doi: http://dx.doi.org/10.1086/229213, accessed 30 November 2014.

T. Gitlin, 2003. The whole world is watching: Mass media in the making & unmaking of the New Left. Berkeley: University of California Press.

E. Goffman, 1974. Frame analysis: An essay in the organization of experience. Cambridge, Mass.: Harvard University Press.

R. Hackett, 1984. “Decline of a paradigm? Bias and objectivity in news media studies,” Critical Studies in Mass Communication, volume 1, number 3, pp. 229-259.
doi: http://dx.doi.org/10.1080/15295038409360036, accessed 30 November 2014.

M. Hammersley, 1997. “On the foundation of critical discourse analysis,” Language & Communication, volume 17, number 3, pp. 237-248.
doi: http://dx.doi.org/10.1016/S0271-5309(97)00013-X, accessed 30 November 2014.

M. Hammersley, 1992. What’s wrong with ethnography? Methodological explorations. London: Routledge.

K. Hill, 2012. “How Target figured out a teen girl was pregnant before her father did,” Forbes (16 February), at http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/, accessed 30 November 2014.

S. Iyengar, 1991. Is anyone responsible? How television frames political issues. Chicago: University of Chicago Press.

O. Järvinen, 2005. “Privacy management of e-health: Content analysis of 39 U.S. health providers’ privacy policies,” publication of the Turku School of Economics and Business Administration, at http://info.tse.fi/julkaisut/vk/Ae3_2005.pdf, accessed 30 November 2014.

A. McClurg, 2007. “In the face of danger: Facial recognition and the limits of privacy law,” Harvard Law Review, volume 120, number 7, pp. 1,870-1,891, and at http://harvardlawreview.org/2007/05/in-the-face-of-danger-facial-recognition-and-the-limits-of-privacy-law/, accessed 30 November 2014.

N. Malhotra, S. Kim, and J. Agarwal, 2004. “Internet users information privacy concerns (IUIPC): The construct, the scale, and a causal model,” Information System Research, volume 15, number 4, pp. 336-355, and at http://pubsonline.informs.org/doi/abs/10.1287/isre.1040.0032, accessed 30 November 2014.

J. Matthes, 2009. “What’s in a frame? A content analysis of media framing studies in the world’s leading communication journals, 1990-2005,” Journalism & Mass Communication Quarterly, volume 86, number 2, pp. 349-367.
doi: http://dx.doi.org/10.1177/107769900908600206, accessed 30 November 2014.

J. Moor, 1997. “Towards a theory of privacy in the information age,” ACM SIGCAS Computers and Society, volume 27, number 3, pp. 27-32.
doi: http://dx.doi.org/10.1145/270858.270866, accessed 30 November 2014.

D. Nelkin, 1995. “Science’s fall from grace,” Humanist, volume 55, number 5, pp. 14-19.

C. Nippert-Eng, 2010. Islands of privacy. Chicago: University of Chicago Press.

H. Nissenbaum, 2010. Privacy in context: Technology, policy, and the integrity of social life. Stanford, Calif.: Stanford Law Books.

H. Nissenbaum, 2004. “Privacy as contextual integrity,” Washington Law Review, volume 79, number 1, pp. 119–158.

A. Odlyzko, 2007. “Privacy and the clandestine evolution of e-commerce,” ICEC ’07: Proceedings of the Ninth International Conference on Electronic Commerce, pp. 3–6.
doi: http://dx.doi.org/10.1145/1282100.1282104, accessed 30 November 2014.

A. Odlyzko, 2003. “Privacy, economics, and price discrimination on the Internet,” ICEC ’03: Proceedings of the Fifth International Conference on Electronic Commerce, pp. 355–366.
doi: http://dx.doi.org/10.1145/948005.948051, accessed 30 November 2014.

J. Rachels, 1975. “Why privacy is important,” Philosophy & Public Affairs, volume 4, number 4, pp. 323–333.

P. Regan, 1995. Legislating privacy: Technology, social values, and public policy. Chapel Hill: University of North Carolina Press.

A. Schuck and C. de Vreese, 2006. “Between risk and opportunity: News framing and its effects on public support for EU enlargement,” European Journal of Communication, volume 21, number 1, pp. 5–32.
doi: http://dx.doi.org/10.1177/0267323106060987, accessed 30 November 2014.

T. Sheafer, 2007. “How to evaluate it: The role of story-evaluative tone in agenda setting and priming,” Journal of Communication, volume 57, number 1, pp. 21-39.
doi: http://dx.doi.org/10.1111/j.0021-9916.2007.00327.x, accessed 30 November 2014.

D. Silverman, 2005. Doing qualitative research: A practical handbook. Second edition. Thousand Oaks, Calif.: Sage.

D. Solove, 2005. “A taxonomy of privacy,” University of Pennsylvania Law Review, volume 154, number 3, pp. 477-560, and at https://www.law.upenn.edu/journals/lawreview/articles/volume154/issue3/Solove154U.Pa.L.Rev.477%282006%29.pdf, accessed 30 November 2014.

D. Solove, 2001. “Privacy and power: Computer databases and metaphors for information privacy,” Stanford Law Review, volume 53, number 6, pp. 1,393–1,462.

P. Sprenger, 1999. “Sun on privacy: ‘Get over it’,” Wired (26 January), at http://archive.wired.com/politics/law/news/1999/01/17538, accessed 30 November 2014.

F. Tonkiss, 2004. “Analyzing text and speech: Content and discourse analysis,” In: C. Seale (editor). Researching society and culture. Second edition. London: Sage, pp. 368–395.

G. Udo, 2001. “Privacy and security concerns as major barriers for e-commerce: A survey study,” Information Management & Computer Security, volume 9, number 4, pp. 165–174.
doi: http://dx.doi.org/10.1108/EUM0000000005808, accessed 30 November 2014.

T. van Dijk, 1997. “The study of discourse,” In: T. van Dijk (editor). Discourse as structure and process. London: Sage, pp. 1-34.

J. Waldo, H. Lin, and L. Millett (editors), 2007. Engaging privacy and information technology in a digital age. Washington, D.C.: National Academies Press.

S. Warren and L. Brandeis, 1890. “Right to privacy,” Harvard Law Review, volume 4, number 5; version at http://groups.csail.mit.edu/mac/classes/6.805/articles/privacy/Privacy_brand_warr2.html, accessed 30 November 2014.

C. Weiss, 1974. “What America’s leaders read,” Public Opinion Quarterly, volume 38, number 1, pp. 1-22.
doi: http://dx.doi.org/10.1086/268131, accessed 30 November 2014.

 


Editorial history

Received 14 January 2014; revised 30 November 2014; accepted 30 November 2014.


“Pricey privacy: Framing the economy of information in the digital age” by Federica Fornaciari is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Pricey privacy: Framing the economy of information in the digital age
by Federica Fornaciari.
First Monday, Volume 19, Number 12 - 1 December 2014
http://firstmonday.org/ojs/index.php/fm/article/view/5008/4184
doi: http://dx.doi.org/10.5210/fm.v19i12.5008






© First Monday, 1995-2017. ISSN 1396-0466.