I see you, you see me: Mobile advertisements and privacy
by Theodore Book and Chris Bronk



Abstract
We present a summary of our own research, showing that mobile advertisements collect and use significant private data, including personally identifiable information. We examine uses of this information beyond ad targeting. We then explore the privacy implications of this practice, and evaluate various potential technical and regulatory responses. We conclude that both regulatory attention and technical measures are needed to avoid potential serious harm to users.

Contents

1. Introduction
2. How do ads gather my data?
3. What happens to my data once they have it?
4. Discussion
5. Remedies
6. Conclusion

 


 

1. Introduction

While users may try to ignore them, advertisements have become ubiquitous on mobile devices such as tablets and smartphones. Most of us consider the invitations to install an app or try a product as simply one of the minor nuisances of modern life. When seen from a privacy perspective, however, those small flashing boxes become areas for serious reflection. Recent research that we conducted reveals that these ads have serious privacy implications (Book, et al., 2013; Book and Wallach, 2015; 2013). In short, those advertisements are not only sharing information with you — they are gathering information about you.

Our research on Android ad libraries has shown that mobile ads, in some cases, can collect a user’s location and contacts, and even listen in with their microphone and camera (Book, et al., 2013; Book and Wallach, 2013). Table 1 presents a summary of some types of information that ad libraries are able to access. While some of these behaviors are rare, even the most common mobile ads are often able to collect enough data to identify a user’s general location and to target ads based on a demographic profile including gender, age, and interests (Book and Wallach, 2015).

 

Table 1: Install weighted ad library permission usage (February 2013).
Type of information                              Percentage of libraries
Device location                                  49.6%
Device ID, phone number and call information     49.3%
Vibration                                         9.1%
Read user account IDs                             8.2%
Camera                                            6.6%
Read contacts and social network posts            5.3%
Read Web bookmarks                                1.7%
Record audio                                      1.5%
Add Web bookmarks                                 1.3%

 

In principle, users (likely including the reader) are assumed to have given their consent to the collection, sharing, and processing of this data through their acceptance of a privacy policy included with the application. Users who do not want their information collected in this way may simply choose not to install the app in question. However, given that mobile devices (and hence mobile apps and ads) have become an essential tool for many, the choice to opt out of the mobile world may be seen as untenable. In this way, mobile apps (and their associated ads) have moved from being just another feature in the software marketplace to becoming part of the fabric of society, and hence have placed themselves in need of greater regulatory scrutiny.

 

++++++++++

2. How do ads gather my data?

Advertisements on mobile devices are generally delivered by an ad library, which is a small program that runs inside an app. The library requests advertisements, displays them, and reacts when a user clicks on them [1]. An ad library can do the same things as any other program, and generally collects some data that is transmitted off the mobile device when an ad is requested. At a minimum, the ad library generally sends back a unique ID for the device, the name of the app being used, and the Internet (IP) address of the phone, which is needed to receive the ad. This exchange also reveals some information regarding the user’s location. The ad library may also attach other pieces of data about the user and her device to the request.
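To make this exchange concrete, the sketch below shows, in Python, the kind of request an ad library might issue. It is a minimal illustration under our own assumptions: the endpoint, field names, and JSON payload are invented for clarity, and no real broker’s protocol is being reproduced.

    import json
    import time
    import urllib.request

    # Hypothetical ad request endpoint; real brokers use their own servers
    # and (often obfuscated) wire formats.
    AD_SERVER = "https://ads.example-broker.invalid/request"

    def request_ad(device_id, app_id):
        # The explicit payload: a unique device ID and the requesting app.
        payload = {
            "device_id": device_id,
            "app_id": app_id,
            "client_time": int(time.time()),  # when the request was made
        }
        # The phone's IP address travels with the HTTP request itself, so
        # nothing in the payload is needed to reveal it to the server.
        req = urllib.request.Request(
            AD_SERVER,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.read()  # the ad creative to display

Even this bare-bones request leaks four useful signals: the device ID, the app, the request time, and the network the phone is on.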

The data transmitted with any given request is relatively limited. However, it becomes powerful when it is combined with data from other requests, as well as data from other users. Let’s take a hypothetical case where an Android phone transmits only a phone ID, app ID, and its IP address — little more than the basic information needed to display an ad. First, we should note that there is a fourth piece of information that is also implicitly being transmitted — the time at which the user is making the request. Given those pieces of data, it becomes relatively easy to determine usage patterns for the device. The advertising company might know that the user checks a weather app early in the morning, checks traffic at certain times, uses a social networking app at others, and reads the news in the evening. The IP address provides more details. The company can learn that the phone is on a home network in a certain neighborhood at night, on the corporate network of a given company during the day, and appears on a coffee shop network on Tuesday evenings. All of this data can provide a detailed profile of user habits. Where and when the user lives, works, and plays is now known, and can be aggregated with other data about the user’s home zip code, employer, and those with whom she interacts.
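The sketch below shows how little machinery such profiling requires. The log format and the hour-of-day heuristic are our own simplifying assumptions; a real broker would have richer data and better models, but the core inference is just counting.

    from collections import Counter, defaultdict
    from datetime import datetime

    # Each log record is (device_id, app_id, ip_prefix, unix_timestamp).
    def infer_home_and_work(log):
        night = defaultdict(Counter)  # device -> networks seen overnight
        day = defaultdict(Counter)    # device -> networks seen in office hours
        for device_id, _app_id, ip_prefix, ts in log:
            hour = datetime.fromtimestamp(ts).hour
            if hour >= 22 or hour < 6:
                night[device_id][ip_prefix] += 1
            elif 9 <= hour < 17:
                day[device_id][ip_prefix] += 1
        profiles = {}
        for dev in set(night) | set(day):
            profiles[dev] = {
                # The most common overnight network is probably "home";
                # the most common office-hours network is probably "work".
                "home": (night[dev].most_common(1) or [(None, 0)])[0][0],
                "work": (day[dev].most_common(1) or [(None, 0)])[0][0],
            }
        return profiles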

That’s not all, however. An advertising company utilizing mobile ad libraries aggregates information about many users. From home network data, the company can infer who may be family members. Social connections may be gleaned from access across public WiFi hotspots. It also becomes easy to identify co-workers, and to map the people with whom the user spends the most time. Personal interests can be understood with great accuracy. Not only does the company know the apps that users employ, but it also knows what they do with the apps and when they do it. Does the user show up frequently at football games? Rock concerts? Political rallies? Does the user frequent establishments popular with individuals of a certain mindset? It becomes easy to see that mobile ads provide enough information for advertising companies to build an extremely complete dossier on any person, their interests, and their views.

 

++++++++++

3. What happens to my data once they have it?

If all of this information were used only by computers to target ads for breakfast cereal at some consumers and ads for luxury cars at others, the privacy impact would be relatively low. The potential for abuse becomes much higher when the information in question is linked to a unique individual — at that point, it becomes possible to combine the data with every other piece of information known about a given individual, both for placing ads and for other purposes. That is to say, when the advertising company is able to move from knowing that a device belongs to an individual with certain interests and habits to knowing that it belongs to an individual with a specific name and address, new privacy concerns emerge.

In general, information that is sufficient to uniquely identify an individual is referred to as personally identifiable information (PII), and is often subject to stricter controls and scrutiny than anonymous or aggregate data. However, in a world of big data, PII is much broader than unique identifiers such as a name or government ID number. Indeed, the U.S. Office of Management and Budget (OMB) defines PII as “information which can be used to distinguish or trace an individual’s identity, such as their name, social security number, biometric records, etc. alone, or when combined with other personal or identifying information which is linked or linkable to a specific individual” (U.S. Office of Management and Budget, 2007). It is clear that the data collected by mobile ads is sufficient to uniquely identify a great many individuals. Golle (2006) determined that gender, zip code, and date of birth alone were sufficient to uniquely identify 63 percent of the U.S. population, and mobile ads collect far more detailed information than that. Given enough “anonymous” details about an individual, it becomes relatively simple to uniquely identify them.
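A toy example makes the point. In the spirit of Golle’s result, the sketch below joins an “anonymous” profile inferred from ad traffic against an identified consumer list using only quasi-identifiers; all of the records are invented.

    # An "anonymous" profile inferred from ad traffic, with no name attached.
    ad_profile = {"ad_id": "a91f", "gender": "F", "zip": "77005", "birth_year": 1984}

    # A purchased consumer list that does carry names.
    consumer_list = [
        {"name": "Jane Doe", "gender": "F", "zip": "77005", "birth_year": 1984},
        {"name": "John Roe", "gender": "M", "zip": "77005", "birth_year": 1984},
    ]

    def reidentify(profile, records, keys=("gender", "zip", "birth_year")):
        matches = [r for r in records if all(r[k] == profile[k] for k in keys)]
        # A unique match turns the "anonymous" device into a named person.
        return matches[0]["name"] if len(matches) == 1 else None

    print(reidentify(ad_profile, consumer_list))  # -> Jane Doe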

Indeed, there is evidence that ad data is being linked to other databases on individuals. Not only did our investigation find evidence that ads were being targeted based on user data outside the scope of what was provided by the ad libraries, but Acxiom, a major data broker, claims to be able to target mobile ads based on the same individual lists that it makes available for other forms of marketing (Book and Wallach, 2015; Acxiom, 2015b). This means that Acxiom claims the ability to correlate mobile ad IDs with individual, named consumers.

The scope of this correlation goes far beyond simply targeting ads. Because mobile ad data presents a rich portrait of the user in question, it could be used to build a more accurate assessment of an individual’s risk as an insurance customer, the likelihood that they will default on a loan, or their outcome as an employee of a company. Indeed, there is some evidence that this is already taking place. In a case study published on its Web site, Acxiom states that it has used data from DoubleClick for Advertisers (which includes both mobile and Web data) to assist “a major financial services company” in achieving “more precise just-in-time decision making for the company.” It notes: “An adjustment in credit approvals of just one to two percent can result in millions in additional customer lifetime value” (Acxiom, 2015a). The claim that advertising data is being used to drive decisions related to the extension of credit shows just how integrated advertising data has already become with other consumer data sets. Interesting as well is that DoubleClick is a service owned by Google, the company that develops the Android operating system and earns a majority of its corporate revenues via the sale of advertising.

Once advertising data has been integrated into data brokers’ storehouses, it is not just used for commercial purposes. Political groups also take advantage of it. Indeed, Acxiom data has been used by political campaigns since at least 2004 (Fournier, 2014). Today, both major United States political parties have built extensive “get out the vote” micro-targeting systems that rely on information from third party data brokers. Nor is this the end of political use of this data. The World Privacy Forum reports that “The U.S. federal government is one of the largest and most frequent customers of commercial data brokers” (Gellman and Dixon, 2013). The same information is likewise available to other governments throughout the world, both regarding their own citizens and the citizens of other states. In short, it is clear that the privacy impact of mobile ads extends well beyond the realm of mobile ad targeting.

 

++++++++++

4. Discussion

In many ways, the question of how to address the data privacy norms that should be expected from digital ads relates to the question of what privacy norms should govern online interaction. When the U.S. Constitution was drafted, most Americans lived in rural communities where most of their activity was known to their neighbors, and practical privacy was a function of relatively sparse population density. The urbanization that followed the Industrial Revolution permitted a new anonymity, where an individual could assume that most actions taken in public would be essentially unknowable to all but the collection of strangers who might happen to be present on a busy street corner. We contend that the aggregation of data from mobile devices is reducing this form of “urban anonymity” for those who choose to use them.

Perfect anonymity is impossible in a world where everyone is milliseconds away from everyone else, claims to a “right to be forgotten” notwithstanding (Rosen, 2012; Bennett, 2012). The advent of massive data aggregation, labeled big data, makes it possible for various entities to know a great deal about those producing digital information, their “digital exhaust” (Deibert, 2015). Big data allows micro-targeting that brings a power beyond the ability to know facts about a specific individual — the production of a short list of customers with specific characteristics, individuals suffering from a given illness, or likely members of extremist groups. The difference is clear when one considers the actions of totalitarian states in which the suppression, censorship, or elimination of individuals and groups opposed to the state leadership is deemed acceptable. If a database query can produce a list of likely enemies of the state, then the potential risks are clear.

Because of this concern, the question of data collection from mobile ads fits into the larger context of data privacy and norms for data collection. After all, the privacy concerns arise from the data being collected, not from the ads being shown. Restrictions aimed exclusively at mobile ads would not stop others from collecting user data and building profiles. For example, the (apparently now defunct) company Appayable (2013) offered a library which would collect user data without displaying ads, and pay app developers based on the data collected. Additionally, app developers who collect data for legitimate purposes are also free to sell that data to data brokers or use it in ways unrelated to the functioning of their application. In this context, a lively debate regarding application privacy policies has emerged, without, however, having produced any definitive societal norms regarding such behavior (Earp, et al., 2005).

Likewise, other parties involved in the digital economy are in a position to collect similar information. Telecom providers have access to unencrypted information flowing over their networks, and could use it to build detailed user profiles (Bergen and Kantrowitz, 2014). Governments are also in a position to obtain access to telecommunications networks and build similar profiles (Soltani and Gellman, 2013). Web site operators are likewise in a position to build detailed profiles about their users, and often do (Hosea, et al., 2012).

Furthermore, the question of privacy as it relates to mobile advertising is situated within the context of privacy as a whole. While it may be easier to obtain a profile of an individual’s movements from a mobile phone, the profile itself is similar to one obtained by attaching a tracking device to their car, or by simply following them around throughout the day. It would seem logical for the same norms to apply to data collected by digital means as to data collected by more intrusive ones.

4.1. Whose responsibility?

Etzioni (1999) provides ample fodder for discussion in considering the balance to be struck between protecting individual rights to privacy and the common need for open disclosure of information. Advances in computing and networking technologies now allow nearly two billion people to carry a mobile computing device capable of performing many or most of the functions of larger computers. Users of these devices are sensors, constantly emitting their “digital exhaust” that may be collected, tracked, analyzed, mined, and monetized. Unfortunately, insufficient policy attention is given to connecting the breakneck pace of IT innovation to the social or political issues that may infringe civil liberties or rights. Google and Facebook, which monetize user behavior and user-provided data through the sale of advertising, are companies harnessing large numbers of users to produce revenues, but what of the other firms that are part of the Internet ecosystem?

Wireless telephone carriers are likely even better positioned to collect information on their customers that can be packaged for advertising purposes. Verizon, the largest U.S. wireless company, has produced a technology, the Unique Identification Header (UIDH), that tracks interactions between cell phones and Web sites facilitated by its network. The technology was labeled a “privacy killing machine” in the tech media [2]. It is illustrative of the differences in how privacy regulation is undertaken in the United States versus the European Union.

Verizon’s approach to UIDH is typical of privacy policies employed by U.S. firms that allow users to “opt out” of privacy-eroding mechanisms or processes. On a Verizon phone, the UIDH is a feature that is turned on when the device is activated at the time of sale. It is incumbent upon the user to turn off the UIDH feature, a non-trivial task. This sort of opt-out agreement is typical of how U.S. tech firms collect user data. Sure, the user can turn off the potentially intrusive feature, but only after finding out that it exists and learning how to disable it.

In the EU, data privacy rules have generally followed an “opt-in” rule. Upon activation of an account or service, the user is provided a description of the user data that the company may collect, and is offered the choice of whether or not to permit that collection. While this may represent a minor nuisance in signing up for something online, it places the privacy issue in front of the user in a more transparent manner. Digital privacy advocates generally prefer such policies, as consumers often acquiesce to default privacy options (Bouckaert and Degryse, 2006).

4.2. The privacy disconnect

Researcher and policy advocate Jacob Appelbaum asserts that in the digital age, “Privacy is freedom.” [3] He makes the argument in the wake of the massive disclosure of digital eavesdropping efforts made public through the documents purloined by U.S. National Security Agency (NSA) contractor Edward Snowden. The Snowden archive demonstrated how the NSA was increasingly able to employ the platforms of social media, software, and Internet companies to collect massive quantities of information.

This information is very much a part of the contemporary U.S. intelligence cycle for military and counter-terrorism operations. A recent U.S. Department of Defense news release documented how a “selfie” portrait photograph posted to social media by an Islamic State operative led to a U.S. airstrike. Within 24 hours of the image’s posting to the Internet, U.S. aircraft dropped GPS-guided bombs on an ISIS command facility in Syria [4]. That Facebook, Twitter, or Instagram could provide the necessary coordinates for targeting precision-guided ordnance speaks volumes about how adept the Pentagon has grown at converting the massive amount of individually-generated content posted to the Internet into targeting data.

The intersection of mobile privacy concerns and national security issues came to light with the Snowden affair in 2013. Silicon Valley Internet firms, including Google, made public statements highly critical of the NSA’s methods involving their data and platforms when those methods became public. While they lament the erosion of privacy documented in NSA and GCHQ (the U.K. signals intelligence agency) materials on the exploitation of Internet sources for intelligence purposes, the companies of Silicon Valley continue to reap the rewards of business strategies designed to convert user behavior into advertising revenue. That such content may be harnessed to kill Islamic insurgents should be no surprise, but it should prompt discussion on the ethics of such business, especially as it pertains to surveillance of U.S. citizens or persons.

We contend that U.S. lawmakers should consider this problem with concern for the privacy interests of individuals, whether U.S. citizens or foreign nationals. There is also a need for debate in the public sphere regarding the extent to which data streams from mobile devices are legitimate sources of information for government agencies. Certainly, information that is collected for advertising or other purposes has the potential to be repurposed for ends very different from those originally intended. The difficulties that arise in keeping abreast of changing technology, understanding potential privacy impacts, and preventing abuses of collected information all speak to the need for stronger policy advocacy regarding the activities of the U.S. government.

 

++++++++++

5. Remedies

Questions of privacy, even digital privacy, predate the advent of mobile advertisements. International organizations such as the OECD have provided guidelines for regulations protecting privacy (Organisation for Economic Co-operation and Development (OECD), 2013). The European Union has a substantial canon of privacy law, and the U.S. Federal Trade Commission has published self-regulatory guidelines (Council of Europe, 1981; U.S. Federal Trade Commission, 2009). The credit card industry has developed a set of standards for the protection of sensitive data known as the Payment Card Industry Data Security Standard (PCI DSS) (PCI Security Standards Council, 2015). Web browser-based ad blockers report hundreds of millions of installations by individuals seeking greater privacy, as well as an ad-free browsing experience. However, as mobile advertising presents new opportunities to collect user data, it also presents new challenges related to data privacy and protection. To understand the best ways of addressing those challenges, we need a deeper overview of how the mobile ad market works.

5.1. Market participants

There are a number of distinct roles in the mobile advertising market. App developers build mobile applications. There are several means by which they earn revenue, including app sales, sales of “in-app purchases,” and using an app to drive sales through other channels, but one of the largest revenue sources is advertisements (Statista, 2015). Most app developers do not handle ad placements themselves. Rather, they include the ad libraries described earlier that are provided by one or more advertising brokers as a part of their applications. These advertising brokers display ads in an application and pay a portion of the revenue to the app developers. They also control the data that their libraries collect and with whom it is shared. Also, there are ad exchanges, often run by data brokers, where potential advertisers and their agents have access to the data collected from users, and make bids to place ads on their devices. Additionally, there are platform vendors (such as Apple or Google) who build mobile operating systems, and have some control over the capabilities of software that runs on their platforms. Finally, there are data brokers, who collect data from many sources, including advertisements, and then re-sell that data for a variety of purposes including ad targeting.

The players that have the greatest control in the market are the platform vendors. They have two essential points of leverage. First of all, their control of the platform limits what data an app is technically able to collect. Secondly, and more importantly, they control the app stores where users download apps. Their control of the app stores allows them to place restrictions on the behavior of applications, including the collection of personal data. We have identified well over a hundred ad brokers, located in many different jurisdictions (Book and Wallach, 2013). There are likewise large numbers of ad exchanges and data brokers, and it is relatively easy to start new ones. Given the number of participants, and the ease of entry into the market, it seems unlikely that any effective self-regulation can be expected from those parties. The platform vendors, by contrast, have the power to unilaterally establish and enforce any restrictions on mobile advertising that they see fit. They also stand to benefit if their platform is perceived as more secure or more respectful of user privacy than their competitors’ platforms.

5.2. Economics

Mobile advertising is big business. The Internet Advertising Bureau (2015) reports that mobile advertising revenue in the United States in 2014 was US$12.5 billion — up 76 percent from the prior year, but still a relatively small share of the US$49.5 billion total Internet advertising market. It is more difficult to put a value on the data collected through mobile ads. Certainly, much of the value of mobile advertising comes from the ability to target mobile ads at appropriate consumers. A report commissioned by the Data Driven Marketing Institute places the value of mobile customer relationship management (conducted by data brokers) in the United States at US$2 billion annually (Deighton and Johnson, 2013). However, this would appear to exclude the value of mobile data sold for other purposes, and include the value of non-mobile data sold for mobile ad targeting. List and database service providers, which sell consumer data, were reported as having revenues of US$7 billion annually. While these numbers do not exactly compare apples to apples, it would appear that the value of targeted mobile ads is significantly larger than the value that comes from selling user information collected by advertising libraries alone. This implies that advertising brokers, who connect sellers to prospective buyers, would seek to maintain their advertising revenue, even if it requires accepting lower revenue from selling data to data brokers.

Indeed, the growth of mobile advertising has produced a change in the revenue model in the software industry. A decade ago, software revenue came from two primary sources: the sale of licenses and the sale of support contracts. Today, much software is distributed for free, with revenue coming from advertisements and the sale of “in-app products” — purchases made by the user after the software is already in use. The Android and Windows operating systems are good examples of this shift. Until recently, Microsoft’s business model for Windows relied on the sale of licenses to run the software on personal computers. Android, by contrast, has always been given away for free. Google receives revenue when apps or media are purchased by users, as well as a share of in-app revenue from apps distributed through Google’s app store. Likewise, Google generates revenue when developers choose to incorporate its advertising platform within their apps.

5.3. Technical remedies

When dealing with technical problems, technical solutions are generally preferable to regulatory ones. Not only are they easier to adapt to ongoing technical development, but they impose no regulatory burden, and carry no risk of non-compliance, whether by private parties or by governments themselves. However, because the process of collecting user data and displaying ads is technically very similar to the process of displaying user content, it is not easy to develop a technical solution that can be implemented without participation from the advertising companies. Software similar to existing antivirus software or Web ad-blocking software could be used to prevent the collection of personal data (and the display of advertisements), but this is made difficult by the security features of modern mobile devices. Developing this sort of software would either require the participation of the platform vendors (who themselves earn revenue from advertisements) or require exploiting defects in their platforms to “root” the device — a requirement that would restrict the software to a small, technical audience.

With the participation of platform vendors or regulatory action, less blunt approaches become possible. Apart from blocking all advertisements, it is relatively trivial for platform vendors (through their app stores) to place restrictions on the type of information collected by advertisements. Indeed, both Google and Apple currently have policies restricting the type of device identifiers that can be collected by advertisers (Google, 2015; Apple, 2014). While these policies permit the use of an “advertising identifier,” which is for most intents and purposes a unique device ID, it would be easy for the platforms to require the use of an ID that makes it difficult to link two apps on the same device with the same user, or even to link multiple requests with the same device. While the device’s Internet address, which is needed for an ad library to function, would still allow some degree of user tracking, it would become extremely difficult to track a user from one network to another as they went about their day.
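As an illustration of how such an identifier could work, the sketch below derives a stable, per-app advertising ID as an HMAC of the app’s package name under a secret held by the operating system. The derivation scheme is our own assumption, not Google’s or Apple’s actual mechanism: each app sees a consistent ID, but without the device secret no broker can tell that two IDs belong to the same device, and resetting the secret rotates every ID at once.

    import hashlib
    import hmac

    def per_app_ad_id(device_secret, package_name):
        # HMAC keyed by the device secret: deterministic per app, but
        # unlinkable across apps without knowledge of the secret.
        return hmac.new(device_secret, package_name.encode(),
                        hashlib.sha256).hexdigest()

    secret = b"random-per-device-secret"  # held by the OS; reset to rotate IDs
    print(per_app_ad_id(secret, "com.example.weather"))
    print(per_app_ad_id(secret, "com.example.news"))  # unlinkable to the first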

Beyond simple restrictions on what data can be collected by ad libraries, researchers have also developed cryptographic solutions that enable the delivery of targeted ads without revealing information about the individual receiving them (Toubiana, et al., 2010). Such technical solutions could enable targeted advertising to continue, along with its associated benefits, such as apps and content that are available at no cost to the user, while still preventing the collection of data sets used for purposes outside of targeting advertisements. While research continues in this area, it seems clear that technical measures, applied properly, can allow the use of targeted ads while preventing the assembly of data sets on a particular user, with their associated privacy concerns. Ease of implementation of such features may be a challenge, however.
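A minimal sketch of the client-side selection idea behind systems like Adnostic follows: the broker sends several candidate ads and the device picks one against an interest profile that never leaves the phone. The data is invented, and we omit the cryptographic accounting that the real system uses for billing.

    # An interest profile built and kept on the device.
    local_profile = {"sports": 0.9, "cooking": 0.1}

    # Candidate ads sent by the broker, untargeted at this point.
    candidate_ads = [
        {"id": 1, "topic": "sports"},
        {"id": 2, "topic": "cooking"},
    ]

    def pick_ad(ads, profile):
        # Selection happens locally, so the broker never learns which
        # interests the user actually has.
        return max(ads, key=lambda ad: profile.get(ad["topic"], 0.0))

    print(pick_ad(candidate_ads, local_profile))  # -> {'id': 1, 'topic': 'sports'}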

5.4. Regulatory remedies

While it is beyond the scope of this paper to examine all of the regulatory questions related to data privacy in the context of mobile advertisements, we want to lay out some general principles that relate to any such regulation. Many public interest groups, such as the EFF and the ACLU, have already articulated positions regarding data privacy. What follows is intended as our contribution, as researchers, to the discussion.

To the extent that platform vendors fail to protect the public interest in user privacy, it is possible for regulators to force market participants to limit certain practices. There are essentially three approaches to privacy regulation — approaches aimed at clarifying what data is being collected and how it is used; approaches allowing users to opt out of data collection, or requiring them to opt in; and approaches that prohibit certain actions, either types of data collection or uses of the collected data. All of these approaches have strengths and weaknesses. Perhaps the least effective regulation is the attempt to regulate specific technologies. Technological developments frequently render prior technologies obsolete, or provide other ways of collecting the same information. Likewise, regulations directed at specific technologies may impede legitimate functionality. The European Union’s efforts to regulate the use of Web browser cookies are an excellent example of the limitations of this sort of regulation (European Parliament and the Council of the European Union, 2002). On the one hand, the regulation does little to improve privacy, as the same data obtained through cookies may be obtained in other ways. On the other hand, it has a broad negative impact on usability, as it requires users to click on a special dialog on every Web site they visit. This failure provides a cautionary tale about the efforts of slow-moving and technologically incompetent bureaucracies to control quickly adapting technologies.

A second sort of regulation addresses not specific technologies, but specific types of data. For example, the U.S. Health Insurance Portability and Accountability Act (HIPAA) places particularly burdensome requirements on the transmission of personally identifiable information (PII) in a health care context. Here, the technical means of collecting and transmitting the information is not addressed by regulators, but the use and transmittal of the information is. In this way, regulators avoid stepping into technical areas outside their competency and focus on core issues related to privacy. However, this approach also has weaknesses. As we have seen, many pieces of data can be inferred from other data sets. For example, a trace of a person’s location does not uniquely identify them, indicate their membership in any protected group, or expose their political or religious beliefs. However, when combined with other data sets, all of these things can potentially be deduced. The same is true of other sorts of data — shopping habits, consumer preferences, even address information can yield detailed profiles of an individual not expressly contained within the data being collected.

A third sort of regulation addresses not the type of data being collected, but the use of that data. For example, United States civil rights legislation prohibits activities that have a disparate impact on certain protected groups. Thus, using data to discriminate on the basis of ethnicity in the provision of public services is banned, regardless of the specific data that is used to discriminate. Here, regulators are able to abstract themselves from specific questions about the possible uses of a specific data set, and focus on the central regulatory issue — harmful behavior that needs to be limited or prohibited. The primary limitation of this approach is that it permits the compilation of data sets that could easily be used in violation of the law, while restricting only specific uses of that data. However, as lawbreakers are, by definition, individuals who do not obey the law, any regulation leaves open the possibility of abuse by those who choose not to obey the regulation in question.

Regardless of the level at which regulators seek to act, they have several basic choices in how they regulate the use of data. They can require those using the data to inform the public of what data they are collecting, they can require that users be allowed to opt out of the data use, they can require that users opt into the use, or they can ban the use altogether. Each of these approaches has a legitimate function depending on the sensitivity of the data use in question. On the one hand, only the most trivial uses of personal data should be permitted without at least informing the public of the data use. On the other hand, only the most abusive uses of data should be prohibited altogether. Most applications of personal data should either allow a user to opt out or, for more sensitive uses, require a user to opt in. Exactly which use should be placed in which category is a matter for public debate, but the general principle that data usages with greater potential to harm an individual should be subject to tighter controls remains valid. One might imagine that an individual would have to opt out of data being used to target advertisements, but would have to opt in to the same data being used to set insurance premiums.

 

++++++++++

6. Conclusion

As we have seen, while mobile advertising presents the challenge of privacy in a new light, it is not an essentially new challenge. While the era of big data makes it possible to build ever more detailed profiles of individuals, the basic regulatory challenge remains the same: restricting activities that may have a harmful effect on others. High-level regulations accompanied by low-level technical solutions offer the possibility of protecting individuals from the abuses that new technology makes possible, while preserving its benefits for current and future generations.

Much work remains to be done, both in terms of understanding the privacy implications of mobile advertising, and in terms of developing the technical and policy responses necessary to appropriately preserve user privacy. This includes building a greater societal consensus on what limits should exist on the collection and sharing of personal information. It is our hope that this work both contributes to this discourse and encourages others to do the same. Our own future work will continue to measure the privacy impact of mobile ads, including tracking of the data collected by the ads, and the ways in which that information is used.

 

About the authors

Theodore Book is a doctoral candidate at Rice University in Houston, Texas. His research focuses on exploring the privacy impact of mobile advertisements.
E-mail: tbook [at] rice [dot] edu

Chris Bronk is an assistant professor at the University of Houston, and holds additional appointments in computer science at Rice University and cyber geopolitics at Toronto’s Munk School.
E-mail: rcbronk [at] central [dot] uh [dot] edu

 

Notes

1. It should be noted that we are speaking of mobile ads embedded in applications. Mobile ads found on Web pages have their own privacy issues, but they are largely identical to the issues posed by Web advertisements in general, and will not be discussed in this paper.

2. http://www.wired.com/2014/10/verizons-perma-cookie/, accessed 17 February 2016.

3. https://en.wikipedia.org/wiki/Citizenfour.

4. http://www.jpost.com/Middle-East/US-lead-coalition-strikes-ISIS-base-because-of-selfie-405142.

 

References

Acxiom, 2015a. “Financial services company reduces risk in credit decisions,” at http://www.acxiom.com/resources/financial-services-company-reduces-risk-credit-decisions/, accessed 17 February 2016.

Acxiom, 2015b. “Advertise your business on mobile devices,” at http://myacxiompartner.com/mobileadvertising.html, accessed 17 February 2016.

Appayable, 2013. “Plug & play app monetization,” at https://web.archive.org/web/20131127025238/https://www.appayable.com/, accessed 17 February 2016.

Apple, 2014. “iOS developer program license agreement,” at https://developer.apple.com/programs/terms/ios/standard/ios_program_standard_agreement_20140909.pdf, accessed 17 February 2016.

S.C. Bennett, 2012. “The ‘right to be forgotten’: Reconciling EU and US perspectives,” Berkeley Journal of International Law, volume 30, number 1, pp. 161–195, and at http://scholarship.law.berkeley.edu/bjil/vol30/iss1/4/, accessed 17 February 2016.

M. Bergen and A. Kantrowitz, 2014. “Verizon looks to target its mobile subscribers with ads,” Advertising Age (21 May), at http://adage.com/article/digital/verizon-target-mobile-subscribers-ads/293356/, accessed 17 February 2016.

T. Book and D.S. Wallach, 2015. “An empirical study of mobile ad targeting,” arXiv.org (23 February), at http://arxiv.org/abs/1502.06577, accessed 17 February 2016.

T. Book and D.S. Wallach, 2013. “A case of collusion: A study of the interface between ad libraries and their apps,” SPSM ’13: Proceedings of the Third ACM Workshop on Security and Privacy in Smartphones & Mobile Devices, pp. 79–86.
doi: http://dx.doi.org/10.1145/2516760.2516762, accessed 17 February 2016.

T. Book, A. Pridgen, and D.S. Wallach, 2013. “Longitudinal analysis of Android ad library permissions,” paper presented at IEEE CS Security and Privacy Workshop (San Francisco); version at http://arxiv.org/pdf/1303.0857.pdf, accessed 17 February 2016.

J. Bouckaert and H. Degryse, 2006. “Opt in versus opt out: A free-entry analysis of privacy policies,” Technical report, CESifo working paper; version at http://www.econinfosec.org/archive/weis2006/docs/34.pdf, accessed 17 February 2016.

Council of Europe, 1981. “Convention for the protection of individuals with regard to automatic processing of personal data,” at http://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/108, accessed 17 February 2016.

R. Deibert, 2015. “The geopolitics of cyberspace after Snowden,” Current History (January), pp. 9–15, and at http://www.currenthistory.com/Deibert_CurrentHistory.pdf, accessed 17 February 2016.

J. Deighton and P.A. Johnson, 2013. “The value of data: Consequences for insight, innovation & efficiency in the U.S. economy” (18 October), at http://thedma.org/wp-content/uploads/The_Value_of_Data_Consequences_for_Insight_Innovation_and_Efficiency_in_the_US_Economy_WEB.pdf, accessed 17 February 2016.

J.B. Earp, A.I. Antón, L. Aiman-Smith, and W.H. Stufflebeam, 2005. “Examining Internet privacy policies within the context of user privacy values,” IEEE Transactions on Engineering Management, volume 52, number 2, pp. 227–237.
doi: http://dx.doi.org/10.1109/TEM.2005.844927, accessed 17 February 2016.

A. Etzioni, 1999. The limits of privacy. New York: Basic Books.

European Parliament and the Council of the European Union, 2002. “Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector,” at http://eur-lex.europa.eu/, accessed 17 February 2016.

R. Fournier, 2014. “GOP pierces Democratic monopoly on technology, targeting, and voter mobilization,” National Journal (5 November), at https://www.nationaljournal.com/politics/2014/11/05/gop-pierces-democratic-monopoly-technology-targeting-voter-mobilization, accessed 17 February 2016.

R. Gellman and P. Dixon, 2013. “Data brokers and the federal government: A new front in the battle for privacy opens,” World Privacy Forum (30 October), at https://www.worldprivacyforum.org/2013/10/report-data-brokers-and-the-federal-government-a-new-front-in-the-battle-for-privacy-opens/, accessed 17 February 2016.

P. Golle, 2006. “Revisiting the uniqueness of simple demographics in the U.S. population,” WPES ’06: Proceedings of the Fifth ACM Workshop on Privacy in Electronic Society, pp. 77–80.
doi: http://dx.doi.org/10.1145/1179601.1179615, accessed 17 February 2016.

Google, 2015. “Google Play developer program policies,” at https://play.google.com/intl/en/about/developer-content-policy.html, accessed 17 February 2016.

D. Hosea, A. Rascon, R. Zimmerman, A. Oddo, and N. Thurston, 2012. “Method and system for Web user profiling and selective content delivery,” U.S. patent 8,108,245 B1 (31 January), at http://www.google.com/patents/US8108245, accessed 17 February 2016.

Internet Advertising Bureau, 2015. “IAB Internet advertising revenue report: 2014 full year results,” version at http://www.iab.net/media/file/PwC_IAB_Webinar_Presentation_FY2014_PWC.pdf, accessed 17 February 2016.

Organisation for Economic Co-operation and Development (OECD), 2013. “OECD guidelines on the protection of privacy and transborder flows of personal data,” at http://www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm, accessed 17 February 2016.

PCI Security Standards Council, 2015. “Payment card industry (PCI) data security standard,” version 3.1, at https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-1.pdf, accessed 17 February 2016.

J. Rosen, 2012. “The right to be forgotten,” Stanford Law Review, volume 64, pp. 88–92, at http://www.stanfordlawreview.org/online/privacy-paradox/right-to-be-forgotten, accessed 17 February 2016.

A. Soltani and B. Gellman, 2013. “New documents show how the NSA infers relationships based on mobile location data,” Washington Post (10 December), at https://www.washingtonpost.com/news/the-switch/wp/2013/12/10/new-documents-show-how-the-nsa-infers-relationships-based-on-mobile-location-data/, accessed 17 February 2016.

Statista, 2015. “Global app economy revenues in 2015, by revenue source,” at http://www.statista.com/statistics/284083/apps-global-revenue-sources/, accessed 17 February 2016.

V. Toubiana, A. Narayanan, D. Boneh, H. Nissenbaum, and S. Barocas, 2010. “Adnostic: Privacy preserving targeted advertising,” paper presented at the ISOC Network and Distributed System Security Symposium (NDSS) 2010; version at https://crypto.stanford.edu/adnostic/adnostic.pdf, accessed 17 February 2016.

U.S. Federal Trade Commission (FTC), 2009. “FTC staff report: Self-regulatory principles for online behavioral advertising” (February), at https://www.ftc.gov/sites/default/files/documents/reports/federal-trade-commission-staff-report-self-regulatory-principles-online-behavioral-advertising/p085400behavadreport.pdf, accessed 17 February 2016.

U.S. Office of Management and Budget (OMB), 2007. “Safeguarding against and responding to the breach of personally identifiable information,” OMB Memorandum M–07–16 (22 May), at https://www.whitehouse.gov/sites/default/files/omb/memoranda/fy2007/m07-16.pdf, accessed 17 February 2016.

 


Editorial history

Received 22 August 2015; accepted 17 February 2016.


Creative Commons License
This paper is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

I see you, you see me: Mobile advertisements and privacy
by Theodore Book and Chris Bronk.
First Monday, Volume 21, Number 3 - 7 March 2016
http://firstmonday.org/ojs/index.php/fm/article/view/6154/5215
doi: http://dx.doi.org/10.5210/fm.v21i3.6154




