Digital inclusion and data profiling
by Seeta Peña Gangadharan

First Monday

Abstract
In the United States, digital inclusion policies designed to introduce poor people, communities of color, indigenous people, and migrants (collectively, “chronically underserved communities” or “the underserved”) to the economic, social, and political benefits of broadband lie in tension with new practices and techniques of online surveillance. While online surveillance activity affects all broadband users, members of chronically underserved communities are potentially more vulnerable to the harmful effects of surveillant technologies. This paper examines specific examples of commercial data profiling against a longer history of low–tech data profiling of chronically underserved communities. It concludes by calling for issues of online privacy and surveillance to punctuate digital inclusion discourse. Until this happens, digital inclusion policies threaten to bring chronically underserved communities into online worlds that, as Gandy (2009) argued, reinforce and exacerbate social exclusion and inequalities.

Contents

Introduction
Three cases of low–tech data profiling
Surveillance in the twenty–first century: Commercial data profiling and the underserved
Rethinking digital inclusion
Conclusion

 


 

Introduction

Since the 1990s, digital inclusion discourse has come a long way in addressing the role of social context and social infrastructures in making Internet access meaningful. Scholars such as Dailey, et al. (2010), Selwyn (2004), Hargittai (2002), Warschauer (2002), and DiMaggio and Hargittai (2001) demonstrated that going online requires more than a live wire into the home. According to these works, digital inclusion requires attention to individual skills and know–how as well as social and community support systems, and it manifests in various modes of access. These works helped to expand digital inclusion policies in order to better serve the poor, communities of color, migrants, and indigenous groups (collectively, “chronically underserved communities” or “the underserved”) [1].

With few exceptions (Eubanks, 2011; Sandvig, 2003, 2006; Viseu, et al., 2004), the study of digital inclusion has yet to engage with issues of privacy and surveillance that are also a marker of digitally integrated life. Data profiling, which in general terms involves sorting through data, discovering patterns and relationships within the data, and making predictive determinations of behavior (McClurg, 2003), is greatly enhanced by the software that enables electronic commerce and online transactions. Profiles created from online behavior are frequently analyzed alongside databases that track off–line behavior. Though not all modern–day data profiling is done for ill, digital tools transform and extend the ability of state and corporate actors to use data profiles for persuasive, coercive ends. As legal scholars and social scientists like Turow (2005), Solove (2006), Hoofnagle, et al. (2010), and Ayenson, et al. (2011) have argued, the ease of tracking personal data confuses notions of privacy, creating non–transparent, asymmetric power relations between the profilers and the profiled in political, social, and economic contexts.

Unfortunately, much of the scholarly literature on online privacy tends to focus on the impacts of surveillant digital technologies upon the average consumer (Turow, et al., 2009) or, as legal discourse terms it, the “reasonable person” [2]. But in a world where social prejudice can easily be grafted onto digital tools, members of chronically underserved communities are potentially prime candidates for exploitation: twenty–first century data profiling preys on those with few or no resources to identify and challenge abusive practices by state or corporate perpetrators. With this in mind, the goal of this paper is twofold. First, it tilts discussions about online privacy in a historical direction, connecting low–tech and present–day, high–tech instances of data profiling of poor people and, especially in the United States, communities of color. Second, it folds this longer history of data profiling into discussions about digital inclusion, which tend to view access to broadband technologies as a path to a positive, prosperous future.

The paper proceeds in three main parts. I begin by briefly reviewing three instances of surveillance which have impacted the political freedom (racial profiling), economic well–being (redlining), and health (medical profiling) of the underserved. The paper then considers commercial data profiling — specifically data profiling by lenders, brokers, and credit information companies involved in the subprime mortgage crisis — as examples of these practices extending into an online context. The third section suggests that the problem of data profiling and the mortgage crisis relates to larger questions as to what digital inclusion means to policy–makers and whether digital inclusion can be automatically associated with positive social impacts. The paper concludes by calling for better coordination between policy–making in the areas of digital inclusion and online privacy. Digital inclusion policies threaten to bring chronically underserved communities into online worlds that, as Gandy (2009) argued, reinforce and exacerbate social exclusion and inequalities.

 

++++++++++

Three cases of low–tech data profiling

Throughout American history, the problem of corporate and state surveillance and exploitation of chronically underserved communities has appeared in stark terms and provoked moral outrage. Among the best–known cases are those of racial profiling, redlining, and medical profiling. Each represents an instance in which corporate or state actors used social sorting techniques to classify individuals and groups and exclude them from political, social, and economic well–being.

Racial profiling

The first of these, racial profiling, refers to the categorizing, monitoring, and control of individuals based on racial or ethnic characteristics, usually under the pretense of maintaining social order [3]. In the 1990s, racial profiling became part of public discourse in the wake of a series of court cases and journalistic exposés concerning the excessive targeting of Blacks and other minorities in drug interdiction efforts.

The use of profiling in relation to the drug trade in the U.S. was not always racially coded. As Harris (2002) showed, the originator of profiling techniques, former Florida sheriff Bob Vogel, never intended for race to be a factor in determining which motorists were most likely to fall into the category of drug offender. Vogel’s interest was in criminal profiling, which involves generating an empirically accurate portrait of drug trafficking and drug traffickers and making predictions based on those data. Criminal profiling depends on reliable statistical information in order to anticipate certain forms of illicit activity.

According to Harris, however, when several divisions of the United States Drug Enforcement Administration (DEA) applied Vogel’s idea to their work, they grafted racist cultural norms onto what might otherwise have been laudable profiling techniques. DEA manuals for drug interdiction depicted minorities as offenders in training material for law enforcement. DEA intelligence reports began classifying types of drug offenders along racial lines, alerting agents to look out for racially identifying features, such as type of hair and skin color. These interpretations diverged wildly from crime statistics. As Harris (1999) reported, in the 1990s, “hit rate” statistics for vehicular searches were the same for African Americans and Whites. For pat–down searches (in airports), “hit rates” were lower for African Americans and Latinos than for Whites.

Throughout the 1980s and into the 1990s, racial profiling practices grew. As documented in Harris’s work, a high–profile court proceeding in New Jersey revealed that in 1993 and 1994, Black motorists accounted for only 13.5 percent of all motorists on the road, yet 35 percent of motorists who were stopped, ticketed, and arrested were Black. Later research produced by a task force in the office of the New Jersey Attorney General found that in 1997 and 1998, 40 percent of motorists who were stopped, ticketed, and arrested were minorities. Subsequent evidence uncovered by Harris demonstrated the scope of the problem nationwide, affecting law enforcement practices in states including Maryland, Michigan, Florida, and Colorado.

For communities of color, racial profiling has had both individual and community–wide consequences. Minority motorists subject to stop–and–search procedures experience embarrassment, shock, humiliation, and anger. The threat of police harassment serves as a form of control or way of curtailing minorities’ freedom of movement. In addition, as Gandy (2009) has explained, racial profiling in the 1980s, 1990s, and continuing today causes cumulative disadvantage. Blacks and other minorities who are perceived (inaccurately) to be drug offenders are stopped, searched, ticketed, and arrested at higher rates, thereby increasing baseline crime statistics. Racial profiling inspires mistrust and disrespect of law enforcement by minority communities, exacerbating tensions and inviting further transgressions. As a result, racial profiling has the power to take myths about race and turn racial differences in criminality into reality (Gandy, 2009; Schauer, 2003). Altogether, racial profiling serves as a basic illustration of how surveillant, sorting techniques combine with prejudices to exclude minorities, constrain their behavior, and deepen political inequalities.

Redlining

Redlining is another form of low–tech data profiling that demonstrates the exclusionary, surveillant practices of state and corporate actors. Redlining, a form of residential segregation, has been associated with long–term effects on the economic well–being of communities. It arose in the early twentieth century, embraced first by private lenders in the housing industry (Hillier, 2003) and later by regulatory bodies and public–private lending associations. It entails calculating the risk of investments in residential areas, using occupation, income, and ethnicity as some of the key variables in the determination of real estate values.

The term came about when the Home Owners’ Loan Corporation (HOLC) started to create risk–based real estate maps during the Great Depression. Jackson’s (1980) study of urban development and urban flight details how HOLC appraisers collected demographic and other data in order to draw red lines around neighborhoods with a high risk assessment (and thus low investment prospects). Red lines signaled lenders to stay away and avoid granting mortgage loans. These were Black, ethnic, and/or poor neighborhoods, designated as already in a state of decline. By comparison, green lines represented first or “A” neighborhoods, which were experiencing growth spurts and populated by American business and professional men. HOLC used blue lines to represent “B” neighborhoods that were desirable but had already matured. These neighborhoods, risk appraisers anticipated, would remain stable for years and represented good investments. Yellow lines, the third or “C” category, denoted neighborhoods that were beginning to decline.

Throughout the 1940s, as residential segregation expanded, governmental and non–governmental directives reinforced redlining logic. Housing associations drew up residential covenants to prevent homeowners from selling properties to African Americans and other so–called undesirables. The Federal Housing Administration (FHA), another New Deal housing agency, communicated to lenders and housing associations by creating another set of maps that demarcated Black neighborhoods (whereby a single Black–owned home designated a neighborhood as Black) and by proffering segregationist advice. In documents it distributed, FHA wrote: “If a neighborhood is to retain stability, it is necessary that properties shall continue to be occupied by the same social and racial classes.” [4]

The categorization of neighborhoods that constituted redlining created path dependencies that exacerbated racial and class divisions. An extensive body of research shows the negative impact that redlining has had on the ability of Black and other poor, ethnic neighborhoods to become economically self–sufficient and prosperous. Jackson’s (1985) seminal study, for example, linked the problem of redlining to the flight of middle–class, White populations from urban centers. In conjunction with the post–war housing boom, favorable terms for long–term mortgages and risk appraisals made suburban living more attractive than urban residence. New housing developments could literally control the racial composition of neighborhoods and keep minorities away. As suburbanization took root, urban neighborhoods fell further into decline.

Thus, just as racial profiling caused cumulative disadvantage (Gandy, 2009), so too has redlining. The use of income, race, and ethnicity as key determinant factors in the calculation of real estate values has had long–term consequences for the economic health of poor, minority communities. State and corporate actors relied on low–tech data profiling with exclusionary consequences for the underserved.

Medical profiling

The third case of non–digital surveillance and exploitation concerns medical profiling by state actors. The best–known case of medical profiling is the Tuskegee experiment, in which the United States Public Health Service (USPHS) researched the effects of syphilis on poor Black men from the rural South.

As documented by Brandt (1978), the USPHS, which expanded the study from an initial philanthropist–funded investigation, demonstrated extreme prejudice in its choice of study participants. The lead USPHS investigator, Taliaferro Clark, questioned the extent to which African American men were equals worthy of careful, ethical treatment as medical subjects, and he justified the importance of the experiment on presuppositions about his Black subjects. African Americans, according to Clark, were promiscuous and possessed low intelligence. Peers affirmed Clark’s racist justifications for the study. One leading doctor in the field of venereal disease, O.C. Wenger, wrote to Clark, saying: “‘We must remember we are dealing with a group of people who are illiterate, have no conception of time, and whose personal history is always indefinite’.” [5]

Soon after, Clark initiated what would become a four–decade–long study of more than 600 poor Black sharecroppers. From the outset, the experiment involved intense monitoring and deception. The USPHS coaxed subjects to participate by advertising treatment for “bad blood,” a generic term used to refer to any number of sicknesses, including syphilis; the “treatment” was in fact a placebo. Over 40 years, as researchers waited for subjects to die, USPHS subjected study participants to regularly administered placebos while conducting spinal taps to chart the course of syphilitic symptoms. Once subjects died, researchers offered to cover burial expenses in order to guarantee familial compliance with post–mortem analyses.

Historical research suggests that the Tuskegee experiment was part of a larger effort by the USPHS to understand and control sexually transmitted diseases. As Roy’s (1995) examination shows, the agency deceived other vulnerable communities into participating in studies related to the development of commercial vaccines. At a federal correctional facility in Terre Haute, Indiana, USPHS recruited “volunteers” to receive injections of gonorrhea (Mahoney, et al., 1946). In Guatemala, a country effectively controlled by the U.S. government and the United Fruit Company in the 1940s, USPHS infected inmates of Guatemala City’s Central Penitentiary by arranging for prostitutes who had tested positive for syphilis, or who had been deliberately exposed to it, to offer their services in the prison (Reverby, 2011). In the 1950s in New York, USPHS again conducted studies on prisoners and syphilis, though on a smaller scale (82 subjects, as opposed to the more than 1,500 in Guatemala), with a specific focus on inoculation (Reverby, 2011).

These multiple USPHS experiments demonstrate the systematic harms of a form of profiling. Researchers identified and targeted disempowered people in the rural Black South and in prisons, both domestic and foreign. They used initial data collected about the prevalence of sexually transmitted disease in Alabama and then mined subjects’ bodies for the sake of medical advancement. As Reverby (2008) has shown, these practices of medical profiling continue to affect how and whether poor, and especially Black, populations trust medical researchers and medical practice more broadly. Racism in medical research has thus played a role in fomenting cumulative disadvantage that affects the health and physical well–being of the underserved.

 

++++++++++

Surveillance in the twenty–first century: Commercial data profiling and the underserved

In today’s environment, as policy–makers push to get chronically underserved communities online (U.S. Federal Communications Commission [FCC], 2010), digitally enabled forms of corporate and state surveillance are shaping — and constraining — individual and collective behavior in new ways. As Solove (2006) explained, these harms are multiple in nature, though often interrelated. For poor people, migrant workers, indigenous groups, and communities of color, the possibility of digital data profiling creates “the risk that a person might be harmed in the future.” [6] In this section, I examine credit profiling, a specific form of digital data profiling, and evidence of its harm to homeowners, specifically African American and Latino subprime mortgage borrowers. Credit profiling continues a tradition of corporate and state use of social sorting techniques to exploit and exclude the underserved, albeit in a new, digitally sophisticated manner.

Credit scores and credit debt

Credit scoring, or the numerical classification of an individual’s credit files and creditworthiness, has path–determining consequences for economic and political well–being, both at the individual and the community level. As detailed in Pasquale’s (forthcoming) book, credit scoring and ranking systems now use software to help automate the categorization of consumers. Algorithmic formulas classify an individual along a scale of low to high risk. Moreover, the sophistication of credit analytics is correlated with an increasing lack of recourse for consumers. Many consumers simply do not know why they have been categorized in a particular way and cannot act to modify their behavior to arrive at a more favorable credit score.
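
To illustrate the kind of automated categorization at issue, consider a minimal sketch of a scoring model. The variables, weights, and thresholds below are invented for illustration and do not correspond to any actual credit bureau’s model; real systems are proprietary, far more complex, and, as noted above, opaque to the consumers they classify.

import math

# Hypothetical model coefficients: positive weights raise estimated risk,
# negative weights lower it. Invented for illustration only.
RISK_WEIGHTS = {
    "late_payments": 0.9,       # count of late payments on file
    "utilization": 0.6,         # share of available credit in use (0-1)
    "account_age_years": -0.4,  # longer histories read as lower risk
}

def risk_score(record):
    """Weighted sum of attributes, squashed onto a 0-1 risk scale."""
    z = sum(w * record.get(k, 0.0) for k, w in RISK_WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash: higher = riskier

def classify(record):
    """Map a score onto discrete categories used to route offers."""
    score = risk_score(record)
    if score > 0.7:
        return "high risk"    # e.g., routed toward subprime offers
    elif score > 0.4:
        return "medium risk"
    return "low risk"

# The consumer sees only the verdict, not the weights that produced it.
print(classify({"late_payments": 3, "utilization": 0.8, "account_age_years": 1}))

Even in this toy version, the consumer sees only the output category; without access to the weights, there is no way to know which behavior to change, which is precisely the opacity problem Pasquale describes.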

For chronically underserved communities, the rise of credit analytics comes at a time when many are being drawn into a credit–based economy in greater numbers. Reports drawing on the U.S. Federal Reserve’s Survey of Consumer Finances (Board of Governors of the Federal Reserve System, 2009) and issued by the U.S. Joint Economic Committee (2009) and the National Association for the Advancement of Colored People (NAACP, 2010) demonstrate how deregulatory changes in the banking and finance industries in the 1990s opened the credit card business to a wider range of populations in the United States. That opening has brought a greater burden of debt to those in lower income brackets.

Using data from 2003 and 2004, Wheary and Draut (2007) showed that credit card companies disproportionately penalize poorer, minority card holders as compared to wealthier, White households. For example, many households earning less than US$10,000 per year pay interest rates of 20 percent, five times the rate paid by those in the US$100,000 household income bracket. These poorer households spend more than 40 percent of their income to pay off debt. The same authors found that African American and Latino households have seen steady increases in credit card debt. Meanwhile, according to Congressional documents, credit card debt among African Americans and Latinos grew by 20 percent and 48 percent, respectively, between 2001 and 2007 (United States, 2009).

Credit profiling and subprime lending

As chronically underserved communities have turned to credit, corporate actors have begun using credit analytics to profile and exploit low–scoring credit holders. Nowhere is this clearer than in the subprime mortgage crisis. Throughout the late 1990s and 2000s, the subprime industry took advantage of predictive technologies to identify consumers with low credit ratings (and hence presumed high–risk borrowers). Using new digital tools, the mortgage industry targeted members of underserved communities for the purchase of first–time subprime mortgages, home equity loans, and refinanced mortgages [7].

Although not all subprime lending qualifies as predatory lending [8], scholarly research, journalistic accounts, and court documents show that various actors within the mortgage industry engaged in predatory lending. For example, Fisher (2009) showed how lenders compiled mortgage histories about prospective borrowers, triangulating credit scores with other types of publicly available data. Because neighborhood and geographical data correlate with race, Fisher (2009) observed that “racial targeting can be easily accomplished without knowing the race of individuals,” [9] a proxy mechanism sketched below.
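
A minimal sketch can make this proxy mechanism concrete. In the hypothetical code below, individual records carry no race field at all; joining them with public, geography–keyed demographic data is enough to concentrate a campaign on minority neighborhoods. All names, ZIP codes, and thresholds are invented for illustration.

# Individual credit records: note there is no race field.
borrowers = [
    {"name": "A", "zip": "19132", "credit_score": 580},
    {"name": "B", "zip": "19103", "credit_score": 585},
    {"name": "C", "zip": "19132", "credit_score": 710},
]

# Public demographic data (e.g., census releases) keyed by geography.
pct_minority_by_zip = {"19132": 0.92, "19103": 0.18}

def subprime_targets(records, min_minority_share=0.5, max_score=620):
    """Select low-score borrowers in majority-minority ZIP codes.
    No individual's race is ever looked up; geography stands in for it."""
    return [
        r for r in records
        if r["credit_score"] <= max_score
        and pct_minority_by_zip.get(r["zip"], 0.0) >= min_minority_share
    ]

print([r["name"] for r in subprime_targets(borrowers)])  # -> ['A']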

Indeed, race was a key variable in subprime borrowing. Been, et al. (2009) found that in 200 metropolitan areas, Blacks were three times more likely than Whites to receive a subprime loan for first–time home purchases. Latinos were not far behind, at 2.6 times more likely than Whites. Within the New York metropolitan area alone, the authors discovered that “less than 8% of the first lien home purchase loans issued to white borrowers were high–cost, compared to over 40% of the loans issued to black borrowers, and over 30% of the loans issued to Hispanic borrowers.” [10] Moreover, in neighborhoods that were racially and ethnically homogeneous — i.e., in predominantly Black or predominantly Latino neighborhoods — rates of high–cost subprime loans were much higher than in more diverse residential areas.

Off–line and online marketing

As first demonstrated by Fisher (2009) and the Center for Digital Democracy (CDD, 2007), once credit profiles were merged with other data, such as demographic data, to identify prospective borrowers, different actors in the mortgage industry created and used data profiles to sway these prospects into making a purchase. That is, lenders worked both directly and indirectly with brokers, credit information companies, Web analytics companies, and search engines to merge online and off–line data about an individual and craft strategic, targeted marketing campaigns. For example, armed with (digitally compiled) information about the race of prospective borrowers, brokers and marketing firms developed telemarketing scripts and promotional techniques specifically designed to persuade Black borrowers. Brokers also developed training manuals to help staff members connect with minority populations and feign cultural sensitivity. As one former employee testified during a case against Wells Fargo, not only did staff refer to subprime mortgages as “ghetto loans” but the bank also “had software to generate marketing materials for minorities,” including materials that spoke the “‘language of African–American’.” [11]

Person–to–person interactions and traditional marketing campaigns were also augmented by online marketing and advertising. Web sites like Bankrate.com and LowerMyBills, ostensibly intended to help individuals manage their personal finances, sold and continue to sell information about their site users to other companies. For instance, as revealed in CDD’s (2007) filings to the Federal Trade Commission, these firms flashed different mortgage ads to site visitors depending on the search terms that brought users to a site and the types of comparison shopping that visitors conducted there. User behavior determined whether an ad came from a prime or a subprime lender.
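
The decision logic CDD describes can be imagined, in simplified form, as a rule that keys ad selection to referral terms and on–site behavior. The signals, page names, and ad labels below are hypothetical; CDD’s filings describe the practice, not any particular implementation.

# Hypothetical signals that a visitor may be receptive to a high-rate offer.
SUBPRIME_SIGNALS = ("bad credit", "no credit check", "refinance fast")

def pick_ad(referrer_query, pages_viewed):
    """Choose which lender's ad to show, based on the search terms that
    brought the visitor here and the pages they compared on site."""
    terms = referrer_query.lower()
    if any(signal in terms for signal in SUBPRIME_SIGNALS):
        return "ad:subprime-lender"   # high-cost offer
    if "payday-loans" in pages_viewed or "debt" in terms:
        return "ad:subprime-lender"
    return "ad:prime-lender"          # conventional offer

print(pick_ad("mortgage with bad credit", ["home-loans"]))  # ad:subprime-lender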

Though the precise effects of site visitors’ exposure to online ads and marketing campaigns are difficult to know (especially at this point in time), the scale of spending by subprime lenders and credit information companies suggests that the mortgage industry expected returns on its investments in online advertising and marketing. Subprime lenders featured among the ten biggest online advertisers in the United States in 2007. Citing Nielsen/NetRatings data, CDD showed that Low Rate Source invested US$46 million, and Countrywide nearly US$35 million, in online advertising.

Additional analysis of Nielsen/NetRatings data reveals that credit information companies, which were directly mining site visitors and funneling potential subprime borrowers to lenders, featured among the top spenders in online advertising. In 2008, Experian, one of the largest credit information companies, spent US$54 million on online advertising. In 2007, Experian and Privacy Matters, another credit information company, spent US$43 million and nearly US$17 million, respectively. In 2006, GUS, at the time the parent company of Experian, topped the list, allotting US$47 million for its online advertising campaigns. In 2005, Lower My Bills, a GUS subsidiary, also placed on Nielsen’s list, having paid approximately US$14.7 million for online advertising [12].

Players in the mortgage industry had long recognized the importance of online marketing and advertising. In 1997, when much of the software that powers data profiling was not yet in place and the commercial World Wide Web was still relatively young, the trade journal National Underwriter featured a news article on the advantages of Internet–based communication for the subprime market (Otis, 1997). One industry expert argued that reaching borrowers through the Internet was superior to face–to–face techniques. Online marketing could serve as an efficient tool for reaching, rather than excluding, underserved communities: whereas a resident of a low–income neighborhood might have trouble finding a place to learn about financial products, the expert claimed, the Internet makes finding information low–cost, easy, and stigma–free. Just three years later, a piece in the ABA Banking Journal also reflected on the question of ease (Barefoot, 2000). Online marketing made it easy and effective to target consumers for subprime loans. As the journal reported, strategic marketing customizes messages to consumers and so obviates the need for comparison shopping, at least in the eyes of the targeted consumer.

Considered as a whole, the combination of credit analytics, online marketing, and online advertising reflects a set of data profiling practices with very tangible and devastating consequences for minorities. Minorities have been hit the hardest by subprime lending practices and foreclosures. As of July 2010, three million properties had received foreclosure notices (Aaron, 2011). Kochhar, et al. (2009) were not able to correlate race and ethnicity with foreclosure data (foreclosure reports do not identify the race or ethnicity of owners), but they did investigate home ownership rates, finding that Blacks and Latinos experienced the sharpest rates of decline in recent years. In this sense, the subprime lending boom highlights the harmful consequences of mining, triangulating, and targeting. It highlights the intersection between data profiling and the exclusionary practices by which the subprime industry exploited and disempowered the underserved.

Connecting past and present data profiling practices

On its face, the case of data profiling by the subprime industry most obviously links to redlining: predatory lending associated with the selling of subprime loans is now commonly referred to as “reverse redlining” (Fisher, 2009). But the case also shares broad similarities with the earlier, low–tech history of profiling and exploitation of chronically underserved communities. In particular, it evidences themes seen before: those in power purport to know what is best for these groups, and they can and will steer the underserved toward the choices defined for them. Whether in the area of economics, politics, or health, communities of color, poor populations, indigenous groups, and migrants face formidable challenges in defining themselves, acting autonomously both as individuals and as communities, and taking control over the different life paths before them.

Though it is too early to assess the full set of consequences of data profiling by the subprime industry, we can speculate that such surveillant efforts will be associated with cumulative disadvantage. Like Tuskegee, where the egregiousness of the experiment built deep mistrust of the medical profession and medical research among African Americans; like racial profiling, where the pervasiveness of stop–and–search procedures has correlated with mistrust, confrontation, and inflated crime statistics applied to minorities; and like redlining, in which residential segregation expanded in conjunction with the rise of urban ghettos, data profiling by corporations in the digital age threatens to condemn chronically underserved communities to further credit–based woes. It is not hard to imagine credit analytics now using foreclosure status as a determinant of future economic instability and, thus, as a reason to keep subjects’ credit scores low. Low credit scores mean higher interest rates for credit cards and other loans, pushing the casualties of the subprime crisis further into a cycle of debt.

 

++++++++++

Rethinking digital inclusion

The case of credit profiling and the subprime mortgage crisis, like the low–tech examples of data profiling, speaks to the very real possibility that digital inclusion entails harms as much as benefits. That is, being digitally included means being included in social practices and activities that can harm chronically underserved communities with deep and lasting effects. For the poor, communities of color, indigenous groups, and migrants, being part of digitally mediated worlds means being made vulnerable in ways that resemble past instances of surveillance and exploitation. This does not suggest there is nothing new about digital technologies or the infrastructures on which they run. Digitally dependent surveillant technologies do work differently in how they collect, categorize, target, and, ultimately, exploit users. But as these technologies become central to the current economy, old forms of prejudice and injustice can be grafted onto new tools.

More importantly, this discussion highlights the need for a more capacious understanding of the nature of digital inclusion and exclusion. The history of mistreatment detailed above suggests that it may be more helpful to frame discussions about the digital divide with reference to conditions of inequality that run throughout society, rather than to assume a qualitative difference between online and off–line worlds. This point echoes Eubanks’ (2011) assessment of digital inclusion policies and projects. In a study of a community technology program at a local women’s organization, Eubanks found that the women she worked with and studied felt the most pressing barrier to digital inclusion was “power, privilege, and oppression,” not whether one could create a Web site, gain access to a home computer, or send an e–mail message. For these reasons, surveillant digital technologies should be of special concern to digital inclusion policy–making. As social creations, new digital tools reflect, reinforce, and exacerbate inequalities throughout society (Gandy, 2009).

Digital inclusion policies and practices remain trained on a bounded set of online activities and experiences that entertain only the positive aspects of digitally mediated worlds (Mossberger, et al., 2008; U.S. Federal Communications Commission, 2010) [13]. They envision a future in which all individuals are digital citizens who benefit from access to the Internet and improve their chances of economic prosperity, political visibility, education, and so forth. In other words, a conventional framework of digital inclusion prepares individuals for participation in idyllic online worlds. But such visions are blind to established histories of state and corporate surveillance and exploitation of chronically underserved communities. Until policy–makers begin a frank discussion of how to account for the benefits and harms of online worlds and confront the need to protect collective and individual privacy online, oppressive practices will continue.

One way to avoid the utopianism of digital inclusion rhetoric is to rethink digital inclusion with reference to the concepts of internal exclusion and external exclusion. These terms, appropriated from Iris Marion Young’s (1990, 2000) work, begin to address the complexity of what participation and incorporation into online worlds entail for the poor, communities of color, migrants, and indigenous groups. As members of chronically underserved communities attempt to go online, they will encounter barriers to meaningful involvement both before and within online worlds. External exclusion most closely links up with conventional discussions about the digital divide and refers to conditions of social, economic, and political life that prevent one from adopting digital tools. A trickier problem, however, concerns internal exclusion: once incorporated into digitally mediated society, the underserved face additional barriers to fair and just participation. Commercial data profiling counts as one of those barriers to meaningful involvement, and until adequate protections are put into place to curb its harmful applications, those on the wrong side of the digital divide will remain vulnerable to social, economic, and political exploitation.

 

++++++++++

Conclusion

There are indications that civil society groups working at the intersection of technology access and other social causes, such as racial justice, economic justice, and community health, have begun to tackle the complexity of digital inclusion vis–à–vis surveillance and privacy concerns. Groups like VozMob, People’s Production House, and Philadelphia FIGHT [14] work with chronically underserved communities, helping them develop literacy skills while sharing knowledge about issues related to online privacy and surveillance.

The last of these groups, for example, has begun to incorporate learning about online privacy tools into the basic Internet skills training curriculum it offers its constituents, who consist primarily of individuals living with HIV or AIDS. Meanwhile, People’s Production House and VozMob are working with migrant populations to use mobile phones in ways that allow them to document their daily lives while protecting their own anonymity and that of the people they interact with. While these groups’ teaching materials do not represent a comprehensive treatment of surveillance, their existence evidences a willingness to explore the contradictions of digital inclusion more thoroughly.

Digital inclusion enthusiasts can take cues from these nascent discussions on the ground. The groups described above recognize that systematic uses of new technologies are as likely to create social exclusion as to promote equality and social justice. They are confronting the complexity of what digital inclusion means in ways that anticipate when and where surveillance may interfere with meaningful involvement in digitally mediated worlds. Until regulators and policy practitioners are willing to develop a historically informed and pragmatic understanding of digital inclusion, the harmful effects of profiling and surveillance will undermine the positive aspects of bringing the underserved online.

 

About the author

Seeta Peña Gangadharan is a senior research fellow with the Open Technology Institute at the New America Foundation and a visiting fellow with the Information Society Project at Yale Law School. She is broadly interested in the conditions for democratic communication policies and democratic communication practices. Her most recent work examines the nature and meaning of inclusion in digitally mediated societies.
E–mail: gangadharan [at] newamerica [dot] net

 

Notes

1. By identifying groups as such, I do not wish to essentialize them, take these groups for granted, or ignore the diversity within such groups. For reasons of space, however, I do not elaborate on the history of chronically underserved communities.

2. Holmes, 1881, p. 111.

3. Racial profiling, as Maclin (1998) suggested, has a long history that extends back to colonial America. As early as the seventeenth century, militia patrols watched over the activities of slaves, requiring the bondsmen to carry permission slips to travel and engage in activities such as selling their owners’ goods (Hadden, 2001).

4. Quoted in Jackson, 1980, p. 436.

5. Brandt, 1978, p. 21.

6. Solove, 2006, p. 487.

7. In basic terms, subprime loans feature an annual percentage rate three percentage points or more above the rate on comparable U.S. Treasury securities. For example, if the comparable Treasury rate is five percent, a mortgage at eight percent APR or higher would count as subprime. Subprime loans may also involve adjustable rates, such that interest payments grow higher over time.

8. Broadly speaking, predatory lending here means that lenders prey on consumers with little financial knowledge.

9. Fisher, 2009, p. 114.

10. Been, et al., 2009, p. 364.

11. Quoted in Fisher, 2009, p. 120.

12. CDD mentions credit information companies but pays attention to only a single year.

13. See also the digital inclusion provider, One Economy, http://www.one-economy.com.

14. See http://vozmob.net, http://peoplesproductionhouse.org, and http://www.fight.org.

 

References

Kat Aaron, 2011. “Putting a face on the financial crisis,” Investigative Reporting Workshop (21 July), at http://americawhatwentwrong.org/story/putting-human-face-financial-crisis/, accessed 6 September 2011.

Mika Ayenson, Dietrich J. Wambach, Ashkan Soltani, Nathan Good, and Chris J. Hoofnagle, 2011. “Flash cookies and privacy II: Now with HTML5 and etag respawning” (29 July), at http://ssrn.com/abstract=1898390, accessed 5 October 2011.

Jo Ann S. Barefoot, 2000. “How can banks avoid ‘channel discrimination’?” ABA Banking Journal, volume 92, number 3, pp. 33–36.

Vicki Been, Ingrid Ellen, and Josiah Madar, 2009. “The high cost of segregation: Exploring racial disparities in high–cost lending,” Fordham Urban Law Journal, volume 36, pp. 361–393.

Allan M. Brandt, 1978. “Racism and research: The case of the Tuskegee syphilis study,” Hastings Center Report, volume 8, number 6, pp. 21–29. http://dx.doi.org/10.2307/3561468

Center for Digital Democracy, 2007. “Supplemental statement in support of complaint and request for inquiry and injunctive relief concerning unfair and deceptive online marketing practices” (1 November). Washington, D.C.: Center for Digital Democracy.

Dharma Dailey, Amelia Bryne, Alison Powell, Joe Karaganis, and Jaewon Chung, 2010. Broadband adoption in low–income communities. Brooklyn, N.Y.: Social Science Research Council, at http://www.ssrc.org/programs/broadband-adoption-in-low-income-communities/, accessed 12 April 2012.

Paul DiMaggio and Eszter Hargittai, 2001. “From the ‘digital divide’ to ‘digital inequality’: Studying Internet use as penetration increases,” Princeton University, Center for Arts and Cultural Policy Studies, Working Paper, number 15, at http://www.princeton.edu/~artspol/workpap15.html, accessed 12 April 2012.

Virginia Eubanks, 2011. Digital dead end: Fighting for social justice in the information age. Cambridge, Mass.: MIT Press.

Linda E. Fisher, 2009. “Target marketing of subprime loans: Racialized consumer fraud and reverse redlining,” Brooklyn Journal of Law and Policy, volume 18, number 1, pp. 101–135.

Oscar H. Gandy, 2009. Coming to terms with chance: Engaging rational discrimination and cumulative disadvantage. Burlington, Vt.: Ashgate.

Sally E. Hadden, 2001. Slave patrols: Law and violence in Virginia and the Carolinas. Cambridge, Mass.: Harvard University Press.

Eszter Hargittai, 2002. “Second–level digital divide: Differences in people’s online skills,” First Monday, volume 7, number 4, at http://firstmonday.org/article/view/942/864, accessed 5 October 2011.

David A. Harris, 2002. Profiles in injustice: Why racial profiling cannot work. New York: New Press.

David A. Harris, 1999. “The stories, the statistics, and the law: Why ‘driving while black’ matters,” Minnesota Law Review, volume 84, number 2, pp. 265–326.

Amy E. Hillier, 2003. “Redlining and the Home Owners’ Loan Corporation,” Journal of Urban History, volume 29, number 4, pp. 394–420. http://dx.doi.org/10.1177/0096144203029004002

Chris J. Hoofnagle, Jennifer King, Su Li, and Joseph Turow, 2010. “How different are younger adults from older adults when it comes to information privacy attitudes and policies?” at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1589864, accessed 10 May 2011.

Oliver Wendell Holmes, 1881. The common law. Boston: Little, Brown.

Kenneth T. Jackson, 1985. Crabgrass frontier: Suburbanization of the United States. New York: Oxford University Press.

Kenneth T. Jackson, 1980. “Race, ethnicity, and real estate appraisal: The Home Owners Loan Corporation and the Federal Housing Administration,” Journal of Urban History, volume 6, number 4, pp. 419–452.

Rakesh Kochhar, Ana Gonzalez–Barrera, and Daniel Dockterman, 2009. Through boom and bust: Minorities, immigrants and homeownership. Washington, D.C.: Pew Hispanic Center, at http://www.pewhispanic.org/2009/05/12/through-boom-and-bust/, accessed 13 October 2011.

Tracey Maclin, 1998. “Race and the Fourth Amendment,” Vanderbilt Law Review, volume 51, number 2, pp. 333–393.

John F. Mahoney, C.J. Van Slyke, J.C. Cutler, and H.L. Blum, 1946. “Experimental gonococcic urethritis in human volunteers,” American Journal of Syphilis, Gonorrhea, and Venereal Diseases, volume 30, pp. 1–39.

Andrew J. McClurg, 2003. “A thousand words are worth a picture: A privacy tort response to consumer data profiling,” Northwestern University Law Review, volume 98, number 1, pp. 63–143.

Karen Mossberger, Caroline J. Tolbert, and Ramona S. McNeal, 2008. Digital citizenship: The Internet, society, and participation. Cambridge, Mass.: MIT Press.

National Association for the Advancement of Colored People (NAACP), 2010. “Economic justice toolkit,” at http://naacp.3cdn.net/7ec67dbb5d3e89a083_wkm6vt3dt.pdf, accessed 12 April 2012.

L.H. Otis, 1997. “Regulator: Net ‘redlining’ unlikely,” National Underwriter (5 May), at http://www.lifehealthpro.com/1997/05/05/regulator-net-redlining-unlikely, accessed 5 October 2011.

Frank Pasquale, forthcoming. Evolving toward obscurity: From credit history to scoring to analytics.

Susan M. Reverby, 2011. “‘Normal exposure’ and inoculation syphilis: A PHS ‘Tuskegee’ doctor in Guatemala, 1946–1948,” Journal of Policy History, volume 23, number 1, pp. 6–28. http://dx.doi.org/10.1017/S0898030610000291

Susan M. Reverby, 2008. “Inclusion and exclusion: The politics of history, difference, and medical research,” Journal of the History of Medicine and Allied Sciences, volume 63, number 1, pp. 103–113. http://dx.doi.org/10.1093/jhmas/jrm030

Benjamin Roy, 1995. “The Tuskegee Syphilis Experiment: Biotechnology and the administrative state,” Journal of the National Medical Association, volume 87, number 1, pp. 56–67.

Christian Sandvig, 2006. “The Internet at play: Child users of public Internet connections,” Journal of Computer–Mediated Communication, volume 11, number 4, pp. 932–956, and at http://jcmc.indiana.edu/vol11/issue4/sandvig.html, accessed 12 April 2012.

Christian Sandvig, 2003. “Public Internet access for young children in the inner city: Evidence to inform access subsidy and content regulation,” Information Society, volume 19, number 2, pp. 171–183. http://dx.doi.org/10.1080/01972240309461

Frederick F. Schauer, 2003. Profiles, probabilities, and stereotypes. Cambridge, Mass.: Belknap Press of Harvard University Press.

Neil Selwyn, 2004. “Reconsidering political and popular understandings of the digital divide,” New Media & Society, volume 6, number 3, pp. 341–362. http://dx.doi.org/10.1177/1461444804042519

Daniel Solove, 2006. “A taxonomy of privacy,” University of Pennsylvania Law Review, volume 154, number 3, pp. 477–560. http://dx.doi.org/10.2307/40041279

Joseph Turow, 2005. “Audience construction and culture production: Marketing surveillance in the digital age,” Annals of the American Academy of Political and Social Science, volume 597, number 1, pp. 103–121. http://dx.doi.org/10.1177/0002716204270469

Joseph Turow, Jennifer King, Chris J. Hoofnagle, Amy Bleakley, and Michael Hennessy, 2009. “Contrary to what marketers say, Americans reject tailored advertising and three activities that enable it,” at http://ssrn.com/abstract=1478214, accessed 10 May 2011.

U.S. Board of Governors of the Federal Reserve System, 2009. Changes in U.S. family finances from 2004 to 2007: Evidence from the survey of consumer finances. Washington, D.C.: Federal Reserve Board, at http://www.federalreserve.gov/pubs/bulletin/2009/pdf/scf09.pdf, accessed 12 April 2012.

U.S. Congress. Joint Economic Committee, 2009. Vicious cycle: How unfair credit card practices are squeezing consumers and undermining the recovery: A report (12 May). Washington, D.C.: Joint Economic Committee, at http://jec.senate.gov/public/, accessed 12 April 2012.

U.S. Federal Communications Commission (FCC), 2010. National broadband plan: Connecting America. Washington, D.C.: Federal Communications Commission, at http://www.broadband.gov/, accessed 12 April 2012.

Ana Viseu, Andrew Clement, and Jane Aspinall, 2004. “Situating privacy online: Complex perceptions and everyday practices,” Information, Communication & Society, volume 7, number 1, pp. 92–114. http://dx.doi.org/10.1080/1369118042000208924

Mark Warschauer, 2002. “Reconceptualizing the digital divide,” First Monday, volume 7, number 7, at http://firstmonday.org/article/view/967/888, accessed 5 October 2011.

Jennifer Wheary and Tamara Draut, 2007. Who pays? The winners and losers of credit card deregulation. New York: Demos.

Iris M. Young, 2000. Inclusion and democracy. Oxford: Oxford University Press.

Iris M. Young, 1990. Justice and the politics of difference. Princeton, N.J: Princeton University Press.

 


Editorial history

Received 16 October 2011; accepted 5 April 2012.


“Digital inclusion and data profiling” by Seeta Peña Gangadharan is licensed under a Creative Commons Attribution–NonCommercial–NoDerivs 3.0 Unported License.

Digital inclusion and data profiling
by Seeta Peña Gangadharan
First Monday, Volume 17, Number 5 - 7 May 2012
http://firstmonday.org/ojs/index.php/fm/article/view/3821/3199
doi:10.5210/fm.v17i5.3821




