Marking the 25th anniversary of the “digital divide,” we continue our metaphor of the digital inequality stack by mapping out the rapidly evolving nature of digital inequality using a broad lens. We tackle complex, and often unseen, inequalities spawned by the platform economy, automation, big data, algorithms, cybercrime, cybersafety, gaming, emotional well-being, assistive technologies, civic engagement, and mobility. These inequalities are woven throughout the digital inequality stack in many ways including differentiated access, use, consumption, literacies, skills, and production. While many users are competent prosumers who nimbly work within different layers of the stack, very few individuals are “full stack engineers” able to create or recreate digital devices, networks, and software platforms as pure producers. This new frontier of digital inequalities further differentiates digitally skilled creators from mere users. Therefore, we document emergent forms of inequality that radically diminish individuals’ agency and augment the power of technology creators, big tech, and other already powerful social actors whose dominance is increasing.
Introducing emergent inequalities in the information age
Accessibility as a human right
Platform economy and digital labor
Big data and algorithms
Digital intersections with criminal justice and security
Civic engagement and mobility
Well-being and the life course
3.0: Implications of emergent digital inequalities
Introducing emergent inequalities in the information age
We continue our examination of emergent inequalities in the Information Age in this two-article series marking the 25th anniversary of the “digital divide” (U.S. National Telecommunications and Information Administration (NTIA), 1995). In the first article, “Digital inequalities 2.0: Legacy inequalities in the information age,” we introduced the concept of the “digital inequality stack.” The digital inequality stack captures the complex layers that must all work together to produce digital inclusion. As we have shown, legacy digital inequalities remain present in the stack including economic class, gender, sexuality, race and ethnicity, aging, disability, healthcare, education, rural residency, networks, and global geographies.
From this broad perspective, it is evident how rapidly digital inequalities are becoming implicated in every field of human endeavor and, more importantly, how they leave those without resources ever further behind. From educational institutions to policy-makers to non-profit organizations, no one has been able to level the playing field or rein in the ever-widening advantages conferred on those with digital resources. On the contrary, as we show, each technological “advance” gives birth to new disparities and social problems as digital resources insinuate themselves into our daily lives.
Therefore, in this second article, we add additional layers to the digital inequality stack by mapping out the rapidly evolving nature of digital inequality using a broad lens. We tackle complex, and often unseen, inequalities spawned by the platform economy, automation, big data, algorithms, cybercrime, cybersafety, civic engagement, mobility, gaming, emotional well-being, and assistive technologies. These inequalities are woven throughout the digital inequality stack in many ways, including differentiated access, use, and consumption; literacies and skills; and production.
Further, though we live in an age where mobile digital technologies are increasingly pervasive, emergent inequalities are growing in terms of power and production. While many users are competent prosumers who nimbly work within different layers of the stack, very few individuals are “full stack engineers” able to create or recreate digital devices, networks, and software platforms as pure producers. New sorting mechanisms generated by big data and algorithms are not fully transparent to their targets, or to any but the most sophisticated professionals generating them. This new frontier of digital inequalities further differentiates digitally skilled creators from mere users.
Therefore, we document emergent forms of inequality that radically diminish individuals’ agency and augment the power of technology creators, big tech, and other already powerful social actors. This is a fundamental shift from categorical and institutional inequalities to radically different inequalities that could not exist in the absence of the Internet. As sociologists and social scientists, we must understand the origins of the power increasingly concentrated in the hands of those who create and deploy technologies at the highest levels, from the economy to the incarceration system to the digital public sphere. Accordingly, we ask sociologists, social scientists, and those in positions of power to consider digital resources as human rights and primary goods that must be used to benefit humanity rather than to create emergent digital bonds detrimental to us all.
Accessibility as a human right
Access, as the foundational layer of the digital inequality stack, is increasingly recognized in terms of human rights. In several countries of the world, Internet access is regarded as a human right. Such statutes or policy declarations are known to exist in Finland, Costa Rica, Estonia, Greece, and France. With most government services and corporate networks now operating from Internet platforms, citizen use of these platforms is often taken for granted. But these statutory or operational norms cannot be realized without broadband access at affordable costs.
The Alliance for Affordable Internet (A4AI) leaves no doubt about the need for public policies and commercial rates that can enable citizens the world over to use the Internet: “For the 50 percent of the world unable to connect, the greatest barrier remains affordability. Across Africa, the average cost for just 1GB data is approximately seven percent of the average monthly salary. In some countries, 1GB costs as much as 20 percent of the average salary ... .” (Alliance for Affordable Internet, 2019b). Therefore, according to the Alliance for Affordable Internet, broadband access should be a right rather than a luxury. Writing in their recent “Affordability report,” the Alliance for Affordable Internet states: “Not only a pathway to information, communication, and economic opportunity, the Internet is increasingly necessary to access basic commercial and public services.” (Alliance for Affordable Internet, 2019a).
This perspective on the vital importance of affordable Internet access is supported by the International Telecommunication Union’s (ITU) Broadband Commission for Sustainable Development. In its 2018 report, the Commission focused on access as a means of speeding up sustainable development globally. It noted that while 2018 marked a milestone when half the world’s population gained a measure of online access, there was still a great deal to be done. “Advances in mobile broadband (such as 4G and 5G) and in next generation satellite technologies,” the Commission indicated, “will mean the delivery of digital services more quickly and reliably, with implications for the future of eHealth, transportation, education and disaster relief.” (International Telecommunication Union, 2018).
The notion of using wireless broadband to meet perceived needs for global south development has a long pedigree. A 2003 publication by infoDev, the Wireless Internet Institute, and the United Nations Information and Communication Technology Task Force, entitled “The wireless Internet opportunity for developing countries,” called for wireless deployment as a quick and cheap option for developing countries to get online. “The promise of wireless Internet technologies has generated much interest on the part of the international development community. While in developed nations these technologies have been associated with mobility applications and local area networking in homes and offices, their most intriguing application in developing nations is the deployment of low-cost broadband infrastructure and last mile distribution.” (infoDev, 2003)
Overall, it would appear that the lack of equitable and affordable access to data and to the Internet remains an important barrier to global development. This lack of access is joined by other global inequality issues. High-level policy initiatives, especially on the part of the Federal Communications Commission (FCC) in the U.S., aim at discontinuing the doctrine of net neutrality. Such initiatives threaten to scuttle equitable access speeds and cost parity to the detriment of weaker and economically disadvantaged Internet users. Indeed, Pickard and Berman, in their 2019 book After net neutrality: A new deal for the digital age, advocate a reframing of the threat to net neutrality. In their view, net neutrality is more than a conflict between digital leviathans such as Google and Internet service providers like Comcast. Rather it is part of a much wider project to commercialize the public sphere and undermine the free speech essential for democracy (Pickard and Berman, 2019).
It appears clear that disparities in Internet access and usage still abound. A return to Internet public policy-making on the scale of the World Summits on the Information Society (WSIS) may be necessary to redress the many global impediments to broad-based, affordable Internet for development among all demographic groups and global regions, and to combat digital inequalities on an international scale.
Platform economy and digital labor
Another global phenomenon is the rise of digital platforms — from Facebook and YouTube to Uber, Upwork, and Amazon Mechanical Turk — and their impact on the world of work, as they provide on-demand earning opportunities through technology-driven intermediation. In principle, the effects on inequalities of such platforms are ambiguous (Sundararajan, 2016; Hoang, et al., 2020). They may widen participation of underrepresented groups into labor markets historically shaped by gender, race, and class divisions. But platform labor is still for the most part unregulated and consists largely of low-paid, unstable, and unprotected work activities — which may reinforce existing gaps or generate new vulnerabilities.
To disentangle the net impact of platform-mediated work, it is useful to break it down into its different forms. While varying typologies have been proposed (Berg, et al., 2018), a key distinction is between projects or “gigs” and shorter, fragmented “tasks.” Examples of the former comprise driving passengers, delivering purchases, and designing a company’s logo, while examples of the latter include tagging objects in images, transcribing bits of text, taking pictures of products in shops, and flagging some online content as adult or inappropriate (Tubaro, et al., 2020). Gigs and tasks can be either location-based (necessitating physical presence in a given place) or online-only (allowing remote execution).
Location-based gig platforms such as Airbnb and Uber have attracted a great deal of popular attention, in light of their technology-based efficiency gains as well as their promise to include a diverse workforce — such as drivers previously excluded from the highly regulated taxicab industry. However, participation reproduces some biases inherited from the broader gender and race divides (and stereotypes) of our society, resulting for example in more women cleaning and more men driving and delivering (van Doorn, 2017). Of note, a specific crowding-out effect raises income inequality among the bottom 80 percent of the distribution (Schor, 2017): well-educated people who hold full-time jobs in addition to platform labor engage in manual activities such as cleaning, moving, and driving, which were traditionally left to workers with low educational attainment.
More prominently, platforms produce a shift in the balance of power between capital and labor. The practice of classifying providers as independent contractors rather than employees deprives them of welfare benefits, social protections, pension contributions, and training opportunities that in industrial countries mitigate life-long effects of labor-capital inequalities. Likewise, opaque methods of “algorithmic management” produce information asymmetries and surveillance that restrict workers’ autonomy (Rosenblat, 2018), to the advantage of the platform and its clients.
To a great extent, these considerations extend to gigs that are performed entirely online on platforms like Upwork by freelance subcontractors such as graphic designers, software developers, translators, and other “virtual” workers (Huws, 2003). These activities open market opportunities to professionals in emerging and low-income economies (Lehdonvirta, et al., 2019) and can thus be construed as a remedy to global rather than local asymmetries. Nevertheless, they generate competition between workers worldwide, driving down remunerations and shifting bargaining power toward clients — usually tech companies based in developed countries. Geography plays a role that, “rarely bolster(s) both the structural and associational power of workers” (Graham and Anwar, 2019).
Less-qualified tasks that are provided through platforms such as Amazon Mechanical Turk are known as “micro-work” (Irani, 2015; Ekbia and Nardi, 2017; Tubaro and Casilli, 2019). These tasks can be online-only (for example, labeling images) or location-based (taking pictures in shops). While requiring only an Internet connection and minimal digital literacy, micro-work still attracts relatively highly qualified providers (Berg, et al., 2018), often excluded from the formal labor market owing for example to family duties or disability (Gray and Suri, 2019). Gender differentials persist: more women micro-work in countries (like the U.S.) where this activity constitutes a supplementary source of income, while more men micro-work in countries (like India) where it represents a primary source of income (Ipeirotis, 2010).
As with other forms of digital platform labor, widened access does not significantly contribute to closing the income gap. Remunerations for micro-tasks can be as low as a few cents, and hourly pay rates are well below minimum wage (Hara, et al., 2018). While micro-work is a recent phenomenon and its long-run effects on providers’ personal and professional trajectories are yet unknown, the repetitive nature of tasks, volatility in their availability, and anonymity of individuals in the “crowd,” hinder any effort to consolidate skills and to accumulate human capital to develop stable careers.
Online-only micro-work maintains strong disparities related to geographic location, whereby workers in emerging nations are most affected by uneven Internet connectivity, time zones, language, security, and pay mechanisms. Conscious of these global asymmetries, digital platform workers acknowledge a lack of transparency and internalize their activity as a “global digital sweatshop,” mirroring other relatively low-status occupations such as sex work, fast-food work, or low-level agricultural and farming jobs (Martin, et al., 2016).
In-between gigs and tasks is what we can call “social networked labor,” a set of activities that provide and qualify content and data for social media. It includes underpaid or non-remunerated activities such as production, moderation, and annotation of videos, images, or text. The instability of this type of activity renders it precarious, while professionals who perform it are often downgraded to simple “users” (van Dijck, 2009). Key motivations for this type of digital labor are the opportunity to build a portfolio toward employability (Kuehn and Corrigan, 2013) and to access a global audience, especially for workers residing in emerging countries (Roberts, 2019). It remains to be seen whether these efforts ultimately pay in terms of reducing inequalities grounded in gender, culture, or geography.
Overall, platform-mediated digital labor diminishes labor power relative to capital and fails to level the playing field between workers in the emerging and developed world, although its effects on other axes of social inequality are more ambivalent. Contractual stability, whether achieved via reclassification or through a special status for platform workers, does not suffice to curb the tendency to consider “humans as a service” (Prassl, 2018). More promising solutions are being progressively put in place by workers and activists to increase their autonomy and bargaining power (Graham and Woodcock, 2018), and to recognize and leverage the potential for skills formation on platforms (Margaryan, 2019). Until such changes occur, however, the platform economy and digital labor remain embedded in the digital inequality stack.
Automation also threatens to exacerbate digital inequalities in new ways at the intersection of education, the workforce, and the economy. Increasingly, industrial applications of smart technologies will eliminate many middle-skill jobs that feature repetitive tasks (Autor, 2015). Yet, as of 2019, areas of the world with advanced digital technologies lack employees trained to take advantage of them. For example, a 2019 survey by MIT’s Sloan School of Management and the Boston Consulting Group found that 70 percent of business executives in the U.S. reported little to no impact from their automation projects. That parallels a 2016 report by the European Commission (EC) suggesting that only 1.7 percent of European businesses are poised to realize the full potential of advanced digital technologies, while 41 percent are not positioned to capitalize on them at all.
The obstacle is the lack of a properly trained workforce. Even though the EC report forecast that 90 percent of jobs by 2025 will require some digital skills, 47 percent of European employees do not possess them. The pattern is similar in the U.S., where 71 percent of current jobs require medium- to high-level digital skills, up from 45 percent in 2002. Consequently, both the EU and the U.S. are encouraging greater emphasis on science, technology, engineering, and mathematics (STEM) education in areas such as artificial intelligence, design thinking, nanotechnology, and robotics.
However, while that training will be key for those entering the workforce, its absence can create severe dislocation for those already in the workforce. That is especially true for those lacking digital skills, many of whom work in jobs featuring the repetitive tasks that automation will replace. Workers left on the wrong side of the digital skill divide will suffer the consequences of this global shift. As Autor cautions: “... the ability of the U.S. education and job training system (both public and private) to produce the kinds of workers who will thrive in these middle-skill jobs of the future can be called into question” (Autor, 2015).
One way to avoid being left on the wrong side of the digital skill divide is to become proficient in so-called “soft skills” resistant to automation. Research by the MIT-IBM Watson AI Lab reveals that individuals who can master tasks that require “common sense, judgment, intuition, creativity, and spoken language” will be highly valued by corporations in the future (MIT-IBM Watson AI Lab, 2019). Still, the increasing move toward automation shows how new elements must be included in the digital inequality stack as digital technologies become more deeply entrenched in the workplace.
Big data and algorithms
Big data and AI add new dimensions to the digital inequality stack due to the asymmetrical control of data and privacy, the fast-growing automated production and distribution of services and resources, and the underrepresentation of marginalized voices in algorithmic infrastructure. Big data and AI profoundly shape prosperity, security, law and order, and the future of work, bringing both great opportunities and great challenges. Thus, it is vital to understand their built-in inequalities, potential biases, and long-term, yet often hidden, impacts on resource allocation.
Big data fundamentally challenges privacy rights and can create new inequalities. Privacy erosion has rendered individuals, especially the structurally or culturally marginalized, under the watchful eyes of big and small brothers ranging from nation state governments to big and small tech firms (Chen, et al., 2018). Facial and voice recognition are being introduced in both authoritarian and democratic societies. Many users are concerned but not well informed. For instance, about three-quarters of adult American Facebook users did not know that the site collected data on user traits and interests for advertisers (Pew Research Center, 2019).
In addition, individuals’ data can be used to maintain existing inequalities and create new inequalities with mechanisms such as dynamic pricing, social credit scores, and behavior targeting (Chen, 2019). Thus, big data and AI can generate algorithmic inequalities through new sorting metrics, reflecting the opaque demands of commercial and governmental interests.
Big global tech firms often have treasure troves of data perhaps greater than those of most nation-state governments. Such data are hidden from public view. This is especially troubling as algorithms are increasingly used to make consequential decisions with high-stakes implications: credit and loan qualification, child protection, hiring and promotion in jobs, healthcare and insurance, and even prison sentencing. The potential built-in racial, gender, class, and other biases in big data and algorithms have been shown to be dangerous and destructive across many life realms (Barocas and Selbst, 2016; Noble, 2018; McClain, 2019).
In sum, the combination of big data and algorithms can lead to both the reinforcement of legacy digital inequalities and the generation of emergent forms of digital inequality. Guarded by proprietary data, patents, and non-disclosure agreements, algorithmic black boxes have rendered discrimination and exploitation invisible to public scrutiny. Finally, another consequence of black box algorithms is the intellectual debt accumulated through unexplained knowledge that undervalues the understanding of causal effects (Zittrain, 2019).
Digital intersections with criminal justice and security
Another layer in the digital inequality stack relates to risk, particularly vulnerability to cybercrime and surveillance. The anonymous environment of cyberspace allows for new avenues of criminal activity by increasing the modalities of criminality and the range of victims, from individuals to governments to corporations. Not only have the populations of both aspiring criminals and potential victims increased over time, but the continuous improvements that make ICTs ever faster, cheaper, and easier to use have reduced the technical skills required for cybercrime while vastly complicating cybersecurity measures.
As these crimes continue to rise each year and account for billions of dollars in annual losses (U.S. Federal Bureau of Investigation. Internet Crime Complaint Center, 2019), many network and computer breaches that give criminals access to valuable personal information are avoidable (Bellasio, et al., 2018). However, a user’s ability to guard against victimization heavily depends on their access to needed education and resources. Just as social inequalities shape digital inequalities, they also affect vulnerability to cybercrime. Connected individuals with disabilities, for example, are disproportionately subjected to harassment, stalking, bullying, and disability-related hate crimes in cyberspace (Alhaboby, et al., 2017). As we will see in the next section, cybercrime disproportionately victimizes members of disadvantaged groups, who are typically less skilled yet more likely to use unsecured (often free) networks that may expose them to cyberthreats.
Other issues of inequality stem from increasing efforts to employ technology to solve an array of urban problems. Municipalities worldwide have invested heavily in sensor networks, video surveillance, and predictive analytics to collect data about behavioral patterns such as traffic flow, pedestrian movement, and use of public services (Eubanks, 2017; Monahan, 2018). Yet these efforts also may produce harmful consequences for urban residents, especially members of socially disadvantaged groups. For example, the 2014 effort to replace New York City payphones with citywide Wi-Fi was ostensibly intended to increase access to digital technology, but it also exposed lower-income residents, those who most relied on the open, unsecured networks, to security breaches and mass surveillance (Hornbeck, 2018). Moreover, increasing use of automated eligibility systems for social services, which often are integrated across multiple programs and agencies, forces lower-income residents to decide between giving up sensitive personal information with no assurance that it will be protected, or maintaining their privacy by not applying for much-needed assistance (Eubanks, 2017).
Law enforcement agencies increasingly employ data-mining techniques to track, to help solve, and even to predict crimes (Hassani, et al., 2016). Software for assessing an offender’s risk of reoffending is routinely used in courtrooms in the U.S. (Angwin, et al., 2016), and data from body cameras and gunshot detection devices are frequently treated as objective evidence of sound police work (Merrill, 2017). Yet, studies of the efficacy of such technologies have uncovered low success rates (Dror and Mnookin, 2010; Merrill, 2017), as well as biases based on race, ethnicity, gender, and socioeconomic status (Angwin, et al., 2016; McClain, 2019; Noble, 2018; Nunn, 2001). Using facial imagery and DNA-based technologies to uncover crime patterns, for example, has disproportionately focused on people of color, thereby reinforcing the criminalization of minority group members and increasing their risk of stigmatization (Machado and Granja, 2020; Skinner, 2020).
The tendency for ICT-related surveillance to target economically disadvantaged communities (Nunn, 2001) also increases the likelihood of crime detection in those areas, as well as rates of prosecution and imprisonment of the residents (Brayne, 2017; Monahan, 2017). Criminal conviction, in turn, leads to exclusion from digital life in ways that hinder a former offender’s successful reintegration into society (Toreld, et al., 2018), including the inability to keep up with technological advances and to maintain the kind of digital profile that may be used in an employment background check. This targeting thus generates yet another layer of digital inequalities that may be opaque to its victims.
As we saw in the previous section, cybervictimization incidents are prevalent worldwide; they affect close to 40 percent of adult Internet users in Greece, New Zealand, the U.S., Switzerland, and Taiwan (Zhou, 2017), with consequences at the individual, institutional, and governmental levels (Anderson, et al., 2013). Therefore, cybersafety is another layer in the digital inequality stack closely related to cybercrime.
Cybersafety skills and risk management are capital-enhancing digital activities. As more regular tasks and resources depend on digital technologies (from our thermostats to critical infrastructures), individuals and groups are increasingly exposed to greater digital risks (Livingstone, et al., 2015; Dodel and Mesch, 2019). Here, legacy digital inequalities play out in new ways because skills are unequally distributed across populations, with significant disparities according to socio-economic status, age, disability, and gender (Büchi, et al., 2017; Dodel and Mesch, 2019).
In this way, cybersafety skill gaps have implications for new permutations of digital inequalities. Technological skills are necessary to maintain protective software, manage privacy settings, and practice password hygiene. Informed and knowledgeable cybersafety practices comprise a diverse set of behaviors and preventative measures. These include but are not limited to: anti-spyware software adoption (Liang and Xue, 2010), password practices (Ur, et al., 2016), adequate privacy and sharing configurations (Büchi, et al., 2017; Park, 2013), identity theft prevention, and behaviors specific to children’s online safety (Büchi, et al., 2017; Dodel and Mesch, 2019). Such cybersafety skills confer an advantage to those who have them and to children in their care.
The effects of gender and age on cybersafety are complex. On the one hand, women and older users tend to report lower levels of technological self-efficacy and digital skills (van Deursen, et al., 2016) that could facilitate cybersafety practices. On the other hand, women and older users also express heightened perceptions of vulnerability and anticipate more severe and lasting consequences of threats (Box, et al., 1988; Sacco, 1990) that could increase their engagement in preventive behaviors (Dodel and Mesch, 2019). Whereas the latter has — paradoxically — some positive effects for cybersafety, both instances reflect the consequences of inequalities suffered by vulnerable social groups.
However, there is no countervailing effect of motivation on cybersafety behaviors when it comes to the effects of socioeconomic disadvantage. Skills gaps are often compounded by economic barriers, given that many programs, software packages, and services have prohibitively high costs that further amplify socioeconomic disparities. Further, lower socioeconomic status (SES) individuals often lag behind in the adoption of cybersafety behaviors because of educational skill gaps (Büchi, et al., 2017; Dodel and Mesch, 2018, 2019). Indeed, socioeconomic disparities are statistically significant predictors of cybersafety behaviors that are mediated through digital skills, cognitive beliefs, routine activities, and parental oversight of children’s Internet use (Dodel and Mesch, 2019; Reyns, et al., 2016; Leukfeldt and Yar, 2016; Arachchilage and Love, 2014; Hanus and Wu, 2016).
Civic engagement and mobility
Two other areas of vulnerability in the digital inequality stack are civic engagement and mobility. The relationship between community, political, or civic engagement and information and communication technologies (ICTs) was first studied before the smartphone era. This first wave examined whether increased use of ICTs drew one’s interests away from community, political, or civic engagement and other social behaviors (Katz and Rice, 2002) as opposed to facilitating new avenues for connection (Hampton, 2001; Haythornthwaite and Wellman, 1998). The results of empirical work overwhelmingly demonstrated that ICTs increased contacts and the broadening of one’s civic networks (Boase, et al., 2006) as well as provided an avenue to find information about participating at the local and national levels (Stern and Dillman, 2006).
However, as with other kinds of digital inequality, the benefits of digital resources for civic engagement redounded disproportionately to those who were already well endowed economically (Hampton and Wellman, 2003), enjoyed higher education levels (Chadwick, 2012), and were already involved in civic or political groups (Stern and Adams, 2010). Early research, largely conducted before widespread smartphone adoption, assumed that the rapid adoption of Internet use and broadband diffusion would reduce disparities. However, as time passed and disparities in ICT usage for community, political, or civic engagement did not diminish, these persistent empirical differences led scholars to study civic engagement in relation to education, skills, and proficiencies. For example, Mossberger, et al. (2007) identified gaps in digital citizenship related to income, race, education, and age. Horrigan, et al. (2004) studied the importance of technological skill in engaging in political discussion via listservs; others documented similar trends such as obtaining campaign and voting information (Stern and Rookey, 2013).
However, it remains to be seen what effect mobility will have on civic engagement in an era in which increasing numbers of people perform Internet searches, social networking, and routine tasks on mobile devices. Researchers have theorized about the degree to which mobility can close civic engagement gaps. Some empirical results show that engagement is fostered via intuitive design with stakeholders or “citizen interaction design” (Lampe, 2018). Future work is needed to understand if the benefits translate to other areas of engagement that can mitigate disparities related to age (Gil de Zúñiga and Chen, 2019) and the ability to identify misinformation (Yamamoto, et al., 2018). As mobility increases, the relationships between ICTs and community, political, or civic engagement will continue to warrant scrutiny.
Gaming is another evolving dimension of digital inequality. While earlier literature relegated gaming to the margins of capital-enhancing activities, this assumption is being challenged. A growing body of research examines the positive impacts of playing digital games across a wide variety of topics and contexts. Therefore, we may need to reevaluate our understanding of the role that digital games play in both the proliferation and amelioration of digital inequalities in modern society.
Digital games have the potential to impart numerous educational, career, and psychological benefits. Therefore, we must also consider the possible ramifications of unequal distribution of and access to digital games. For instance, roughly 36 percent of households in the U.S. do not contain a gaming device (Entertainment Software Association, 2018). Among the 64 percent of American households that do report owning a device to play video games, there is substantial variability in the kinds and quality of devices used to play games. For example, the majority of respondents play games on tablets or smartphones (60 percent), while only 41 percent report playing on a personal computer (Entertainment Software Association, 2018). Such device variability has inequality ramifications that are only now being recognized, as evidence mounts that playing games on a computer increases computer self-efficacy in ways that console ownership does not (Ball, et al., 2018).
Furthermore, monetization methods increasingly employed within the digital gaming industry have inequality implications. The industry is moving away from traditional “one-time” purchases towards “games-as-service” or “freemium” models in which players continually spend money on games, pricing out those who cannot afford continuous or subscription services. More specifically, approximately 23 percent of players spend money on microtransactions, which indicates that the majority of gamers are either unwilling or unable to participate in this new games-as-service economy (NPD Group, 2016). Microtransactions can give players in-game benefits, advantaging or disadvantaging players in these digital spaces based on SES (Švelch, 2017). Likewise, as people long to belong, they may feel pressured, but unable, to purchase in-game items and cosmetics that confer social status (Walton and Pallitt, 2012). For example, one study found that players feel pressure to purchase microtransactions when confronted with players who have purchased them (Evers, et al., 2015).
While some might dismiss gaming inequalities as peripheral to well-being, Pugh’s (2009) research makes a strong case that young people forge social relationships, connections, and meaning through consumer purchases. Device variability also impacts the benefits and consequences of digital gameplay over the long term (Ball, et al., 2020). Youths priced out of keeping pace with device acquisition or services may thereby be priced out of social inclusion (Walton and Pallitt, 2012). Finally, yet other literature (Bergstrom, 2012; Aarsand, 2007) examines the replication of social identity norms in gaming settings, thus opening the door to other kinds of social exclusion in the gaming layer of the digital inequality stack.
Well-being and the life course
Well-being is one of the newest additions to the digital inequality stack, with implications across the life course. Recently, scholars have begun focusing on disparities in tangible off-line benefits from ICT use (van Deursen and van Dijk, 2019; van Deursen, et al., 2016). Psychosocial outcomes, such as emotional well-being, loneliness, depression, and satisfaction with support, are among the tangible outcomes of ICT use that researchers are increasingly investigating.
Young people experience diminished well-being when learning digital skills entails heightened stress, or “emotional costs” (Huang, et al., 2015). Emotional costs act as a mediating factor between different levels of the digital inequality stack, from access to skill acquisition to a sense of self-efficacy. In educational settings, students experience anxiety when they lack digital skills shared by their peers; this anxiety may diminish positive attitudes towards digital technologies and even impede learning digital skills. Young people who lack digital resources also experience stigma, shame, and social isolation when they are cut off from their peers on social media and cannot play the “identity curation game.” For digitally disadvantaged young people, connectivity gaps prohibit enacting idealized social media identity curation and identity management, leading to negative emotions including frustration, shame, embarrassment, and longing.
At the other end of the life course, research shows that the use of ICTs promotes social connectedness and reduces social isolation and loneliness among older adults (Chopik, 2016; Sum, et al., 2008). However, class inequalities also matter for ICT use and psychosocial well-being among older adults: Helsper and van Deursen’s (2017) study suggests that the quality of social support that people receive is unequally distributed. Older adults, as “digital immigrants,” may feel offended or ostracized when younger generations, as “digital natives,” engage with ICTs around them. In this sense, digitally disadvantaged people are more likely to resist digital engagement and less likely to translate their ICT use into off-line benefits. Even when seniors reap the social benefits of ICT use with geographically distant social ties, they may feel disconnected from geographically close social ties, a phenomenon labeled the physical-digital divide (Ball, et al., 2019).
Across the life course, but particularly for older adults, research shows that using ICTs for social purposes is associated with better physical and psychological health, with reduced loneliness mediating the effects of ICT use on these positive outcomes (Chopik, 2016). ICT use also facilitates mattering, an individual’s belief that they are important, acknowledged, and relied upon by others, which in turn reduces negative psychosocial outcomes such as loneliness and depression. Finally, returning to the off-line-online feedback loop relating to well-being, experiencing difficulty in using ICTs may become a catalyst for people to seek assistance from social ties. This may in turn indirectly increase their social interactions and promote their psychosocial well-being (Francis, et al., 2018).
We close this examination of emergent digital inequalities by looking at assistive technologies, which are an important phenomenon for health and well-being, with implications for the digital inequality stack. Assistive technologies are of increasing interest across diverse fields including sociology, health, and education. They can enhance well-being in key domains including education and health (Freeman and Quirke, 2013). However, as with other forms of digital inequality, assistive technologies do not always deliver their full potential benefits when individuals lack the resources or the skills, or both, to take advantage of them (Bach, et al., 2013).
In addition to cost, Lazar and Jaeger (2011) present several key challenges to the use of assistive technologies that may aid diverse populations: 1) platforms and technologies are generally not designed as assistive technologies; 2) technological accommodations for one group may not meet the needs of another group; 3) individuals may lack training or social networks to help them; and, 4) the rapid pace of change may make assistive technologies quickly obsolete and discontinued for market reasons, despite their social benefit. Other challenges for ICT adoption for vulnerable populations may also impact the use of assistive technologies. These include but are not limited to stigma (Parette and Scherer, 2004), agency (Robinson, 2020), ethical anxiety (Yusif, et al., 2016) and privacy acquisition strategies (Robinson and Gran, 2018).
Nonetheless, two important studies show that progress can be made, especially through education and media mastery (Rice, et al., 2018). Waller’s (2016) work on assistive technologies in education illuminates the experiences of visually impaired students in Jamaica and Barbados, for whom ICTs act as significant tools that enable stigma-free participation in information acquisition. Another study, of children with hearing impairment in Jamaica (Morris and Henderson, 2016), shows how the implementation of digital resources in public schools bolsters participation, confidence, and learning outcomes. These studies show the potential empowerment of individuals when assistive technologies are marshalled to enhance life chances and well-being in ways only imagined by early studies of the Internet (Castells, 2001). At the same time, this work calls our attention to the continually updating layers of the digital inequality stack, while recognizing the unsolved challenges identified by early scholarship (Norris, 2001).
3.0: Implications of emergent digital inequalities
In closing, our two-part series of articles both commemorates the 25th anniversary of the “digital divide” and serves to animate future calls to action as digital inequalities become more ingrained and insidious. By extending our metaphor of the digital inequality stack, we have documented the quickly evolving nature of digital inequalities. We have illuminated new frontiers of inequality arising from the platform economy, automation, big data, algorithms, cybercrime, cybersafety, civic engagement, mobility, gaming, emotional well-being, and assistive technologies. These new dimensions of digital inequality add complexity to this multi-layered phenomenon and continue to reinforce how digital and social inequalities are interwoven. Mapping out these new forms of inequality underlines the complex ways inequalities persist within and between countries, individuals and groups, the powerful and the powerless. Therefore, we draw attention to the growing power differentials between “ordinary” citizens and the dominating powers of big tech. Indeed, as revealed, these new forms of (digital) inequality tend to lessen individuals’ agency while enhancing the power of technology creators, big tech, and other already powerful social actors.
We therefore call for additional work resisting the widening gap between the digital oligarchy and the digital underclass (Ragnedda, 2020). This task is critical, lest the advent of tomorrow’s digital technologies reinforce rather than mitigate already existing social inequalities. At the same time, we must also bear in mind that inequalities of all kinds are not “natural facts” but the cumulative results of economic, political, and ideological choices. These choices are simultaneously big and small, individual and collective, formal and informal, quotidian and extraordinary. They embody tensions that seem both within our grasp and yet increasingly out of reach. If big tech and the digital oligarchy continue to profit from and de facto regulate every aspect of our digital lives, the digital inequality stack will continue to grow along multiple axes to the detriment of the many. Therefore, as social scientists, we need to reinforce the idea that digital resources are new civil and human rights that must be promoted and cultivated. All stakeholders — individuals, groups, grassroots movements, policy-makers, and industry leaders of conscience — must ensure that the benefits and profits of each technological advance serve humanity rather than become tools of social reproduction in the hands of the increasingly few.
About the authors
Laura Robinson is Associate Professor in the Department of Sociology at Santa Clara University. She earned her Ph.D. from UCLA, where she held a Mellon Fellowship in Latin American Studies and received a Bourse d’Accueil at the École Normale Supérieure. Robinson has served as Visiting Assistant Professor at Cornell University and as Chair of CITAMS (2014–2015). Her research has earned awards from CITASA, AOIR, and NCA IICD. In addition to digital inequalities, Robinson’s work explores interaction and identity work, as well as media in Brazil, France, and the U.S.
Direct comments to: laura [at] laurarobinson [dot] org
Jeremy Schulz is Researcher at the UC Berkeley Institute for the Study of Societal Issues and a Fellow at the Cambridge Institute. He has also served as an Affiliate at the UC San Diego Center for Research on Gender in the Professions and a Council Member of the ASA Section on Consumers and Consumption. Previously, he held an NSF funded postdoctoral fellowship at Cornell University after earning his Ph.D. at UC Berkeley. He has also done research and published in areas including digital sociology, theory, qualitative research methods, work and family, and consumption.
E-mail: jmschulz [at] berkeley [dot] edu
Hopeton S. Dunn is Professor of Media and Communication in the Department of Media Studies at the University of Botswana and Senior Research Associate in the School of Communication, University of Johannesburg, South Africa. Professor Dunn served as Director of the Caribbean School of Media and Communication at the University of the West Indies, Jamaica, where he remains Academic Director of the Mona ICT Policy Centre.
E-mail: hopetondunn [at] gmail [dot] com
Antonio A. Casilli is a professor of sociology at Telecom Paris, the telecommunication school of the Institut Polytechnique de Paris, and a researcher at the Interdisciplinary Institute on Innovation (i3). His research foci are digital labor, data governance, and human rights. He is the author of the award-winning book En attendant les robots (Editions du Seuil, 2019) and one of the co-creators of the documentary mini-series Invisibles (France Télévisions, 2020) about platform workers.
E-mail: antonio [dot] casilli [at] telecom-paris [dot] fr
Paola Tubaro is Associate Research Professor at the National Centre for Scientific Research (CNRS, in French Centre National de la Recherche Scientifique). Tubaro is affiliated with the Laboratoire de Recherche en Informatique (CNRS, INRIA, and Université Paris-Saclay). At the crossroads of sociology, economics, and computer science, Tubaro’s research explores the effects of big data and machine learning on markets, organizations, and labor. Her interests also include data methodologies and research ethics.
E-mail: paola [dot] tubaro [at] inria [dot] fr
Rod Carveth is an associate professor in strategic communication at the School of Global Journalism & Communication at Morgan State University in Baltimore, Md. His research examines media economics and crisis communication. He is co-editor, with Alison Alexander, James Owers, C. Ann Hollifield, and Albert N. Greco, of Media economics: Theory and practice (first edition, L. Erlbaum Associates, 1993; second edition, L. Erlbaum Associates, 1998; third edition, Routledge, 2003), and the author of over 45 book chapters and journal articles.
E-mail: rodcarveth [at] gmail [dot] com
Dr. Wenhong Chen is associate professor of media studies and sociology at the University of Texas at Austin. Her research has focused on digital media technologies in entrepreneurial and civic settings. Dr. Chen has more than 70 publications, including articles in top-ranked journals in the fields of communication and media studies, sociology, and management. Dr. Chen’s research has received awards from the American Sociological Association, International Communication Association, and International Association of Chinese Management Research.
E-mail: wenhong [dot] chen [at] austin [dot] utexas [dot] edu
Julie B. Wiest is Associate Professor of Sociology at West Chester University of Pennsylvania. As a sociologist of culture and media, Wiest applies mainly symbolic interactionist and social constructionist perspectives to studies in three primary areas: the sociocultural context of violence, mass media effects, and the relationship between new media technologies and social change. Wiest’s latest book, The allure of premeditated murder: Why some people plan to kill (2018, Rowman & Littlefield), was co-authored with Jack Levin, who co-directs the Brudnick Center on Violence and Conflict at Northeastern University.
E-mail: jwiest [at] wcupa [dot] edu
Matías Dodel holds a Ph.D. from the Department of Sociology, University of Haifa. He is Associate Professor of Communication at Universidad Católica del Uruguay. He is the director of the Internet of People (IoP) research group, where he coordinates the Uruguayan chapters of international comparative Internet studies such as the World Internet Project, DiSTO (From Digital Skills to Tangible Outcomes), and Global Kids Online. His research interests are digital inequalities, social stratification, digital safety, and cybercrime.
E-mail: matias [dot] dodel [at] ucu [dot] edu [dot] uy
Michael J. Stern is Professor and Department Chairperson in the Department of Media + Information at Michigan State University. Stern has also served as the Director of the Web and Emerging Technologies Initiative and as a Senior Fellow at NORC at the University of Chicago. Substantively, his research focuses on theories of information seeking and digital inequality in the context of health, as well as the exclusion of marginalized and traditionally underrepresented groups in the areas of Internet usage, social media, and mobile emerging technologies.
E-mail: sternmi5 [at] msu [dot] edu
Christopher Ball is an assistant professor in the Department of Journalism at the University of Illinois at Urbana-Champaign. His research interests involve the influence of new technologies on society and how these technologies can be studied and harnessed for research, education, and outreach purposes. More specifically, his research focuses on the use of interactive media and technologies such as video games, virtual worlds, and virtual reality to foster pro-social outcomes and experiential learning across the life course.
E-mail: drball [at] illinois [dot] edu
Kuo-Ting Huang, Ph.D., is an assistant professor of Emerging Media Design & Development in the Department of Journalism at Ball State University. His research focuses on the psychological, cognitive, and affective outcomes of interactive media usage, with an emphasis on digital games and virtual/augmented reality (VR/AR). Specifically, he is interested in how these psychological mechanisms can be harnessed to create virtual reality, augmented reality, and video game experiences that promote educational and health outcomes.
E-mail: khuang2 [at] bsu [dot] edu
Grant Blank is Survey Research Fellow at the Oxford Internet Institute and Senior Research Fellow at Harris Manchester College, both University of Oxford. He received the William F. Ogburn Career Achievement award from the Communication, Information Technology and Media Sociology section of the American Sociological Association in 2015. This award recognizes a sustained body of research that has made an outstanding contribution to the advancement of knowledge in the area of sociology of communication, information technology and media sociology.
E-mail: grant [dot] blank [at] oii [dot] ox [dot] ac [dot] uk
Massimo Ragnedda (Ph.D.) is a Senior Lecturer in Mass Communication at Northumbria University, Newcastle, U.K. where he conducts research on the digital divide and social media. He is the co-vice chair of the Digital Divide Working Group (IAMCR) and co-convenor of NINSO (Northumbria Internet and Society Research Group). He has authored 12 books with his publications appearing in numerous peer-reviewed journals, and book chapters in English, Spanish, Italian, Portuguese and Russian texts. His books include: Digital capital: A Bourdieusian perspective on the digital divide (with Maria Laura Ruiu), Emerald Publishing, 2020; Digital inclusion: An international comparative analysis (co-edited with Bruce Mutsvairo), Lexington Books 2018; Theorizing the digital divide (co-edited with G. Muschert), Routledge (2017); The third digital divide: A Weberian approach to digital inequalities (2017), Routledge; The digital divide: The Internet and social inequality in international perspective (co-edited with G. Muschert) (2013), Routledge.
E-mail: massimo [dot] ragnedda [at] northumbria [dot] ac [dot] uk
Hiroshi Ono (Ph.D., sociology, Chicago; Docent, Economics, Stockholm School of Economics) is Professor of Human Resources Management at Hitotsubashi University Business School and Affiliated Professor of Sociology at Texas A&M University. He writes and speaks extensively on the relationships among motivation, happiness and productivity in the workplace, and the interplay between demographic change and labor market dynamics in Japan. His latest research focuses on Japan’s work reform, especially on reducing work hours and increasing labor productivity.
E-mail: hono [at] ics [dot] hub [dot] hit-u [dot] ac [dot] jp
Bernie Hogan (Ph.D. Toronto, 2009) is a Senior Research Fellow at the OII and Research Associate at the Department of Sociology. With training in sociology and computer science, Hogan focuses on how social networks and social media can be designed to empower people to build stronger relationships and stronger communities. Hogan has published in a wide variety of venues, including peer-reviewed sociology journals such as Social Networks, City and Community, Bulletin of Science, Technology & Society, and Field Methods.
E-mail: bernie [dot] hogan [at] oii [dot] ox [dot] ac [dot] uk
Gustavo S. Mesch is a Professor of Sociology and the Rector of the University of Haifa. His research interests are technology and society, social effects of new media, youth Internet culture, and social networks online and off-line. He is currently studying patterns of cyber fraud scams, identity theft, and the use of preventive measures, a study funded by the Ministry of Science and Technology of Israel. He has served as Chair of the ASA CITAMS section and Editor-in-Chief of Sociological Focus, the official journal of the North Central Sociological Association (U.S.).
E-mail: gustavo [at] soc [dot] haifa [dot] ac [dot] il
Shelia R. Cotten is an MSU Foundation Professor and the Associate Chair for Research in the Department of Media and Information at Michigan State University. She holds Affiliate Professor positions in the Department of Sociology and the College of Engineering. Her research examines technology use across the life course and health, workforce, education, and social impacts of this use. She is a past Chair of CITAMS and has also won the William F. Ogburn Senior Career Award and the Public Sociology Award. Beginning 1 August 2020, she will be the Associate Vice President for Research Development and a Provost’s Distinguished Professor at Clemson University.
E-mail: cotten [at] msu [dot] edu
Susan B. Kretchmer is Co-Founder and President of the not-for-profit Partnership for Progress on the Digital Divide (PPDD, http://www.ppdd.org), the only academic professional organization in the world focused solely on the digital divide and on connecting research to policy-making and practice to strategize actions and catalyze solutions to this pressing societal concern. She is also the Lead Organizer of the Partnership for Progress on the Digital Divide International Conferences series.
E-mail: Susan [dot] Kretchmer [at] ppdd [dot] org
Timothy M. Hale, Ph.D., is a medical sociologist in the Department of Kinesiology and Community Health at the University of Illinois at Urbana-Champaign. Previously, he served as Research Fellow at Partners Center for Connected Health and Harvard Medical School. His main research interest is the impact of information and communication technologies (ICTs) on health care and health lifestyles. Prior to joining the Center, he was a postdoctoral fellow at the University of Alabama at Birmingham where he studied the social and psychological impacts of ICT, focusing primarily on youth and older adults. Hale was elected as a CITASA Council Member (2012–2014). His work has been published in Information, Communication & Society; Computers and Human Behavior; Journal of Health Communication and American Behavioral Scientist.
E-mail: timhale [at] illinois [dot] edu
Tomasz Drabowicz is on the faculty at the University of Lodz where he is the Chair of the Department of Sociology of Social Structure and Social Change in the Faculty of Economics and Sociology. Dr. Drabowicz received his Ph.D. from the Department of Political and Social Sciences at the European University Institute. His research areas include social mobility and social stratification, sustainable development, digital inequalities, and new technologies and their impact on social life.
E-mail: tomasz [dot] drabowicz [at] uni [dot] lodz [dot] pl
Pu Yan is a Researcher at the Oxford Internet Institute, University of Oxford. Her doctoral research focuses on the influence of emerging ICTs on everyday information practices in rural and urban China, employing a mixed-methods approach that combines big data research and ethnography in the study of human information practices. During her 15-month fieldwork in a village and a factory in central China, she explored the adoption and domestication of ICTs in developing areas and studied how the Internet has influenced information-seeking in everyday life. Her research interests include digital divides, information-seeking practices on the Internet, mobile social media, and international comparative study of media systems.
E-mail: pu [dot] yan [at] oii [dot] ox [dot] ac [dot] uk
Barry Wellman directs the NetLab Network and is the former S.D. Clark Professor of Sociology at the University of Toronto. Prof. Wellman is a Fellow of the Royal Society of Canada. He founded the International Network for Social Network Analysis in 1976–1977. He is the Chair-Emeritus of both the Community and Information Technologies section and the Community and Urban Sociology section of the American Sociological Association. He has been a keynoter at conferences ranging from computer science to theology. He is the (co-)author of more than 200 articles that have been co-authored with more than 80 scholars, and is the (co-)editor of five books.
E-mail: wellman [at] chass [dot] utoronto [dot] ca
Molly-Gloria Harper is a graduate student in the Ph.D. sociology program at Western University. Harper’s background includes a Bachelor’s degree with honours and a Master’s degree from the University of Windsor in the field of criminology. Harper’s research interests include youth, deviance, criminology, social media, the role technology plays in society, cyberbullying, and notions of accountability.
E-mail: mharpe22 [at] uwo [dot] ca
Anabel Quan-Haase is Professor in the Department of Sociology and the Faculty of Information and Media Studies at Western University in Canada. Her research interests lie in the areas of computer-mediated communication, the networked society, social networks, and new media and social change. Her current research projects examine how young people use instant messaging, Facebook, mobile phones, and other communication tools, and what the social consequences are for their social relations, community, and social capital.
E-mail: aquan [at] uwo [dot] ca
Aneka Khilnani is currently a medical student at the George Washington University School of Medicine and Health Sciences in Washington, D.C. She completed a M.S. in physiology at Georgetown University, where she focused on preventative medicine and novel renal pharmacologics. She currently serves on the university’s medical admissions committee and internal medicine board. She is also a representative for the American Association of Medical Colleges and actively conducts research in the Dermatology Department at Children’s National Hospital. She has a special interest in telemedicine and digital inclusion. She has also served in numerous editorial positions, co-edited several volumes, and has published in the American Behavioral Scientist and Emerald Studies in Media and Communications.
E-mail: aneka [at] gwu [dot] edu
We thank Edward J. Valauskas, Chief Editor and Founder of First Monday, for the opportunity to publish our work through the journal’s pioneering open access publishing model. In the spirit of digital inclusion, we can think of no better venue to share our research. In addition, we thank the anonymous reviewers for their time and commentary, as well as Aneka Khilnani (Managing Editor) and Natalia Tolentino (Assistant Editor) for their exemplary work and service.
1. The order of authors reflects the sequence of contributions to the two-part article series as follows.
“Digital inequalities 2.0: Legacy inequalities in the information age” was co-authored by: Laura Robinson and Jeremy Schulz (Legacy inequalities in the information age: The digital inequality stack); Grant Blank (From digital divides to digital inclusion); Massimo Ragnedda (Economic class); Hiroshi Ono (Gender); Bernie Hogan (Sexuality); Gustavo Mesch (Race and ethnicity); Shelia R. Cotten (Aging); Susan B. Kretchmer (Disability); Timothy M. Hale (Healthcare); Tomasz Drabowicz (Education); Pu Yan (Rural and urban inequalities); Barry Wellman and Molly-Gloria Harper (Networked individualism); Anabel Quan-Haase (Global digital inequality); Aneka Khilnani (Digital inequalities and COVID-19); and Jeremy Schulz, Laura Robinson, and Massimo Ragnedda (2.0: Implications of legacy digital inequalities).
“Digital inequalities 3.0: Emergent inequalities in the information age” was co-authored by: Laura Robinson and Jeremy Schulz (Emergent inequalities in the information age); Hopeton S. Dunn (Accessibility as a human right); Antonio A. Casilli and Paola Tubaro (The platform economy and digital labor); Rod Carveth (Automation); Wenhong Chen (Big data and algorithms); Julie B. Wiest (Digital intersections with criminal justice and security); Matías Dodel (Cybersafety); Michael J. Stern (Civic engagement and mobility); Christopher Ball (Gaming); Kuo-Ting Huang (Well-being and the life course); Aneka Khilnani (Assistive technologies); and Jeremy Schulz, Massimo Ragnedda, and Laura Robinson (3.0: Implications of emergent digital inequalities).
P.A. Aarsand, 2007. “Computer and video games in family life: The digital divide as a resource in intergenerational interactions,” Childhood, volume 14, number 2, pp. 235–256.
doi: https://doi.org/10.1177/0907568207078330, accessed 17 June 2020.
Z.A. Alhaboby, J. Barnes, H. Evans, and E. Short, 2017. “Challenges facing online research: Experiences from research concerning cyber-victimization of people with disabilities,” Cyberpsychology, volume 11, number 1, article 8.
doi: https://doi.org/10.5817/CP2017-1-8, accessed 17 June 2020.
Alliance for Affordable Internet, 2019a. “The 2019 affordability report,” at https://a4ai.org/affordability-r, accessed 17 June 2020.
Alliance for Affordable Internet, 2019b. “Mobile broadband pricing data for Q2 2019,” at https://a4ai.org/extra/mobile_broadband_pricing_usd-2019Q2, accessed 17 June 2020.
R. Anderson, C. Barton, R. Böhme, R. Clayton, M.J.G. van Eeten, M. Levi, T. Moore, and S. Savage, 2013. “Measuring the cost of cybercrime,” In: R. Böhme (editor). The economics of information security and privacy. Berlin: Springer, pp. 265–300.
doi: https://doi.org/10.1007/978-3-642-39498-0_12, accessed 17 June 2020.
J. Angwin, J. Larson, S. Mattu, and L. Kirchner, 2016. “Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks,” ProPublica (23 May), at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed 17 June 2020.
N.A.G. Arachchilage and S. Love, 2014. “Security awareness of computer users: A phishing threat avoidance perspective,” Computers in Human Behavior, volume 38, pp. 304–312.
doi: https://doi.org/10.1016/j.chb.2014.05.046, accessed 17 June 2020.
D.H. Autor, 2015. “Why are there still so many jobs? The history and future of workplace automation,” Journal of Economic Perspectives, volume 29, number 3, pp. 3–30.
doi: https://doi.org/10.1257/jep.29.3.3, accessed 17 June 2020.
A. Bach, G. Shaffer, and T. Wolfson, 2013. “Digital human capital: Developing a framework for understanding the economic impact of digital exclusion in low-income communities,” Journal of Information Policy, volume 3, pp. 247–266.
doi: https://doi.org/10.5325/jinfopoli.3.2013.0247, accessed 17 June 2020.
C. Ball, K.-T. Huang, J. Francis, T. Kadylak, and S.R. Cotten, 2020. “A call for computer recess: The impact of activities on minority students’ technology self-efficacy,” American Behavioral Scientist (14 May).
doi: https://doi.org/10.1177/0002764220919142, accessed 17 June 2020.
C. Ball, K.-T. Huang, R.V. Rikard, and S.R. Cotten, 2019. “The emotional costs of computers: An expectancy-value theory analysis of predominantly low-socioeconomic status minority students’ STEM attitudes,” Information, Communication & Society, volume 22, number 1, pp. 105–128.
doi: https://doi.org/10.1080/1369118X.2017.1355403, accessed 17 June 2020.
C. Ball, K.-T. Huang, S.R. Cotten, and R.V. Rikard, 2018. “Gaming the SySTEM: The relationship between video games and the digital and STEM divides,” Games and Culture, volume 15, number 5, pp. 501–528.
doi: https://doi.org/10.1177/1555412018812513, accessed 17 June 2020.
S. Barocas and A.D. Selbst, 2016. “Big data’s disparate impact,” California Law Review, volume 104, number 3, pp. 671–732.
doi: http://dx.doi.org/10.15779/Z38BG31, accessed 17 June 2020.
J. Bellasio, R. Flint, N. Ryan, S. Sondergaard, C.G. Monsalve, A.S. Meranto, and A. Knack, 2018. “Developing cybersecurity capacity: A proof-of-concept implementation guide,” Rand document number RR-2072-FCO, at https://www.rand.org/pubs/research_reports/RR2072.html, accessed 17 June 2020.
doi: https://doi.org/10.7249/RR2072, accessed 17 June 2020.
J. Berg, M. Furrer, E. Harmon, U. Rani, and M.S. Silberman, 2018. Digital labour platforms and the future of work: Towards decent work in the online world. Geneva: International Labour Office, and at https://www.ilo.org/global/publications/books/WCMS_645337/lang--en/index.htm, accessed 17 June 2020.
K. Bergstrom, 2012. “Virtual inequality: A woman’s place in cyberspace,” FDG ’12: Proceedings of the International Conference on the Foundations of Digital Games, pp. 267–269.
doi: https://doi.org/10.1145/2282338.2282394, accessed 17 June 2020.
J. Boase, J.B. Horrigan, B. Wellman, and L. Rainie, 2006. “The strength of Internet ties,” Pew Research Center (25 January), at https://www.pewresearch.org/internet/2006/01/25/the-strength-of-internet-ties/, accessed 17 June 2020.
S. Box, C. Hale, and G. Andrews, 1988. “Explaining fear of crime,” British Journal of Criminology, volume 28, number 3, pp. 340–356.
doi: https://doi.org/10.1093/oxfordjournals.bjc.a047733, accessed 17 June 2020.
S. Brayne, 2017. “Big data surveillance: The case of policing,” American Sociological Review, volume 82, number 5, pp. 977–1,008.
doi: https://doi.org/10.1177/0003122417725865, accessed 17 June 2020.
M. Büchi, N. Just, and M. Latzer, 2017. “Caring is not enough: The importance of internet skills for online privacy protection,” Information, Communication & Society, volume 20, number 8, pp. 1,261–1,278.
doi: https://doi.org/10.1080/1369118X.2016.1229001, accessed 17 June 2020.
M. Castells, 2001. The Internet galaxy: Reflections on the Internet, business, and society. New York: Oxford University Press.
A. Chadwick, 2012. “Recent shifts in the relationship between the Internet and democratic engagement in Britain and the United States,” In: E. Anduiza, M.J. Jensen, and L. Jorba (editors). Digital media and political engagement worldwide: A comparative study. New York: Cambridge University Press, pp. 39–55.
doi: https://doi.org/10.1017/CBO9781139108881.003, accessed 17 June 2020.
W. Chen, 2019. “Now I know my ABCs: U.S.-China policy on AI, big data, and cloud computing,” Asia Pacific Issues, volume 140, at https://www.eastwestcenter.org/publications/now-i-know-my-abcs-us-china-policy-ai-big-data-and-cloud-computing, accessed 17 June 2020.
W. Chen, A. Quan-Haase, and Y.J. Park, 2018. “Privacy and data management: The user and producer perspectives,” American Behavioral Scientist (30 July).
doi: https://doi.org/10.1177/0002764218791287, accessed 17 June 2020.
W.J. Chopik, 2016. “The benefits of social technology use among older adults are mediated by reduced loneliness,” Cyberpsychology, Behavior, and Social Networking, volume 19, number 9, pp. 551–556.
doi: https://doi.org/10.1089/cyber.2016.0151, accessed 17 June 2020.
M. Dodel and G. Mesch, 2019. “An integrated model for assessing cyber-safety behaviors: How cognitive, socioeconomic and digital determinants affect diverse safety practices,” Computers & Security, volume 86, pp. 75–91.
doi: https://doi.org/10.1016/j.cose.2019.05.023, accessed 17 June 2020.
M. Dodel and G. Mesch, 2018. “Inequality in digital skills and the adoption of online safety behaviors,” Information, Communication & Society, volume 21, number 5, pp. 712–728.
doi: https://doi.org/10.1080/1369118X.2018.1428652, accessed 17 June 2020.
I.E. Dror and J.L. Mnookin, 2010. “The use of technology in human expert domains: Challenges and risks arising from the use of automated fingerprint identification systems in forensic science,” Law, Probability & Risk, volume 9, number 1, pp. 47–67.
doi: https://doi.org/10.1093/lpr/mgp031, accessed 17 June 2020.
H.R. Ekbia and B.A. Nardi, 2017. Heteromation, and other stories of computing and capitalism. Cambridge, Mass.: MIT Press.
Entertainment Software Association, 2018. “Essential facts about the computer and video game industry,” at http://www.theesa.com/wp-content/uploads/2018/05/EF2018_FINAL.pdf, accessed 17 June 2020.
V. Eubanks, 2017. Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.
E.R. Evers, N. van de Ven, and D. Weeda, 2015. “The hidden cost of microtransactions: Buying in-game advantages in online games decreases a player’s status,” International Journal of Internet Science, volume 10, number 1, pp. 20–36.
J. Francis, T. Kadylak, T.W. Makki, R.V. Rikard, and S.R. Cotten, 2018. “Catalyst to connection: When technical difficulties lead to social support for older adults,” American Behavioral Scientist, volume 62, number 9, pp. 1,167–1,185.
doi: https://doi.org/10.1177/0002764218773829, accessed 16 June 2020.
J. Freeman and S. Quirke, 2013. “Understanding e-democracy: Government-led initiatives for democratic reform,” Journal of E-democracy and Open Government, volume 5, number 2, pp. 141–154.
doi: https://doi.org/10.29379/jedem.v5i2.221, accessed 16 June 2020.
H. Gil de Zúñiga and H.-T. Chen, 2019. “Digital media and politics: Effects of the great information and communication divides,” Journal of Broadcasting & Electronic Media, volume 63, number 3, pp. 365–373.
doi: https://doi.org/10.1080/08838151.2019.1662019, accessed 16 June 2020.
M. Graham and M.A. Anwar, 2019. “The global gig economy: Towards a planetary labour market?” First Monday, volume 24, number 4, at https://firstmonday.org/article/view/9913/7748, accessed 16 June 2020.
doi: https://doi.org/10.5210/fm.v24i4.9913, accessed 16 June 2020.
M. Graham and J. Woodcock, 2018. “Towards a fairer platform economy: Introducing the fairwork foundation,” Alternate Routes, volume 29, number 2, pp. 242–253, and at http://www.alternateroutes.ca/index.php/ar/article/view/22455, accessed 16 June 2020.
M.L. Gray and S. Suri, 2019. Ghost work: How to stop Silicon Valley from building a new global underclass. Boston, Mass.: Houghton Mifflin Harcourt.
K.N. Hampton, 2001. “Living the wired life in the wired suburb: Netville, glocalization and civil society,” Ph.D. dissertation, Department of Sociology, University of Toronto, at https://tspace.library.utoronto.ca/handle/1807/15477, accessed 16 June 2020.
K. Hampton and B. Wellman, 2003. “Neighboring in Netville: How the Internet supports community and social capital in a wired suburb,” City & Community, volume 2, number 4, pp. 277–311.
doi: https://doi.org/10.1046/j.1535-6841.2003.00057.x, accessed 16 June 2020.
B. Hanus and Y. Wu, 2016. “Impact of users’ security awareness on desktop security behavior: A protection motivation theory perspective,” Information Systems Management, volume 33, number 1, pp. 2–16.
doi: https://doi.org/10.1080/10580530.2015.1117842, accessed 16 June 2020.
K. Hara, A. Adams, K. Milland, S. Savage, C. Callison-Burch, and J.P. Bigham, 2018. “A data-driven analysis of workers’ earnings on Amazon Mechanical Turk,” CHI ’18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, paper number 449.
doi: https://doi.org/10.1145/3173574.3174023, accessed 16 June 2020.
H. Hassani, X. Huang, E.S. Silva, and M. Ghodsi, 2016. “A review of data mining applications in crime,” Statistical Analysis and Data Mining, volume 9, number 3, pp. 139–154.
doi: https://doi.org/10.1002/sam.11312, accessed 16 June 2020.
C. Haythornthwaite and B. Wellman, 1998. “Work, friendship, and media use for information exchange in a networked organization,” Journal of the American Society for Information Science, volume 49, number 12, pp. 1,101–1,114.
E.J. Helsper and A.J.A.M. van Deursen, 2017. “Do the rich get digitally richer? Quantity and quality of support for digital engagement,” Information, Communication & Society, volume 20, number 5, pp. 700–714.
doi: https://doi.org/10.1080/1369118X.2016.1203454, accessed 16 June 2020.
L. Hoang, G. Blank, and A. Quan-Haase, 2020. “The winners and the losers of the platform economy: Who participates?” Information, Communication & Society, volume 23, number 5, pp. 681–700.
doi: https://doi.org/10.1080/1369118X.2020.1720771, accessed 16 June 2020.
E. Hornbeck, 2018. “‘We know not where we go’: Protecting digital privacy in New York City’s municipal Wi-Fi network,” Fordham Urban Law Journal, volume 45, number 3, pp. 699–760, and at https://ir.lawnet.fordham.edu/ulj/vol45/iss3/3/, accessed 16 June 2020.
J.B. Horrigan, K. Garrett, and P. Resnick, 2004. “The Internet and democratic debate,” Pew Internet & American Life Project (27 October), at https://www.pewresearch.org/internet/wp-content/uploads/sites/9/media/Files/Reports/2004/PIP_Political_Info_Report.pdf.pdf, accessed 16 June 2020.
K.-T. Huang, L. Robinson, and S.R. Cotten, 2015. “Mind the emotional gap: The impact of emotional costs on student learning outcomes,” Communication and Information Technologies Annual, volume 10, pp. 121–144.
doi: https://doi.org/10.1108/S2050-206020150000010005, accessed 16 June 2020.
U. Huws, 2003. The making of a cybertariat: Virtual work in a real world. New York: Monthly Review Press.
infoDev, Wireless Internet Institute, and the United Nations Information and Communication Technology Task Force, 2003. “The wireless Internet opportunity for developing countries,” at http://www.infodev.org/articles/wireless-internet-opportunity-developing-countries, accessed 17 June 2020.
International Telecommunication Union (ITU), Broadband Commission for Sustainable Development, 2018. “The state of broadband 2018: Broadband catalyzing sustainable development,” at https://www.itu.int/dms_pub/itu-s/opb/pol/S-POL-BROADBAND.19-2018-PDF-E.pdf, accessed 17 June 2020.
P.G. Ipeirotis, 2010. “Demographics of Mechanical Turk,” NYU Center for Digital Economy Research Working Paper, CeDER-10-01, at https://archivefda.dlib.nyu.edu/jspui/bitstream/2451/29585/2/CeDER-10-01.pdf, accessed 17 June 2020.
L. Irani, 2015. “Difference and dependence among digital workers: The case of Amazon Mechanical Turk,” South Atlantic Quarterly, volume 114, number 1, pp. 225–234.
doi: https://doi.org/10.1215/00382876-2831665, accessed 17 June 2020.
J.E. Katz and R.E. Rice, 2002. Social consequences of Internet use: Access, involvement, and interaction. Cambridge, Mass.: MIT Press.
K. Kuehn and T.F. Corrigan, 2013. “Hope labor: The role of employment prospects in online social production,” Political Economy of Communication, volume 1, number 1, at http://polecom.org/index.php/polecom/article/view/9, accessed 17 June 2020.
C. Lampe, 2018. “Citizen interaction design: Teaching HCI through service,” Interactions, volume 23, number 6, pp. 66–69.
doi: https://doi.org/10.1145/2991895, accessed 17 June 2020.
J. Lazar and P. Jaeger, 2011. “Reducing barriers to online access for people with disabilities,” Issues in Science and Technology, volume 27, number 2, pp. 69–82, and at https://issues.org/lazar/, accessed 17 June 2020.
V. Lehdonvirta, O. Kässi, I. Hjorth, H. Barnard, and M. Graham, 2019. “The global platform economy: A new offshoring institution enabling emerging-economy microproviders,” Journal of Management, volume 45, number 2, pp. 567–599.
doi: https://doi.org/10.1177/0149206318786781, accessed 17 June 2020.
E.R. Leukfeldt and M. Yar, 2016. “Applying routine activity theory to cybercrime: A theoretical and empirical analysis,” Deviant Behavior, volume 37, number 3, pp. 263–280.
doi: https://doi.org/10.1080/01639625.2015.1012409, accessed 17 June 2020.
H. Liang and Y. Xue, 2010. “Understanding security behaviors in personal computer usage: A threat avoidance perspective,” Journal of the Association for Information Systems, volume 11, number 7, article 1.
doi: https://doi.org/10.17705/1jais.00232, accessed 17 June 2020.
S. Livingstone, G. Mascheroni, and E. Staksrud, 2015. “Developing a framework for researching children’s online risks and opportunities in Europe,” EU Kids Online, at http://www.lse.ac.uk/media@lse/research/EUKidsOnline/EUKidsIV/PDF/TheEUKidsOnlineresearchframework.pdf, accessed 17 June 2020.
H. Machado and R. Granja, 2020. “Emerging DNA technologies and stigmatization,” In: H. Machado and R. Granja. Forensic genetics in the governance of crime. Singapore: Palgrave Pivot, pp. 85–104.
doi: https://doi.org/10.1007/978-981-15-2429-5_7, accessed 17 June 2020.
A. Margaryan, 2019. “Workplace learning in crowd work: Comparing microworkers’ and online freelancers’ practices,” Journal of Workplace Learning, volume 31, number 4, pp. 250–273.
doi: https://doi.org/10.1108/JWL-10-2018-0126, accessed 17 June 2020.
D. Martin, J. O’Neill, N. Gupta, and B.V. Hanrahan, 2016. “Turking in a global labour market,” Computer Supported Cooperative Work (CSCW), volume 25, pp. 39–77.
doi: https://doi.org/10.1007/s10606-015-9241-6, accessed 17 June 2020.
N. McClain, 2019. “Caught inside the black box: Criminalization, opaque technology, and the New York subway MetroCard,” Information Society, volume 35, number 5, pp. 251–271.
doi: https://doi.org/10.1080/01972243.2019.1644410, accessed 17 June 2020.
A. Merrill, 2017. “The life of a gunshot: Space, sound and the political contours of acoustic gunshot detection,” Surveillance & Society, volume 15, number 1, pp. 42–55.
doi: https://doi.org/10.24908/ss.v15i1.6305, accessed 17 June 2020.
MIT-IBM Watson AI Lab, 2019. “MIT-IBM Watson AI Lab releases groundbreaking research on AI and the future of work,” Business Insider (30 October), at https://markets.businessinsider.com/news/stocks/mit-ibm-watson-ai-lab-releases-groundbreaking-research-on-ai-and-the-future-of-work-1028645270, accessed 17 June 2020.
T. Monahan, 2018. “The image of the smart city: Surveillance protocols and social inequality,” In: Y. Watanabe (editor). Handbook of cultural security. Northampton, Mass.: Edward Elgar, pp. 210–226.
doi: https://doi.org/10.4337/9781786437747.00017, accessed 17 June 2020.
T. Monahan, 2017. “Regulating belonging: Surveillance, inequality, and the cultural production of abjection,” Journal of Cultural Economy, volume 10, number 2, pp. 191–206.
doi: https://doi.org/10.1080/17530350.2016.1273843, accessed 17 June 2020.
F. Morris and A. Henderson, 2016. “ICTs and empowerment of children with disabilities: A Jamaican case study,” Communication and Information Technologies Annual, volume 12, pp. 25–39.
doi: https://doi.org/10.1108/S2050-206020160000012003, accessed 17 June 2020.
K. Mossberger, C.J. Tolbert, and R.S. McNeal, 2007. Digital citizenship: The Internet, society, and participation. Cambridge, Mass.: MIT Press.
S.U. Noble, 2018. Algorithms of oppression: How search engines reinforce racism. New York: New York University Press.
P. Norris, 2001. Digital divide: Civic engagement, information poverty, and the Internet worldwide. New York: Cambridge University Press.
NPD Group, 2016. “PC and video games — DLC and microtransaction purchasing,” summary at https://www.npd.com/wps/portal/npd/us/news/press-releases/2016/latest-report-from-the-npd-group-provides-insight-into-gamers-purchasing-usage-and-perceptions-of-additional-gaming-content/, accessed 17 June 2020.
S. Nunn, 2001. “Cities, space, and the new world of urban law enforcement technologies,” Journal of Urban Affairs, volume 23, numbers 3–4, pp. 259–278.
doi: https://doi.org/10.1111/0735-2166.00088, accessed 17 June 2020.
P. Parette and M. Scherer, 2004. “Assistive technology use and stigma,” Education and Training in Developmental Disabilities, volume 39, number 3, pp. 217–226.
Y.J. Park, 2013. “Offline status, online status: Reproduction of social categories in personal information skill and knowledge,” Social Science Computer Review, volume 31, number 6, pp. 680–702.
doi: https://doi.org/10.1177/0894439313485202, accessed 17 June 2020.
Pew Research Center, 2019. “Internet/broadband fact sheet” (12 June), at http://www.pewinternet.org/fact-sheet/internet-broadband/, accessed 17 June 2020.
V. Pickard and D.E. Berman, 2019. After net neutrality: A new deal for the digital age. New Haven, Conn.: Yale University Press.
J. Prassl, 2018. Humans as a service: The promise and perils of work in the gig economy. New York: Oxford University Press.
doi: https://doi.org/10.1093/oso/9780198797012.001.0001, accessed 17 June 2020.
A.J. Pugh, 2009. Longing and belonging: Parents, children, and consumer culture. Berkeley: University of California Press.
M. Ragnedda, 2020. Enhancing digital equity: Connecting the digital underclass. London: Palgrave Macmillan.
doi: https://doi.org/10.1007/978-3-030-49079-9, accessed 17 June 2020.
B.W. Reyns, R. Randa, and B. Henson, 2016. “Preventing crime online: Identifying determinants of online preventive behaviors using structural equation modeling and canonical correlation analysis,” Crime Prevention and Community Safety, volume 18, pp. 38–59.
doi: https://doi.org/10.1057/cpcs.2015.21, accessed 17 June 2020.
R.E. Rice, I. Hagen, and N. Zamanzadeh, 2018. “Media mastery: Paradoxes in college students’ use of computers and mobile phones,” American Behavioral Scientist, volume 62, number 9, pp. 1,229–1,250.
doi: https://doi.org/10.1177/0002764218773408, accessed 17 June 2020.
S.T. Roberts, 2019. Behind the screen: Content moderation in the shadows of social media. New Haven, Conn.: Yale University Press.
L. Robinson, 2020. “The STEM selfing process: Nondigital and digital determinants of aspirational STEM futures,” American Behavioral Scientist (4 June).
doi: https://doi.org/10.1177/0002764220919150, accessed 17 June 2020.
L. Robinson and B.K. Gran, 2018. “No kid is an island: Privacy scarcities and digital inequalities,” American Behavioral Scientist, volume 62, number 10, pp. 1,413–1,430.
doi: https://doi.org/10.1177/0002764218787014, accessed 17 June 2020.
A. Rosenblat, 2018. Uberland: How algorithms are rewriting the rules of work. Berkeley: University of California Press.
V.F. Sacco, 1990. “Gender, fear, and victimization: A preliminary application of power-control theory,” Sociological Spectrum, volume 10, number 4, pp. 485–506.
doi: https://doi.org/10.1080/02732173.1990.9981942, accessed 17 June 2020.
J.B. Schor, 2017. “Does the sharing economy increase inequality within the eighty percent? Findings from a qualitative study of platform providers,” Cambridge Journal of Regions, Economy and Society, volume 10, number 2, pp. 263–279.
doi: https://doi.org/10.1093/cjres/rsw047, accessed 17 June 2020.
D. Skinner, 2020. “Race, racism and identification in the era of technosecurity,” Science as Culture, volume 29, number 1, pp. 77–99.
doi: https://doi.org/10.1080/09505431.2018.1523887, accessed 17 June 2020.
M.J. Stern and B.D. Rookey, 2013. “The politics of new media, space, and race: A socio-spatial analysis of the 2008 presidential election,” New Media & Society, volume 15, number 4, pp. 519–540.
doi: https://doi.org/10.1177/1461444812457658, accessed 17 June 2020.
M.J. Stern and A. Adams, 2010. “Do rural residents really use the Internet to build social capital? An empirical investigation,” American Behavioral Scientist, volume 53, number 9, pp. 1,389–1,422.
doi: https://doi.org/10.1177/0002764210361692, accessed 17 June 2020.
M.J. Stern and D.A. Dillman, 2006. “Community participation, social ties, and use of the Internet,” City & Community, volume 5, number 4, pp. 409–424.
doi: https://doi.org/10.1111/j.1540-6040.2006.00191.x, accessed 17 June 2020.
S. Sum, R.M. Mathews, I. Hughes, and A. Campbell, 2008. “Internet use and loneliness in older adults,” CyberPsychology & Behavior, volume 11, number 2, pp. 208–211.
doi: https://doi.org/10.1089/cpb.2007.0010, accessed 17 June 2020.
A. Sundararajan, 2016. The sharing economy: The end of employment and the rise of crowd-based capitalism. Cambridge, Mass.: MIT Press.
J. Švelch, 2017. “Playing with and against microtransactions: The discourses of microtransactions acceptance and rejection in mainstream video games,” In: C.B. Hart (editor). The evolution and social impact of video game economics. Lanham, Md.: Rowman & Littlefield, pp. 101–120.
E.M. Toreld, K.O. Haugli, and A.L. Svalastog, 2018. “Maintaining normality when serving a prison sentence in the digital society,” Croatian Medical Journal, volume 59, number 6, pp. 335–339.
doi: https://doi.org/10.3325/cmj.2018.59.335, accessed 17 June 2020.
P. Tubaro and A.A. Casilli, 2019. “Micro-work, artificial intelligence and the automotive industry,” Journal of Industrial and Business Economics, volume 46, pp. 333–345.
doi: https://doi.org/10.1007/s40812-019-00121-1, accessed 17 June 2020.
P. Tubaro, A.A. Casilli, and M. Coville, 2020. “The trainer, the verifier, the imitator: Three ways in which human platform workers support artificial intelligence,” Big Data & Society (24 April).
doi: https://doi.org/10.1177/2053951720919776, accessed 17 June 2020.
U.S. Federal Bureau of Investigation. Internet Crime Complaint Center, 2019. “2018 Internet crime report,” at https://pdf.ic3.gov/2018_IC3Report.pdf, accessed 17 June 2020.
U.S. National Telecommunications and Information Administration (NTIA), 1995. “Falling through the Net: A survey of the ‘have nots’ in rural and urban America,” at https://www.ntia.doc.gov/ntiahome/fallingthru.html, accessed 17 June 2020.
B. Ur, J. Bees, S.M. Segreti, L. Bauer, N. Christin, and L.F. Cranor, 2016. “Do users’ perceptions of password security match reality?” CHI ’16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 3,748–3,760.
doi: https://doi.org/10.1145/2858036.2858546, accessed 17 June 2020.
A.J.A.M. van Deursen and J.A.G.M. van Dijk, 2019. “The first-level digital divide shifts from inequalities in physical access to inequalities in material access,” New Media & Society, volume 21, number 2, pp. 354–375.
doi: https://doi.org/10.1177/1461444818797082, accessed 17 June 2020.
A.J.A.M. van Deursen, E. Helsper, and R. Eynon, 2016. “Development and validation of the Internet Skills Scale (ISS),” Information, Communication & Society, volume 19, number 6, pp. 804–823.
doi: https://doi.org/10.1080/1369118X.2015.1078834, accessed 17 June 2020.
N. van Doorn, 2017. “Platform labor: On the gendered and racialized exploitation of low-income service work in the ‘on-demand’ economy,” Information, Communication & Society, volume 20, number 6, pp. 898–914.
doi: https://doi.org/10.1080/1369118X.2017.1294194, accessed 17 June 2020.
L. Waller, 2016. “Disability and ICTs in the Caribbean: Enabling visually impaired Caribbean youth,” Communication and Information Technologies Annual, volume 12, pp. 3–24.
doi: https://doi.org/10.1108/S2050-206020160000012001, accessed 17 June 2020.
M. Walton and N. Pallitt, 2012. “‘Grand Theft South Africa’: Games, literacy and inequality in consumer childhoods,” Language and Education, volume 26, number 4, pp. 347–361.
doi: https://doi.org/10.1080/09500782.2012.691516, accessed 17 June 2020.
M. Yamamoto, M.J. Kushin, and F. Dalisay, 2018. “How informed are messaging app users about politics? A linkage of messaging app use and political knowledge and participation,” Telematics and Informatics, volume 35, number 8, pp. 2,376–2,386.
doi: https://doi.org/10.1016/j.tele.2018.10.008, accessed 17 June 2020.
S. Yusif, J. Soar, and A. Hafeez-Baig, 2016. “Older people, assistive technologies, and the barriers to adoption: A systematic review,” International Journal of Medical Informatics, volume 94, pp. 112–116.
doi: https://doi.org/10.1016/j.ijmedinf.2016.07.004, accessed 17 June 2020.
L. Zhou, 2017. “The World Internet Project: International report,” Eighth edition, at https://www.digitalcenter.org/wp-content/uploads/2018/04/2017-WIP-report.pdf, accessed 17 June 2020.
J. Zittrain, 2019. “The hidden costs of automated thinking,” New Yorker (23 July), at https://www.newyorker.com/tech/annals-of-technology/the-hidden-costs-of-automated-thinking, accessed 17 June 2020.
Received 6 June 2020; accepted 9 June 2020.
This paper is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Digital inequalities 3.0: Emergent inequalities in the information age
by Laura Robinson, Jeremy Schulz, Hopeton S. Dunn, Antonio A. Casilli, Paola Tubaro, Rod Carveth, Wenhong Chen, Julie B. Wiest, Matías Dodel, Michael J. Stern, Christopher Ball, Kuo-Ting Huang, Grant Blank, Massimo Ragnedda, Hiroshi Ono, Bernie Hogan, Gustavo Mesch, Shelia R. Cotten, Susan B. Kretchmer, Timothy M. Hale, Tomasz Drabowicz, Pu Yan, Barry Wellman, Molly-Gloria Harper, Anabel Quan-Haase, and Aneka Khilnani.
First Monday, Volume 25, Number 7 - 6 July 2020