First Monday

Framing ‘digital well-being’ as a social good
by Alex Beattie and Michael S. Daubs

This contribution argues that companies such as Apple, Facebook, and Google are increasingly incorporating features that supposedly promote “digital well-being” to forestall regulation of their platforms and services. The inclusion of these features, such as Apple’s Screen Time, frames these commercial platforms as providing a social good by promising to encourage more “intentional” or “mindful” use of social media and mobile devices. As a result, oft-critiqued platforms are increasingly adopting the language of their critics in order to frame themselves as a social good. This strategy mimics that used by radio executives in the United States in the early twentieth century, where the medium developed as a predominantly commercial enterprise. To avoid regulation, it became necessary to perpetuate the perception that commercial broadcasters were also a social good that fulfilled a public service function. Platforms today, we assert, are inadvertently or purposefully adopting a similar tactic to position themselves as leaders in a developing digital wellness market in the hopes of avoiding future governmental regulation.


Contents

Contextualising digital well-being
Lessons from history: Framing U.S. broadcast radio as a social good
History repeated: Positioning digital well-being as a social good




This article is a theoretical and critical examination of the concept of “digital well-being”. As we outline below, digital well-being has recently been conceptualised by media, computer science, and behavioural addictions scholars and relates to aspirational digital media use and debates concerning links between screen time and mental ill health (Vanden Abeele, 2020). We build upon this scholarship by theorising about the growth of digital well-being in the context of the “tech-lash” (Laterza, 2018), the popular critique of digital platforms and networked services including mobile technologies and social networks. Prominent critics such as Tristan Harris, co-founder of the Center for Humane Technology, have critiqued these companies, including Facebook and Snapchat, for their interface design practices and algorithmically-driven features which, he argues, “have ‘downgraded’ humanity by promoting shortened attention spans, outrage-fuelled dialogue, smartphone addiction, vanity, and a polarized electorate.” [1]

Digital technology corporations such as Apple and Google are increasingly adopting the language of their critics and developing “digital well-being” tools that are ostensibly designed to reduce the harmful effects of the technologies they produce. To theorise why Apple and Google have adopted digital well-being features, we make a historical comparison to activities by United States commercial radio in the twentieth century. The inclusion of digital well-being features is a response to the tech-lash that, we suggest, frames these for-profit companies as providing a social good, a strategy that mimics that used by radio executives in the United States in the early twentieth century as radio broadcasting developed into a primarily commercial industry.

To establish this argument, we first provide an overview of existing scholarly conceptualisations of “digital well-being”, followed by an outline of the wider context of the tech-lash. Next, we demonstrate how companies such as Google and Facebook have adopted the language of these critiques. Then, we demonstrate parallels between this industry adoption and tactics used by the commercial radio industry in the U.S. in the early twentieth century to help establish possible reasons for this adoption, followed by some concluding thoughts on what this historical comparison reveals about the modern tech industry.



Contextualising digital well-being

It is first important to note that “digital well-being” is distinct from “digital health”. Digital health generally refers to technologies that deliver healthcare, train health practitioners, or encourage people with chronic illnesses to undertake activities that benefit their health and well-being (Lupton, 2018). Technologies such as telemedicine apps, digital diaries, and other forms of self-tracking (Neff and Nafus, 2016) are viewed as having the potential to enhance health outcomes. At the same time, these digital health tools also have the potential to introduce numerous risks, such as normalising the everyday surveillance of health-related activities (Lupton, 2018). In contrast, digital well-being largely attends to the perceived issue of excessive screen time or Internet connectivity. Here, technology is often believed to play a more caustic role and is positioned as the cause of ill health. For example, social media usage has been linked to exhaustion, mental strain, and reduced productivity, as well as problems regarding concentration, sleep, identity formation, and social relations (Büchi, et al., 2019; Kushlev and Dunn, 2019; Reinecke, et al., 2017).

The majority of definitions of digital well-being appear to draw from behavioural addictions science, expressing concerns about self-control when it comes to “addictive” social media apps and digital devices (Eyal, 2019; Lee, et al., 2019; Roffarello and De Russis, 2019). Other definitions of digital well-being emphasize the role of the wider digital environment on users’ subjectivity. Gui, et al. [2] define digital well-being as “a state where subjective well-being is maintained in an environment characterized by digital communication overabundance.” Burr, et al. (2020) provide a broader definition of digital well-being that is based on aspirational smartphone use, stating: “The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being.”

In an effort to develop an understanding of digital well-being distinct from behavioural addictions scholarship, Vanden Abeele (2020) argues that we should avoid the cause-and-effect thinking that dominates current research on the concept. She defines digital well-being as a dynamic experiential state produced by individual psycho-social factors, device functionalities, and context-specific factors. Vanden Abeele asserts that digital well-being is:

a subjective individual experience of optimal balance between the benefits and drawbacks obtained from mobile connectivity. This experiential state is comprised of affective and cognitive appraisals of the integration of digital connectivity into ordinary life. People achieve digital well-being when experiencing maximal controlled pleasure and functional support, together with minimal loss of control and functional impairment. [3]

Vanden Abeele’s person-, device- and context-specific definition suggests there is more to digital well-being than aspirational digital behaviours or identifying psychologically susceptible users who are vulnerable to excessive digital media use.

Following Vanden Abeele, we argue that insufficient attention has been paid to the industry context of digital well-being, one that demonstrates a growing public animosity toward Silicon Valley technology platforms. This “tech-lash” is in part inspired by the Cambridge Analytica data scandal (Laterza, 2018) and the alleged role of disinformation in influencing the outcomes of the 2016 Brexit referendum in the U.K. and the 2016 U.S. presidential election (Ball, 2017). Facebook’s 2014 emotion contagion experiment, in which Facebook attempted to manipulate the emotions of users in certain markets by tweaking their newsfeed (Tufekci, 2015), demonstrated for some the potential harm the Internet, particularly social media, could have on mental health. Turner and Lefevre (2017) note that “Instagram and social media more broadly have been associated with mental health problems” [4] including depression and eating disorders.

The phrase “digital well-being” has been adopted by a loosely affiliated group of wellness and technology organisations, products, and services located in California, United States, that share concerns regarding the health effects or harms of technology. These organisations include the Digital Wellness Collective (2020), Arianna Huffington’s Thrive Global (2020) and the Center for Humane Technology (2020). Each of these groups offers a suite of tips, products or services to combat the perceived psychological harms of digital technologies.

These groups justify the link between “digital” and “well-being” by combining popular psychology and commentary on technology. Authors such as Turkle (2015, 2013) and Carr (2011) argue that digital technologies rob users of communicative or cognitive capacities. They raise concerns, for example, that today’s smartphone user lacks the emotional intelligence and wayfaring capabilities of generations before them. Perhaps the most damning assessment of digital media comes from Twenge (2017a), who provocatively queries whether smartphones have “destroyed a generation” [5], contributing to a mental health crisis. We contend that digital well-being is an industry-driven response to this sceptical technology discourse and ostensibly seeks to rebalance users’ relationships with their smart devices.

A popular figurehead of the tech-lash is the aforementioned Tristan Harris, a former Google designer and ethicist, who is now head of the advocacy group the Center for Humane Technology. Harris has received significant media attention for “exposing” the persuasiveness of design, algorithmic newsfeeds, and large technology platforms. His arguments that tech platforms are corruptible, addictive, and degrade humanity have resonated with many, including former tech executives and politicians, and are perhaps one reason why the appetite to regulate platforms such as Facebook in the U.S. and Europe is growing.

Our aim is not to settle the debate on whether digital technologies can cause mental distress or psychological harm. It is important to note, however, that there is currently little empirical evidence that one can be “digitally well”. In what is probably the most comprehensive study relating to digital well-being, Orben and Przybylski (2019) contend that any link between screen time and psychological harm is inconclusive or extremely small, measured to be about the same as seemingly harmless activities such as “eating potatoes, having asthma, drinking milk, going to the movies, religion, listening to music, doing homework, cycling, height, wearing glasses, handedness, eating fruit, eating vegetables, getting enough sleep and eating breakfast.” [6] Van Zalk and Ha Lee (2020) come to a similar conclusion in an overview of the behavioural sciences literature on online communication and adolescents. They argue that a number of factors, including social anxiety, introversion, impulsivity and gender, can either strengthen or weaken the correlation between excessive connectivity and compulsive Internet use.

Another telling note is that, while “Internet gaming disorder” was included as a “condition for further study” in the fifth revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) published in 2013 (Petry, et al., 2015), the term “Internet addiction” itself remains unrecognised, denying the disorder validity. Przybylski and Orben (2019) suggest that the notion of digital or Internet addiction risks trivialising more serious causes of harm and creates unjustifiable business claims and products. They caution against the emerging emphasis on digital wellness, warning that “we’ll all be dancing to the steady drumbeat of monetised fear sold by the moral entrepreneurs.” [7] However, perhaps it is not only moral entrepreneurs like Arianna Huffington and the Digital Wellness Collective of whom we should be sceptical. There are other actors in the tech industry that benefit from the conflation of technology and health, including large technology companies that have come under much public scrutiny.

Although the tech-lash and accusations of “downgrading humanity” are potentially threatening to tech companies, the term “digital well-being” appears not to be. In fact, Google has embraced the term. On 25 June 2019, Harris provided a statement at a public U.S. Senate hearing on the dangers of persuasive technology platforms. Sitting right beside Harris at this hearing was Maggie Stanphill, Google’s Director of User Experience and head of its Digital Wellbeing team. In her testimony, Stanphill (2019) claims that Google supports its users in having a healthy relationship with their smart devices and lists several new features the company has introduced across its platforms to help people better understand their tech usage.

Two smartphone tools that Google has developed to support healthy smartphone usage are Family Link (n.d.) and Digital Wellbeing (n.d.). These tools provide data analytics about screen time and enable the user (or parents) to set limits on certain apps, as well as pause notifications that the user identifies as distracting. Apple (2018) has also launched a similar product to Digital Wellbeing called Screen Time and, in January 2018, Facebook founder Mark Zuckerberg (2018) announced that the social network would alter its newsfeed algorithm to ensure that time users spend on the platform is “time well spent”. That phrase is, not coincidentally, the original name of Harris’ Center for Humane Technology.

So why are tech companies adopting the language and tools of their critics when there is a lack of empirical evidence justifying the need for such tools? Answering this question is where revisiting the development of and reactions to a decidedly “old” medium, radio, can be useful.



Lessons from history: Framing U.S. broadcast radio as a social good

Hilmes (1997) details how broadcast radio in the United States was singularly “allowed to develop commercially, without direct subsidy or state involvement.” [8] This development was in stark contrast to public broadcast systems in Europe such as the BBC, or the hybrid public-private model that developed in Canada (Vipond, 1992). Although the establishment of radio as a commercial system was eventually (though incorrectly) seen as a “natural outgrowth of the ‘American Way’” [9], it nonetheless presented the radio industry with several problems.

Much like the Internet and mobile technologies that are the focus of the “tech-lash” described above, radio was subject to critiques and moral panics, such as concerns that rural areas would become susceptible to “corrupting” city influences [10]. Marshall McLuhan even levelled a critique at radio that today is more associated with social media, namely that it “creates insatiable village tastes for gossip, rumour, and personal malice.” [11]

However, one of the primary concerns that critics had of early commercial radio was its emphasis on entertainment. The European public service model attempted to “universally distribute information, facilitate public debate, and help build a common identity in modernizing nation states.” [12] Content thus focused on education and the promotion of so-called “high culture” and the arts to serve the highest ideals of community and nation (Gripsrud, 1998).

In contrast, U.S. commercial radio developed through a series of negotiations between networks, sponsors, and advertising agencies, a process that “had the effect of sending the public service and cultural functions of radio ‘underground’” [13] and that was often a source of criticism. For example, serial narratives were originally an uncontroversial form of radio drama, popular with audiences regardless of gender. However, their “often slow and torturous open-ended plots, their explicit purpose as selling vehicles, and above all their ‘morbid’ and contested subject matter”, which featured heavily in increasingly popular daytime radio dramas for women, i.e., soap operas, became a common concern [14]. References to these narratives as “serialized drool” [15] undercut the cultural value of such programming, and their wide appeal among female audiences led to fears that radio would upset social norms by having a “feminising” (meaning negative) impact on the home and society (Boddy, 1994).

Critics of serial radio narratives also raised unfounded psychological concerns, referring to loyal audience members as “neurotic” or “addicted” [16] to the serials’ open-ended storylines. While a serial narrative is not directly analogous to the never-ending scroll interface of some social media, these critiques are a precursor to the “addictive” trope common in the current tech-lash.

Today, of course, we would rightfully question these critiques. However, at the time, complaints of this sort were a conundrum for the commercial radio industry in the United States despite the lack of any real evidence of harm. Women not only made up a majority of the listening audience, but were also responsible for most household purchasing decisions, which made them a popular and profitable target audience for both advertisers and networks (Hilmes, 1997).

At the same time, critics of commercial radio used these concerns about the negative impacts of such programming to argue for increased content regulation or even nationalisation of the radio industry. Broadcasters and advertisers alike wanted desperately to avoid such outcomes since such moves would, of course, infringe on the profitability of radio broadcasting. It therefore became necessary for radio executives to perpetuate the perception that commercial broadcasters were providing a strong public service function.

To achieve this framing, broadcasters took a novel approach: they denigrated the cultural value of their own daytime serial programming. At the same time, they elevated the cultural significance of other programming, not only home service shows for women, which dispensed household, cleaning, childrearing, cooking, and health information mixed with light musical features (Hilmes, 1997), but also “high culture” evening content such as news, classical music, and theatrical programming.

By classifying daytime programming as lowbrow, radio networks were able to downplay their female audience and, therefore, their profit motive. Furthermore, by contrasting serial programming with “quality” evening programming, networks were able to suggest they were fulfilling a similarly positive cultural contribution to society as their publicly owned European contemporaries.

This framing was ultimately successful; in short, by “acknowledging” that some of its content was not a significant social or cultural contribution, or even “harmful”, and offering “high culture” content as a self-corrective, the radio industry in the U.S. was able to stave off calls for significant content regulation and nationalisation, and instead maintain its profit motive until it could cement itself as a central and seemingly indispensable institution.



History repeated: Positioning digital well-being as a social good

Although not obviously parallel, this history of the U.S. radio industry in the early twentieth century provides potential insight into the emergence of the digital well-being movement. Both instances show how an industry can “correct” itself through the construction of both the problem and the solution. Like digital technologies today, commercial radio was critiqued as addictive and a social harm, despite little evidence thereof. Just as the perceived psychological harms of digital technology do not depend upon empirical evidence, neither did the perceived harms of commercial radio. Instead, both rely on what Durkheim (1982) called a “social fact” (Sutton, 2020). Durkheim defines a social fact as follows:

A social fact is any way of acting, whether fixed or not, capable of exerting over the individual an external constraint; or: which is general over the whole of a given society, whilst having an existence of its own, independent of its individual manifestations. [17]

In other words, social facts are values and norms that influence or shape our beliefs and actions. In an ethnographic study of a digital detox retreat called Camp Grounded in northern California, Sutton (2020) observed attendees justify digital harm on the basis of a shared cultural preference for face-to-face or off-line interaction and concerns about the economic impact of the technology industry on the San Francisco Bay area. For the digital detoxers of Camp Grounded, proof of any actual psychological ill health is unnecessary, as cultural or social norms have been sufficiently violated by technology (Sutton, 2020).

Internet addiction camps in South Korea (Sullivan, 2019) and digital detox public health campaigns in Europe (Royal Society for Public Health, 2018) suggest cultural expectations around appropriate smartphone use exist well beyond California. These examples demonstrate an international and multicultural social expectation of being physically ‘present’ and spending an appropriate amount of time on one’s devices.

Such expectations are visible in the YouTube video “I forgot my phone” by writer and performer Charlene deGuzman (2013). The video, which has been viewed over 50 million times, “evoke[s] a vision of human communication that is subverted by excessive smartphone use.” [18] This video and other similar messages position face-to-face socialisation as superior to mediated communication and lament the “loss” of “genuine” sociability. When a smartphone or digital technology appears to interfere with these values, a health metaphor of digital harm has become an increasingly common way to capture that cultural or social loss.

Moreover, social facts have the potential to generate social capital and become a social good. Like those within the radio industry, Google and Apple have leaned into the digital harm critique to offer digital well-being products as a corrective to those perceived deficiencies, both to mitigate government intervention and to pursue profit. Products like Google’s Digital Wellbeing signal to customers that these companies accept the social fact that too much screen time can be harmful for users, especially children. There is, however, one significant difference between approaches to commercial radio and digital well-being tools: while commercial radio was ultimately framed as a good that benefitted all of society, digital well-being is positioned as a parenting tool or form of self-discipline for the individual user. This difference indicates how multiple contexts, including historical contexts, are important when considering digital well-being, as Vanden Abeele argues. In fact, this comparison reveals how digital well-being is merely the latest iteration of a more general, decades-long “media well-being” discourse. This history applies not just to radio, but also to other traditional media such as television. For example, Spigel (1992) notes how the magazine Ladies’ Home Journal warned readers in 1951 about the “addictive” qualities of television, which could “reverse good habits of hygiene, nutrition, and decorum, causing physical, mental, and social disorders.” [19] This language increasingly puts the responsibility on families or, more specifically, parents.

Crucially, with digital well-being, the duty of care is shifted directly to the individual user. Mulvin (2018) calls this media tactic of premediating screen-related harms “media prophylaxis”, prophylaxis being a form of preventative medical treatment. He observes: “by transferring the duty of care to individual[s] and away from institutions, device manufacturers can tacitly protect themselves from accusations of negligence through the selection and propagation of new default settings.” [20]

With regard to digital well-being tools, all bases are covered for Google and Apple. On one hand, the psychological harm of excessive screen time cannot be proven, forestalling any regulation of interfaces or design strategies that allegedly promote ‘smartphone addiction’ or an unhealthy amount of screen time. On the other hand, potential social and cultural harms are anticipated and shut down. By appearing to be proactive in mitigating the perceived harms of screen time, tech platforms ameliorate the social and cultural effects of technology and safeguard against any future proven psychological harms, ensuring regulation of the tech industry remains self-contained.




Perhaps what is non-threatening about digital well-being is that the phrase does not offer a structural criticism of the technology industry. Digital well-being says nothing about the concentration of power in a few technology conglomerates, the exploitation of cheap labour or scarce minerals in manufacturing phones, or the health of tech workers who perform undesirable labour such as moderating online hate speech. Instead, digital well-being has arguably strengthened the position of companies such as Apple.

It is worth noting that Apple has squeezed out several digital well-being entrepreneurs who depend on access to Apple’s iOS App Store to distribute their products or services. As Apple governs the App Store, Apple alone decides which apps meet its criteria, and the company has shut out some independent digital well-being apps (Nicas, 2019). The emergence of Apple’s Screen Time suggests that digital well-being has not eaten into Apple’s profits via regulation of its products or design but has instead provided a new category of products and services for Apple to extend into.

Therefore, as yet, the term digital well-being has failed to provide a sufficient critique of the technology industry, instead providing an opportunity for large technology companies to position themselves as both the perpetrator of and solution to issues concerning the digital society. If truly meaningful regulation requires independent examination and scrutiny, this double role of perpetrator and critic does not bode well for future, and arguably necessary, governance of the technology industry.


About the authors

Alex Beattie is a researcher in communication and design at Victoria University of Wellington, New Zealand. He recently completed his Ph.D. on the relationship between Silicon Valley, new media technologies and disconnecting from the Internet. His work has featured in Convergence and in Making time for digital lives, published by Rowman & Littlefield in 2020.
Send comments to: alexander [dot] beattie [at] vuw [dot] ac [dot] nz

Michael S. Daubs is a Senior Lecturer in Media Studies at Victoria University of Wellington, New Zealand. His research challenges myths associated with mobile and networked media. He is co-editor (with Vincent Manzerolle) of Mobile and ubiquitous media: Critical and international perspectives, published by Peter Lang in 2017.



Notes

1. Harris in Newton, 2019.

2. Gui, et al., 2017, p. 166.

3. Vanden Abeele, 2020, p. 13.

4. Turner and Lefevre, 2017, p. 278.

5. This quote comes from the title of Twenge’s (2017a) Atlantic article ‘Have smartphones destroyed a generation?’ which was adapted from Twenge (2017b).

6. Orben and Przybylski, 2019, p. 176.

7. Przybylski and Orben, 2019.

8. Hilmes, 1997, p. 7.

9. Ibid.

10. Hilmes, 1997, p. 15.

11. McLuhan in West, 2008, p. 183.

12. Moe, 2008, p. 220.

13. Hilmes, 1997, p. 7.

14. Hilmes, 1997, pp. 154–155.

15. Hilmes, 1997, p. 154.

16. Hilmes, 1997, p. 155.

17. Durkheim, 1982, p. 59.

18. Ribak and Rosenthal, 2015.

19. Spigel, 1992, p. 51.

20. Mulvin, 2018, p. 196.



References

Apple, 2018. “iOS 12 introduces new features to reduce interruptions and manage Screen Time” (5 June), at, accessed 13 August 2020.

James Ball, 2017. Post-truth: How bullshit conquered the world. London: Biteback Publishing.

William Boddy, 1994. “Archaeologies of electronic vision and the gendered spectator,” Screen, volume 35, number 2, pp. 105–122.
doi:, accessed 12 November 2020.

Moritz Büchi, Noemi Festic, and Michael Latzer, 2019. “Digital overuse and subjective well-being in a digitized society,” Social Media + Society (October).
doi:, accessed 12 November 2020.

Christopher Burr, Mariarosaria Taddeo, and Luciano Floridi, 2020. “The ethics of digital well-being: A thematic review,” Science and Engineering Ethics, volume 26, pp. 2,313–2,343.
doi:, accessed 12 November 2020.

Nicholas Carr, 2011. The shallows: What the Internet is doing to our brains. New York: W.W. Norton.

Center for Humane Technology, 2020. “Join the movement for Humane Technology,” at, accessed 13 August 2020.

Charlene deGuzman, 2013. “I forgot my phone,” at, accessed 7 January 2020.

Digital Wellbeing, n.d. “Find a balance with technology that feels right for you,” Google, at, accessed 13 August 2020.

Digital Wellness Collective, 2020. “Enroll in the digital wellness certificate program today!” at, accessed 13 August 2020.

Émile Durkheim, 1982. The rules of sociological method. Edited, with an introduction, by Steven Lukes. Translated by W.D. Halls. New York: Free Press.

Nir Eyal, 2019. Indistractable: How to control your attention and choose your life. London: Bloomsbury.

Family Link, n.d. “Help your family create healthy digital habits,” Google, at, accessed 13 August 2020.

Jostein Gripsrud, 1998. “Television, broadcasting, flow: Key metaphors in TV theory,” In: Christine Geraghty and David Lusted (editors). The television studies book. New York: St. Martin’s Press, pp. 17–32.

Marco Gui, Marco Fasoli, and Roberto Carradore, 2017. “‘Digital well-being’. Developing a new theoretical tool for media literacy research,” Italian Journal of Sociology of Education, volume 9, number 1, pp. 155–173.
doi:, accessed 12 November 2020.

Michele Hilmes, 1997. Radio voices: American broadcasting, 1922–1952. Minneapolis: University of Minnesota Press.

Kostadin Kushlev and Elizabeth W. Dunn, 2019. “Smartphones distract parents from cultivating feelings of connection when spending time with their children,” Journal of Social and Personal Relationships, volume 36, number 6, pp. 1,619–1,639.
doi:, accessed 12 November 2020.

Vito Laterza, 2018. “Cambridge Analytica, independent research and the national interest,” Anthropology Today, volume 34, number 3, pp. 1–2.
doi:, accessed 12 November 2020.

Uichin Lee, Hyunsoo Lee, and Jooyoung Park, 2019. “Positive computing for digital wellbeing,” at, accessed 12 November 2020.

Deborah Lupton, 2018. Digital health: Critical and cross-disciplinary perspectives. London: Routledge.

Hallvard Moe, 2008. “Public service media online? Regulating public broadcasters’ Internet services — A comparative analysis,” Television & New Media, volume 9, number 3, pp. 220–238.
doi:, accessed 12 November 2020.

Dylan Mulvin, 2018. “Media prophylaxis: Night modes and the politics of preventing harm,” Information & Culture, volume 53, number 2, pp. 175–202.
doi:, accessed 12 November 2020.

Gina Neff and Dawn Nafus, 2016. Self-tracking. Cambridge, Mass.: MIT Press.

Casey Newton, 2019. “The leader of the Time Well Spent movement has a new crusade,” The Verge (24 April), at, accessed 5 January 2020.

Jack Nicas, 2019. “Apple cracks down on apps that fight iPhone addiction,” New York Times (27 April), at, accessed 5 January 2020.

Amy Orben and Andrew K. Przybylski, 2019. “The association between adolescent well-being and digital technology use,” Nature Human Behaviour, volume 3, pp. 173–182.
doi:, accessed 5 January 2020.

Nancy M. Petry, Florian Rehbein, Chih-Hung Ko, and Charles P. O’Brien, 2015. “Internet gaming disorder in the DSM-5,” Current Psychiatry Reports, volume 17, article number 72.
doi:, accessed 5 January 2020.

Andrew K. Przybylski and Amy Orben, 2019. “We’re told that too much screen time hurts our kids. Where’s the evidence?” Guardian (7 July), at, accessed 5 January 2020.

Leonard Reinecke, Stefan Aufenanger, Manfred E. Beutel, Michael Dreier, Oliver Quiring, Birgit Stark, Klaus Wölfling, and Kai W. Müller, 2017. “Digital stress over the life span: The effects of communication load and Internet multitasking on perceived stress and psychological health impairments in a German probability sample,” Media Psychology, volume 20, number 1, pp. 90–115.
doi:, accessed 12 November 2020.

Rivka Ribak and Michele Rosenthal, 2015. “Smartphone resistance as media ambivalence,” First Monday, volume 20, number 11, at, accessed 7 January 2020.
doi:, accessed 12 November 2020.

Alberto Monge Roffarello and Luigi De Russis, 2019. “The race towards digital wellbeing: Issues and opportunities,” CHI ’19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, paper number 386.
doi:, accessed 12 November 2020.

Royal Society for Public Health, 2018. “Scroll free September,” at, accessed 5 January 2020.

Lynn Spigel, 1992. Make room for TV: Television and the family ideal in postwar America. Chicago: University of Chicago Press.

Maggie Stanphill, 2019. “Testimony of Maggie Stanphill: Director of User Experience, Google,” U.S. Senate Committee on Commerce, Science, and Transportation: Subcommittee on Communications, Technology, Innovation, and the Internet: Hearing on Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms (25 June), at, accessed 7 January 2020.

Theodora Sutton, 2020. “Digital harm and addiction: An anthropological view,” Anthropology Today, volume 36, number 1, pp. 17–22.
doi:, accessed 12 November 2020.

Michael Sullivan, 2019. “Hooked on the Internet, South Korean teens go into digital detox,” NPR (13 August), at, accessed 5 January 2020.

Thrive Global, 2020. “Improve your people’s mental resilience, health, and productivity. In the new normal and beyond,” at, accessed 13 August 2020.

Zeynep Tufekci, 2015. “Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency,” Colorado Technology Law Journal, volume 13, pp. 203–218, and at, accessed 12 November 2020.

Sherry Turkle, 2015. Reclaiming conversation: The power of talk in a digital age. New York: Penguin Press.

Sherry Turkle, 2013. Alone together: Why we expect more from technology and less from each other. New York: Basic Books.

Pixie G. Turner and Carmen E. Lefevre, 2017. “Instagram use is linked to increased symptoms of orthorexia nervosa,” Eating and Weight Disorders — Studies on Anorexia, Bulimia and Obesity, volume 22, pp. 277–284.
doi:, accessed 7 January 2020.

Jean M. Twenge, 2017a. “Have smartphones destroyed a generation?” Atlantic (September), at, accessed 5 January 2020.

Jean M. Twenge, 2017b. iGen: Why today’s super-connected kids are growing up less rebellious, more tolerant, less happy — and completely unprepared for adulthood — and what that means for the rest of us. New York: Atria Books.

Mariek M.P. Vanden Abeele, 2020. “Digital wellbeing as a dynamic construct,” Communication Theory (17 October).
doi:, accessed 12 November 2020.

Nejra Van Zalk and Seung Ha Lee, 2020. “Links between online communication and compulsive Internet use in adolescence: Is there a reason to worry?” In: Nejra Van Zalk and Claire P. Monks (editors). Online peer engagement in adolescence: Positive and negative aspects of online social interaction. London: Routledge, pp. 85–102.

Mary Vipond, 1992. Listening in: The first decade of Canadian broadcasting, 1922–1932. Montreal: McGill-Queen’s University Press.

Rebecca West, 2008. “McLuhan and the future of literature,” New England Review, volume 29, number 1, pp. 177–191.

Mark Zuckerberg, 2018. “Facebook” (11 January), at, accessed 13 August 2020.


Editorial history

Received 6 January 2020; revised 12 August 2020; accepted 14 September 2020.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Framing ‘digital well-being’ as a social good
by Alex Beattie and Michael S. Daubs.
First Monday, Volume 25, Number 12 - 7 December 2020