First Monday

Diagnostic advertisements: The phantom disabilities created by social media surveillance by Amy Gaeta

This article examines the use of algorithms to target consumers on social media for health and medical product advertising in order to tell users directly or indirectly what is ‘wrong’ with them. The algorithms behind these ads are creating phantom disabled consumers that they project onto real disabled and nondisabled users via advertisements. I call these ads ‘diagnostic advertisements’ to underscore how the concepts of diagnosis and algorithms share similar aims to cure and control. This essay asks: What might the rise of diagnostic advertisements mean for the social status of disabled people and disabled users’ sense of self? I develop and use the method of ‘crip autotheory’ to argue that the ads are an important window into the larger issue of the intensification of the medicalization of everyday life, a process that entails the normalization of surveillance and the commodification of personal health.


Key terms
Diagnosing users
Medicalizing everyday life
The disabled data double
Discussion & conclusion




Instagram thinks I have borderline personality disorder. Twitter tells me to heal my gut. Facebook calls me bipolar. Sometimes I wonder if they are all right. These platforms reveal what they think about my physical and mental health through the curated advertisements and content they show me. By doing so, these ads reproduce a form of the clinical gaze that threatens to disturb how I understand my body and mind to work and what they need. This essay asks what the advent of auto-generated targeted ads, what I call ‘diagnostic advertisements,’ on social media feeds means for the experience and status of disability on an embodied and social level.

One of the boldest and most exemplary of these advertisements came from a social media platform that connects users based on the similarity of their health profiles. The advertisement practically shouted at me, using a big black bolded typeface to say “you are 94% Alike! Get matched with friends who get your medical reality.” Below this text, differently colored smiley faces were organized in a circle around the word “you.” Next to this graphic were three words: anxiety, fibromyalgia, and insomnia. The suggestion is that these three conditions are what connect the friend circle represented by the smiley faces. A week later I was targeted by the same company with the same ad, except that this time the ad listed different medical conditions, including diabetes, anxiety, and borderline personality disorder.

The ad is as intrusive as it is alluring. It makes bold claims about my social needs, mental ability, behavioral habits, and physical health; assumptions made based on a constructed data model of me. Yet it does accurately capture my physical and mental needs as someone with anxiety and sleep problems. The interjection of these ads within the intimate space of my feed recreated a scene I’ve been in many times: the doctor’s exam room. The doctor glances at my chart, checks my vitals, asks a few generalized questions about my symptoms, and then spits out a diagnosis and treatment, allowing for little non-standardized input from me, the one living in the diagnosed body. Another version of this is the psychiatrist who tries to fit my symptoms into an existing DSM-5 entry. Now, instead of standing cold and exposed in an exam room in front of medical experts, I am exposed differently, comfortable and unsuspecting. I am also presented with a diagnosis I did not consent to receive. Discussions of mental, physical, and social needs and conditions used to be relegated to more private spaces, e.g., the domestic sphere, therapy offices, and medical files. These ads are now forcing the visibility of personal health information by telling users what and how their bodies work. Even more, it seems they are expanding the ‘doctor’s office’ and trapping us inside it. This ad was one of the many targeted health ads that tried to keep me online by having me click a link or enter my e-mail, or by offering a Web-based product (i.e., therapeutic digital journaling), thereby striving to extract even more data from me. It became evident that these ads saw me less as a disabled person and more as a data profile that included disability. In other words, in the eyes of these algorithms and the companies profiting from them, disabled people are merely untapped or under-tapped data for capitalist ends.

Diagnostic advertisements should be understood within the wider collision of technology, capitalism, and everyday medicalization. Together, this collision acts upon disabled people and how they move about the world. Algorithms have been used for decades by doctors and medical scientists in the clinical diagnosis process as well as in predicting health outcomes. Patients are often unaware of how their care may be informed by algorithms since this process is largely made invisible to them. Replacing the professional and embodied judgment of human clinicians as well as the embodied expertise of patients, algorithms used in medicine and public health make decisions by relying on existing data sets, such as data on which demographic profiles are most vulnerable to a disease or data about how the symptoms of a disease vary by age range. As such, we must be wary of how these algorithms may reproduce and even streamline medical bias and discrimination that is reflected in the datasets from which they draw and how they collect such data. Further, with the increasing privatization of healthcare and the influence of pharmaceutical and Big Tech companies, we must also be wary of how diagnoses and the algorithms behind them are motivated by profit and cost efficiency over patient care.

Diagnostic labels are a volatile topic within disability communities, a source of both pride and shame. Previously, for instance, I celebrated my PTSD diagnosis because it affirmed my bodily experience and provided access to certain treatments and medications, but this label has also deeply informed how seriously I am taken by healthcare professionals, such as when they attribute physical symptoms to “just stress.” The stakes can be much higher depending on the stigma attached to a given disability, especially for mental illness, and depending on the demographic of the patient. A diagnosis is more than a label; it is a passport that can provide or bar access to certain treatments, programs, societal expectations, and more. Indeed, using ‘diagnosis’ as a metaphor can flatten the complexity of the process or trivialize it. I do not use diagnosis metaphorically. I use the term to emphasize how social media algorithms can diffuse and spread the clinical gaze. And, by doing so, they can construct categories of meaning that have real-world impacts on users’ health decisions and perspectives, even in spaces as mundane as the social media feed. The diagnoses delivered by social media algorithms have quieter and less immediate impacts on people than ones delivered by medical professionals; where the former help to shape one’s digital profile, virtual networks, and online content, the latter become part of one’s medical history and can determine one’s resources, civil rights, income, and more. This is not to presume an online/‘real-world’ binary. For instance, data collected from social media is already used in healthcare for public health measures (Gupta and Katarya, 2020). We should also account for how this new realm of diagnosis informs or coincides with the traditional medical realm, particularly in how it affects users’ sense of self and understanding of their health.

Instagram uses the same advertising system as Meta (previously known as Facebook), its owner. Recently, Colin Lecher (2021) investigated Meta’s relationships with pharmaceutical companies as advertising partners for the social platform. It is illegal for health agencies, pharmacies, etc. to sell Meta personal data that it could use to target users by health status or condition. Instead, Lecher finds that Meta works with pharmaceutical companies to determine which users may have an ‘interest’ in a health condition, like cancer or lupus. This may be inferred by looking at the type of content that users engage with (e.g., ‘liking’ articles about autism support programs or viewing accounts dedicated to deaf artists). Recently, Meta did pledge to stop letting advertisers use demographic data to target users (Waller and Lecher, 2022). However, this is only part of the process of finding users to target. Meta and its subsidiary Instagram remain quiet about the exact process. Advertisers can choose what types of consumers see their posts on Instagram, which may be their target audience or a broader audience meant to expand product exposure. Advertisers can define their audience by controlling for various demographic and activity factors, including location, gender, age range, user language, race/ethnicity, behaviors, interests, and more. Most of these data signatures are not inputted by the user. Never have I told Instagram that I’m a white disabled middle-class queer or that I enjoy lake trips and vintage motorcycle shirts. These are all data points inferred from my activity, connections, demographics, geographical location, etc., which thus creates a lot of room for error. The actual process of inference, including the criteria that determine certain identities and interests, is unknown. It is necessary to consider how other demographic factors play into what ad someone sees (i.e., their ‘diagnosis’) and why.
How might, for instance, my whiteness affect the probability of me being targeted for autism-related products (e.g., stim toys, sound-blocking headphones), and how might that probability change if Instagram incorrectly inferred my race as Black, seeing as Black autistics have long been underdiagnosed with autism and left out of representations of autism (Mandell, et al., 2009)? Indeed, as the work of Ruha Benjamin (2019) and Safiya Noble (2016) shows, algorithms often reproduce and streamline pre-existing forms of medical bias and discrimination, especially around race, class, and gender.

In addition to amplifying existing discrimination, algorithms are becoming one of the most prominent producers of disabilities. We know that social media can easily ignite feelings of inferiority and ableist rhetoric through content depicting the ‘perfect body and lifestyle’ as defined by white heteronormative Western ideals, among the other ideas and perceptions it can help to condition. Now, surveillance-for-profit and customization algorithms on social media have led to the phenomenon of telling users directly or indirectly what is ‘wrong’ with them through content and advertisement curation that is intended to pertain to users, personalizing their online experience. Such content may include things like targeted adverts for specialty products (e.g., mobility aids, antidepressants) or suggested users or topics (e.g., OCD awareness pages, addiction recovery advocates). Some of these diagnoses may indeed resonate with the target users and even motivate them to consider themselves in a new light that proves to be helpful. Despite these potential benefits, we still must attend to the larger issue at hand: These algorithms are creating phantom disabled consumers that they project onto real disabled and nondisabled users.

Previously (Gaeta, 2019), I theorized how social media algorithms implicitly diagnose users based on their activity, networks, demographics, and other data signatures. These diagnoses are projected onto users through targeted advertisements for medical devices, health treatments, and other health-related products that are used to treat a specific mental or physical health concern or condition. I call these ads ‘diagnostic advertisements’ to underscore how the concepts of diagnosis and algorithms share similar aims to cure and control. A plethora of studies has documented that Internet algorithms can act as devices of control (Beer, 2017; Nakamura, 2009; Galloway, 2006) and that the harvesting of personal data for profit contributes to the exploitation of marginalized groups (Introna and Wood, 2004; Ananny, 2011; Noble, 2016). A handful of these focus on how algorithms discriminate against disabled people or contribute to the marginalization of disability (Keyes, 2020; Brown, 2021; Banner, 2018). These studies from the humanities and social sciences have documented that Internet algorithms can have real-world impacts on users and public understandings of certain populations and that the harvesting of personal data for profit and the normalization of mass surveillance contribute to the oppression and exploitation of already marginalized groups.

Rather than contribute to the robust and growing literature on how algorithms discriminate against disabled people, I consider how the phenomena of diagnostic advertisements may alter a user’s sense of self in ways that disrupt the ability/disability binary and the various identities built around it. I take a humanities-based crip technoscience approach and ask: What might the rise of diagnostic advertisements mean for the social status of disabled people and disabled users’ sense of self? In answering this question, I will focus on 1) The potential of diagnostic ads to shift the contours of disability communities and identities, 2) What the ads suggest about larger trends in medicine and surveillance, and 3) The slipperiness between the user and their data profile.

To explore the shifting sense of self, this article is a work of crip autotheory, focused on the case study of my personal Instagram feed over three months. ‘Crip’ is a framework and political orientation from disability studies and activism. A crip approach is a methodology that seeks out and imagines value systems not built on ability. It enables us to see mechanisms and ideologies that render certain people’s bodies as the antithesis of normal and determine their value based on ability (Kafer, 2013; McRuer, 2018, 2006). Instead of normative notions of ability, a crip approach may center on passivity, slowness, inertness, and other characteristics historically seen as unproductive (Kim, 2012; Chen, 2012; Kafer, 2013). Putting this ethos into my methodology entails making myself vulnerable to these ads and their effects rather than putting up an active defense or imposing distance. This crip ethos is also what distinguishes this method from comparable methods that are equipped to foreground disabled knowledge and ways of knowing, like what Cassandra Hartblay (2020) identifies as “disability anthropology” or intersectional feminist ethnographic methods. Following the spirit of “fugitive ethnography” (Berry, et al., 2017), crip autotheory is grounded in resistance to the longstanding delegitimization and silencing of disabled knowledge and disabled scholars in the academy. It is an act of affirming the rightful place of disabled voices in knowledge production.

Autotheory is a hybrid form of writing that blends theorizing with personal experience and embodied knowledge. Different from autoethnography or standpoint criticism, autotheory treats the subjective as a minority positionality that contains traces of and portals to the universal. Also central to autotheory is that the scene of writing is part of the field of knowledge production — not merely a place of knowledge recording. By cripping autotheory, I, first, foreground the value of my experience as a disabled person and, second, center the value of passivity and partiality. The merit of this approach lies in refusing academic fantasies of ‘complete’ and ‘objective’ research. Following Ann Cvetkovich (2012), this project “operates from the conviction that affective investment can be a starting point for theoretical insight and that theoretical insight does not deaden or flatten affective experience” [1]. By immersing myself within the research field, I aim to dismantle the researcher–object-of-study binary and foreground disabled knowledge.

An autotheoretical approach is also suited to my object of study, algorithms. Algorithms are not stable objects with a consistent logic that one can simply reveal. For instance, even if Instagram were transparent about how exactly its algorithms work, this knowledge would likely be incomplete because the scope of algorithms exceeds the visibility of any single person (Seaver, 2019). Algorithms are sociotechnical assemblages, which include everything from the algorithm itself, its model, its aims, its hardware, and the data it inputs and outputs, to the network of human actors involved in designing, running, and maintaining each part of this assemblage (Gillespie, 2016). While algorithms are opaque due to their vastness and complexity, this is not necessarily a limitation. Taina Bucher (2017) proposes that while algorithms and their use on social media platforms are a “black box,” we do not need to see inside the black box in order to learn about them. Instead, the secretiveness of algorithms can prompt new methodological approaches, ones which may even allow for knowledge that would otherwise go unconsidered if algorithms were fully transparent. To that point, a crip autotheory perspective on diagnostic advertisements can provide a focused glimpse into how audiences process and embody the contradictory nature of the wide-ranging effects of health advertising algorithms. This allows us to embrace the partial, contradictory, incomplete, or even irrational knowledges and affects that these algorithms incite and thus avoid an oversimplified lens on the negative implications of their effects.

Any critical study of algorithms must account for how algorithms express their outputs by intimately molding and shifting to fit users. Ruckenstein and Granroth (2020) have discussed algorithmically generated targeted advertisements as a prime example of the “intimacy of surveillance”: the pervasive way that companies and governments occupy proximity with people across contexts in order to more fully surveil them. In terms of digital surveillance, such intimacy is especially prone to be normalized or unseen due to the fragmented and non-physical nature of the surveillance mechanisms. Algorithms play a role in constructing public culture, people’s sense of self, and their relations to other people and objects. I abide by Ruckenstein and Granroth’s claim that we need more multifaceted accounts of the affective ambivalence and difficulty of living with algorithms to understand the role of algorithms in constructing and reflecting what we know as reality. Building on their call, I contend that targeted advertisements for health products are a particularly potent type of algorithm use, and that we need more perspectives from disabled people to capture the nuances and extent of their effect. Sharing my experience here as a disabled internet user is one such attempt to fulfill this aim.



Key terms

Before proceeding with my findings and analysis, I will establish the relationship between the key terms I use in this essay. Conceptually speaking, diagnosis is similar to algorithms in that each is a step-by-step process of inputting and outputting data in order to reach a desired end. Diagnosis is both a process and a label; an authority figure judges a given set of symptoms against prescribed categories and determines how they correlate to a given pathology. The result is a label, one that is given to a patient and comes to influence how the patient is treated by wider society and how they see themselves. What we may call ‘algorithmic’ is more than the predetermined codes programmed into various computerized systems. Following Tarleton Gillespie (2016), algorithmic refers to “the insertion of procedure into human knowledge and social experience,” which “requires the formalization of social facts into measurable data and the ‘clarification’ (Cheney-Lippold, 2011) of social phenomena into computational models that operationalize both problem and solution” [2]. Each has gained a sense of authority and respect by obscuring the degrees of subjectivity and room for error involved.

The social media algorithms discussed here are not engaging in the traditional medical diagnosis process; it is this that may make them even more dangerous. They operate by another set of identificatory markers — besides symptoms and patient profile — found within user data and its patterns. By bringing the two terms together, I take Nick Seaver’s (2019) stance that algorithms are social constructions, and “the point of declaring something a construction is to argue that it might be constructed differently” [3]. I aim to help show how algorithms can be psychological and bodily interventions beyond their use in clinical diagnosis.

My exploration of social media algorithms and their role in targeted health and medical advertisements is influenced by what Shoshana Zuboff (2019) has detailed as “surveillance capitalism”: consumer surveillance (i.e., the process behind targeted advertisements) aims for behavioral modification by harvesting personal data, and then uses this personal data as raw material for marketing and product research on how to dictate consumer behaviors. Like other forms of automated and data-driven decision-making, surveillance capitalism reproduces the forms of exclusion and discrimination already facing marginalized populations, and therefore also helps to enact normalizing processes that fold these populations into the ability-centered machine that is cisheteropatriarchal racial capitalism (O’Neil, 2016; Eubanks, 2018). Surveillance capitalism poses unique threats to disability in that it can make ‘unproductive bodies’ into ‘productive’ ones by harvesting their data in pursuit of profit. Speaking to my comparison between diagnosis and algorithm, the new regime of surveillance also supports the commodification of health by enabling new ways to influence consumer desires and decision-making.



Diagnosing users

From March 2022 to May 2022, I collected targeted advertisements for health and medical products and services that appeared on my personal Instagram feed, which I accessed via my smartphone in the midwestern United States. Instagram is a social media platform designed for photo and short video sharing. Users can ‘like,’ share, post, re-post, and comment on photos posted by their friends, family, social media influencers, and celebrities. The app doubles as a commerce site, and it is now standard for brands and companies to use Instagram to promote and sell products. During this time, I received a wide range of advertisements and suggested posts. I used Instagram for an average of 1.5–2 hours a day, which included scrolling through my feed, looking at individual accounts, posting content, commenting on posts, and direct messaging. My profile is public, and I mainly use it to follow friends, family members, and meme accounts. I had no special privacy or ‘do not track’ options turned on. I saw over 30 targeted advertisements a day on the platform, varying depending on my level of usage each day. Almost all advertisements and suggested accounts were related to beauty, clothing, TV/movies, social life, and physical and mental health. Apart from some diet food products, the majority of advertised health-related products were ones that would keep me online, including telehealth services, therapy journaling apps, nutrition trackers, etc.

I refer to these algorithmically placed ads as diagnostic advertisements: a strategy of “soft biopolitical” capitalism in which companies use targeted advertising algorithms to identify potential consumers on the Internet in order to sell them health and medical-related products (Cheney-Lippold, 2011). By pairing users with products, the implication is that those users need the product, and that this assumed need is the result of mental or physical impairment or difference from the norm. Algorithms and diagnoses are similar in how they carry out their aims to identify and classify phenomena by using prior knowledge and preset criteria. Further, conceiving of these ads as offering diagnoses can enable us to better realize the medicalization of everyday life and how it occurs through softer mechanisms of biopolitical capitalism — the control and exploitation of life for profit.

John Cheney-Lippold (2011) has written at length about how algorithms relate to users’ identities, explaining, “These computer algorithms have the capacity to infer categories of identity upon users based largely on their Web-surfing habits ... using computer code, statistics, and surveillance to construct categories within populations according to users’ surveilled internet history” [4]. While Instagram and its advertisers likely do not consider non-normative physical and mental differences as identities, it is necessary to acknowledge that for many disabled people, diagnosis is a site of self-transformation and re-definition, for better or worse. Algorithms, however, often inaccurately capture users’ identities, which can be revealed through misplaced targeted ads. The inaccuracy is a site of volatility; it should be celebrated as it shows the failure of surveillance to capture the ‘real you,’ and also revered for its potential to shift just how users think about themselves.

What distinguishes these types of ads from any other health product marketing scheme is the context in which they reach their audience: the customized and individualized social media feed. The feed simultaneously gives off an aura of publicness and privateness. While social media is a public space, my feed is ‘mine,’ in that I partially select what content to share and see, and it will not be exactly replicated on anyone else’s feed. It is a feed in which many people, me included, live out a significant portion of our daily lives. The feed is a site of community, especially for marginalized people who are more likely to struggle to build community in the physical world. Social media platforms have repeatedly proven to help facilitate the formation of community among disabled people and to help give shape to disabled people’s identity (Tollan, 2022). Precisely due to the remote, asynchronous/synchronous, and flexible design of social media, disabled people have found social media to be more accessible for sharing their experiences and connecting with others who understand those experiences. This does, however, mean that Web-based communities like these are vulnerable to having their data exploited and used against them.

By interjecting diagnoses into my feed, a projection of power through the screen, the ads create a sense of intimacy and personalization with me, even if that intimacy is unwanted or unsettling. As a disabled person, I find a strange sense of excitement in seeing ads that make disability public and interject disability into the feeds of millions. I recall how diagnosis and claiming disability were pivotal moments in my self-cultivation. For this exact reason, the ads that made incorrect assumptions about my body began to facilitate a curiosity about how my body worked, and eventually a sense of incoherency about myself. The more ads I saw, the more I wondered: “Could I have borderline personality disorder? What about my data says I’m pre-diabetic? Why does Instagram think I’m suicidal? Am I?” These are more than just diagnoses; disabilities are an integral part of identity formation and, for some, an identity in itself. It is the slipperiness between my self-understanding and the understanding of me staring at me through my phone that is the most generative and dangerous aspect of targeted health advertisements: it can not only push users towards health products that may or may not be suitable for their needs, but also shift the grounds on which they come to define themselves.



Medicalizing everyday life

The heavy use of targeted advertisements and the circulation of the ads themselves speak to the larger issue of the intensification of the medicalization of everyday life, a process that entails the normalization of surveillance and the commodification of personal health. Since these ads are pushing products as much as diagnoses, this phenomenon reduces disability to a product to be sold. I use the phrase ‘the medicalization of everyday life’ to refer to the commonplace practice of subjecting people to the clinical gaze in non-medical spaces. Alex Haagaard (2021) explains how

the clinical gaze conceives of, and accredits, disability. When a group of symptoms or impairments — that is to say, functional differences that interfere with a person’s ability to be normatively productive under industrial capitalism — can be mapped to a specific, tangible and somatic site of pathology, the clinical gaze accredits them as disability, and seeks to correct the physical deviation it has identified.

Diagnostic advertisements reproduce and transform the clinical gaze through the vector of capitalism, but by looking at users through their data rather than at their bodies. By doing so, diagnostic advertisements use data to create images through the lens of the clinical gaze; these images are of users’ bodies, images that they impose onto users through the portal of the social media feed.

As mentioned, the advent of diagnostic advertisements gestures towards many trends at work in the culture around healthcare and surveillance, particularly in how they divorce health from social relations and place the burden of responsibility for care on the individual. Many of the ads included some type of engagement strategy, such as asking users to click a link, follow a page, or take a quiz. This engagement strategy has the effect of prompting users to identify with the content of the ad. Strategies like this also generate more user activity data, data that can be analyzed to infer users’ health status. For this very reason, a recent study by Cosgrove, et al. (2020) called attention to the ethics of companies pushing mental health support apps during the COVID-19 pandemic. By examining the apps themselves, the authors find that they preyed upon vulnerable people (i.e., those experiencing mental illness and other health issues) and roped them into a data harvesting scheme.

To that point, one company that consistently advertised to me was Headway, a company that operates an app designed to facilitate and motivate personal growth. I was shown numerous different advertisements from this same company: one about improving my sex life, another about work-life balance management, and a third about childhood trauma. This last ad about childhood trauma took the form of an infographic. It contained four columns, each listing a different type of inner child wound that results from trauma — including rejection, abandonment, betrayal, and humiliation trauma — and the traits associated with each type. Above each column was a crude cartoon of a human figure depicting each type of trauma. At the bottom of the image was the prompt “take test” and a button to install the app. The ad seems to say, ‘you’re wounded, let’s find out how.’ Targeted ads such as these reduce identification with trauma and disability to a part of a marketing strategy. By doing so, they threaten the integrity and coherency of disability as a category and leave users alone to ‘make decisions’ about their trauma experiences based on predetermined quiz selections and outcomes.

Divorced from the doctor’s office or even the patient portal screen, these Headway diagnoses reappeared each time I opened my Instagram page. They became a pesky reminder of my mental and physical health; a reminder that ‘lived’ in my phone, which I carried everywhere and relied upon for social and work needs. In this manner, targeted health ads are analogous to wearable health technologies. Julie Passanante Elman (2018) argues that wearables are a mechanism of biopolitical self-conditioning, in that “wearables promise greater control over health, safety, and emotional well-being through intimate data gathering. The devices encourage constant but playful self-surveillance; the constant self-enhancement they encourage also offers an endless interface of self-diagnosis” [5]. The difference between these devices and the ads is that the ads do more to normalize nonconsensual and regular interjections and judgments on users’ health from non-medical and seemingly nonhuman sources. Indeed, targeted advertisements appear as disembodied, objective, and indifferent bystanders to users’ health.

As an aspect of influencing consumer activity, diagnostic advertisements can shift a user’s perception of how their body works and therefore what it needs, even when the user disagrees with the content. In April 2022, on five different occasions, I received a targeted ad for an irritable bowel syndrome (IBS) treatment product called Mahana by Mahana Therapeutics. The product is a self-guided cognitive behavioral therapy app customized to address IBS symptoms and triggers. The ad itself called out to its targeted viewers directly, saying “follow us to learn more about your #guthealth and conquer life with IBS.” I do not have IBS. After the third time seeing the ad, I could not stop thinking about my stomach health, and I could not stop thinking about how, even when algorithms failed to accurately capture my health, they still had a powerful influence on my thought and behavior patterns. Unlike a doctor’s advice or written recommendation, I could not escape these ads, these relentless callouts about my stomach health. Even ads that did not resonate with me had a powerful affective weight over my thoughts due to their relentless intrusion on my Instagram feed. Compounding that, I found myself careful not to dismiss any diagnostic ad outright as ‘not me,’ due to the fluidity of disability — an experience we will all face differently if we live long enough — and fears that my own internalized ableism was preventing me from seeing myself in certain ways (McRuer, 2018, 2006). The ads were curious prompts at times, interjections that forced me to consider my physical and mental health through different lenses.



The disabled data double

Diagnostic advertisements run parallel to the rise of personalized medicine, big data, and digital automation in healthcare. All of these technological transformations reduce the role of the patient’s voice and of physical examination. Instead, data comes to represent and even partially determine patient treatment. Previously, I have described this parallel:

Advertisers’ use of algorithms perpetuates the same logic as these medical trends; our data speaks louder than us. Virtual, automated diagnosis privileges the standardized processing of a person’s data, abstracted from their personhood. Critically, neither a doctor nor a salesperson needs to physically interact with or make decisions about their patients or patrons. Decisions about our bodies are made without ever even seeing our bodies (Gaeta, 2019).

Without any actual expert inspection or knowledge of users’ physical or mental state, let alone consent, these ads project disabilities onto them. Diagnostic advertisements operate as a third space between nondisabled and disabled: a site in which disability is produced in the form of phantom diagnoses, cast out into the virtual sphere for the sake of profit. These ads do not merely aim to sell products; they aim to sell images of users’ mental and physical health, and they often frame that image as one of lack and brokenness.

By working through the space of the social media feed at the level of data, social media algorithms diagnose data patterns more than people themselves. What these posts were showing me were reflections of my “data double” (Raley, 2013). “Data double” refers to the informational profile that shadows each surveilled subject. Rather than simply a collection of a person’s data, the data double is a shadow self created by the capture of data, which is then vectored through “processes of disassembling and reassembling. People are broken down into a series of discrete informational flows which are stabilized and captured according to pre-established classificatory criteria. They are then transported to centralized locations to be reassembled and combined in ways that serve institutional agendas” [6].

Importantly, as the Critical Art Ensemble warns us: “The data body is the body by which you are judged in society and the body which dictates your status in the world. What we are witnessing at this point in time is the triumph of representation over being” [7]. The stakes of this are high considering that my data double seemed to be disabled in ways that I am not. In a world centered on ability, subjecting the data double to the clinical gaze implies that even the virtual body is a site of medical micromanagement and commodification. Bringing this messaging and conditioning to users via algorithms further de-humanizes the process of such medicalization and normalizes nonconsensual claims about other people’s health.

Your data double is a manipulated shadow version of you, one that plays a significant role in what content you see, how you engage with that content, and who you interact with online. In an information society, our data double comes to precede us. The content we are shown is not for us. It is for who the platform expects us to be and conditions us to become. That my data double was shown to be disabled in ways that I am not (e.g., the IBS ads), and in a highly consumerist context, raises concerns about the perceived profitability of disability and about what notions of disability are at work behind these ads.

As these ads project disabilities onto users, we must consider how they may force a re-negotiation of the self, and how this re-negotiation will occur differently depending on the user’s subject position. For nondisabled users and those who enjoy a more privileged experience with the medical system (i.e., the cishet male nondisabled norm), being seen as disabled through these ads may be glossed over, or may even prompt self-reflection. For disabled users and others with a more fraught relationship with the medical system, these ads may trigger deeply unsettling emotions and painful memories.

More than just behavioral modification or a challenge to the coherency of the self, these ads can induce immediate internal conflict and harm by creating friction between the real self and the data double. The most common type of health-related targeted ads shown to me, for example, were those for weight loss programs and diet food, from low-calorie rice snack bars and no-sugar chocolate to food diary apps and fitness support programs. The implication is that I must want and/or need to lose weight. What the advertisers or the algorithms behind these ads seemed to not know is that I have dealt with anorexia and other eating disorders for over half my life. These ads haunted me by threatening to set back years of progress and triggering thoughts of restricted eating. Admittedly, unexpected content is an inherent quality of any social media feed — to a degree. What is distinct about targeted health ads is how they confront the user head-on, and often repeatedly, with the clinical gaze. These types of encounters demonstrate the power of the data double to attack the self it claims to represent.

One’s data double arguably holds even greater influence over people who rely on networked technologies and virtual spaces to participate in daily life. Even when users do not interact with these ads, such as by buying the featured product or clicking a link, the digital body — the data double — is already a site of micromanagement, and not engaging is a form of engagement. And, to recall, most of the ads I encountered did attempt to keep me online, to drag more data out of me under the lure that they could help me as a disabled person. The effect of the COVID-19 pandemic cannot be overstated here; the threat of the virus kept people, especially disabled people at high risk, indoors and online (Gupta and Katarya, 2020). This presumably led to the creation of more data from which to be profiled and enabled more exposure to targeted content. Further, the virus and poor public health management awakened a host of new health-related anxieties for disabled and nondisabled people alike. Even as I write this in mid-2022, emerging public health crises, including monkeypox and polio, are coming into public view and, again, threatening to keep us indoors and online. These conditions set the stage for the continuation, and possibly intensification, of mining data from real people to fabricate and project diagnoses onto them, thereby strengthening the power of algorithms in shaping disability identities and self-perceptions of one’s health.



Discussion & conclusion

I have used this article to begin to understand the role of targeted health and medical advertisements in constructing Internet users’ sense of self, and their significance for disability and disabled people. This work has implications for scholars interested in the socio-political power of algorithms and the health industrial complex. An intersectional disability justice framework must be at the heart of these conversations in order to foreground what this means for disability as a socio-political category and experience. Such a framework can also help to reveal how targeted medical and health ads implicate the inverse of disability — able-bodiedness — and lend to the normalization of disability. As Logan Smilges (2023) reminds us, disability is not always marginal: “Sometimes, as the flourishing mental health industrial complex indicates, it’s even pretty damn close to normative” [8]. Here, ‘normative’ and ‘normalized’ should be understood with caution, as they do not signal the wholesale acceptance of disabled people into mainstream society and politics. To point towards the normalization of disability through popular health and medical marketing on social media is to identify how disability and disabled people are being stripped of their radical potential, folded into neoliberal capitalism, and controlled through data surveillance — not accepted nor seen as the norm.

My exploration of diagnostic advertisements can support conversations about the web of influences at work in present-day personal biomedical decision-making and disability identity. That my data double is disabled in ways that I am not poses questions about where our bodies begin and end in the era of mass surveillance and big data. One way to study this question is by using targeted health and medical advertisements to examine the slipperiness between the self and its data double. In tandem, the concept of diagnostic advertisements may benefit disability and/or technology justice activists by illuminating how these ads may compromise users’ decision-making and how platforms use algorithms to shift the norms around personal health information and privacy. With the right set of tools, examining these ads could be a way to work backward to understand how companies identify target audiences and whether they reproduce disability tropes and stereotypes in the process.

As stated earlier, the great potential and danger of these diagnosing algorithms is the self-reflection they prompt in users. Implicating nondisabled social media users in the images and language of disability, for instance, could prompt self-reflection on their ability status and the stability of the ability/disability binary. A parallel to my discussion here is the conversation about the increase in self-diagnosis and how content on social media platforms contributes to this trend (Patole, 2021; Oliver, 2021; Price, 2022). Understated in these debates is the role of the platforms themselves — that is, how their content curation, data surveillance, and targeted ads may contribute to the popularization of self-diagnosis, which is not inherently nor entirely negative. It is, however, necessary to factor the platforms into claims about how self-diagnosis may support the identity formation of disabled people, as well as into concerns about the ‘trendiness’ of certain disabilities on social media (McRuer, 2018; Korducki, 2022).

Examining diagnostic advertisements in relation to the growing culture and market around mental health may improve our understanding of emergent and persistent forms of ableism that affect populations differently depending on their real or inferred subject position. More work is needed to untangle how diagnostic advertisements may reinforce “the racist and cisheterosexist criteria [by which] diagnoses were developed” [9]. We must also contend with how these projections of disability may demedicalize or normalize, however harmfully, certain disabilities over others, especially in regard to how madness and insanity often define Blackness in an antiblack world (Pickens, 2019). Part of this discussion will need to bear in mind that diagnosis can be a significant and positive label for its beholder, despite its problematic history and links to medicalization. In a world haunted by the threat of current and future pandemics, in tandem with the datafication of everything, social media will play an increasingly large role in mediating social relations and self-image for everyone. This is especially true for disabled and other marginalized people who feel safer accessing the world online than off. Simultaneously, it will play a larger role in how we access healthcare and health information. As such, we must have a nuanced set of critical tools to recognize how algorithms are conditioning us to perceive our mental and physical health differently and to assess the stakes of these changes for how we define disability as a category and experience.


About the author

Amy Gaeta is an academic, poet, and disability justice advocate. She earned her Ph.D. in English at the University of Wisconsin-Madison. Her academic work specializes in the psychological aspects of human-technology relations under the surveillance state. Gaeta’s present book manuscript asks what the widespread use and malleability of drone technology mean for the ways human-technology relations are framed and valued. The book develops the concept of ‘drone life’ to consider how shifting human-tech relations challenge the desirability of the able-bodied liberal individual subject. In poetry, she explores mental illness, desire, and the impossibility of being human. Her first chapbook, The Andy Poems, was published by Red Mare Press in 2021, and her next, Prosthetics & Other Organs, is forthcoming from Dancing Girl Press.
E-mail: agaeta [at] wisc [dot] edu



1. Cvetkovich, 2012, p. 10.

2. Gillespie, 2016, p. 26.

3. Seaver, 2019, p. 413.

4. Cheney-Lippold, 2011, p. 164.

5. Elman, 2018, p. 3,761.

6. Haggerty and Ericson, 2006, p. 4.

7. Raley, 2013, p. 121.

8. Smilges, 2023, p. 17.

9. Ibid.



Mike Ananny, 2011. “The curious connection between apps for gay men and sex offenders” Atlantic (14 April), at, accessed 11 May 2022.

Olivia Banner, 2018. “Disability studies, big data and algorithmic culture,” In: Katie Ellis, Rosemarie Garland-Thomson, Mike Kent, and Rachel Robertson (editors). Interdisciplinary approaches to disability. London: Routledge, pp. 45–58.
doi:, accessed 12 December 2022.

David Beer, 2017. “The social power of algorithms,” Information, Communication, & Society, volume 20, number 1, pp. 1–13.
doi:, accessed 12 December 2022.

Ruha Benjamin, 2019. Race after technology: Abolitionist tools for the new Jim Code. Medford, Mass.: Polity Press.

Maya J. Berry, Claudia Chávez Argüelles, Shanya Cordis, Sarah Ihmoud, and Elizabeth Velásquez Estrada, 2017. “Toward a fugitive anthropology: Gender, race, and violence in the field,” Cultural Anthropology, volume 32, number 4, pp. 537–565.
doi:, accessed 12 December 2022.

Lydia X.Z. Brown, 2021. “Tenant screening algorithms enable racial and disability discrimination at scale, and contribute to broader patterns of injustice,” Center for Democracy & Technology (7 June) at, accessed 11 May 2022.

Taina Bucher, 2017. “The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms,” Information, Communication & Society, volume 20, number 1, pp. 30–44.
doi:, accessed 12 December 2022.

Mel Y. Chen, 2012. Animacies: Biopolitics, racial mattering, and queer affect. Durham, N.C.: Duke University Press.
doi:, accessed 12 December 2022.

John Cheney-Lippold, 2011. “A new algorithmic identity: Soft biopolitics and the modulation of control,” Theory, Culture & Society, volume 28, number 6, pp. 164–181.
doi:, accessed 12 December 2022.

Lisa Cosgrove, Justin M. Karter, Zenobia Morrill, and Mallaigh McGinley, 2020. “Psychology and surveillance capitalism: The risk of pushing mental health apps during the COVID-19 pandemic,” Journal of Humanistic Psychology, volume 60, number 5, pp. 611–625.
doi:, accessed 12 December 2022.

Ann Cvetkovich, 2012. Depression: A public feeling. Durham, N.C.: Duke University Press.
doi:, accessed 12 December 2022.

Julie Passanante Elman, 2018. “‘Find your fit’: Wearable technology and the cultural politics of disability,” New Media & Society, volume 20, number 10, pp. 3,760–3,777.
doi:, accessed 12 December 2022.

Virginia Eubanks, 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.

Amy Gaeta, 2019. “Do algorithms know your body better than you?” OneZero (28 October), at, accessed 10 May 2022.

Alexander R. Galloway, 2006. Gaming: Essays on algorithmic culture. Minneapolis: University of Minnesota Press.

Tarleton Gillespie, 2016. “Algorithm,” In: Benjamin Peters (editor). Digital keywords: a vocabulary of information society and culture. Princeton, N.J.: Princeton University Press, pp. 18–30.

Aakansha Gupta and Rahul Katarya, 2020. “Social media based surveillance systems for healthcare using machine learning: A systematic review,” Journal of Biomedical Informatics, volume 108, 103500.
doi:, accessed 12 December 2022.

Alex Haagaard, 2021. “Notes on temporal inaccessibility,” Medium (12 March), at, accessed 4 June 2022.

Kevin D. Haggerty and Richard V. Ericson (editors), 2006. The new politics of surveillance and visibility. Toronto: University of Toronto Press.

Cassandra Hartblay, 2020. “Disability expertise: Claiming disability anthropology,” Current Anthropology, volume 61, number S21, pp. S26–S36.
doi:, accessed 12 December 2022.

Lucas Introna and David Wood, 2004. “Picturing algorithmic surveillance: The politics of facial recognition systems,” Surveillance & Society, volume 2, number 2, pp. 177–198.
doi:, accessed 12 December 2022.

Alison Kafer, 2013. Feminist, queer, crip. Bloomington: Indiana University Press.

Os Keyes, 2020. “Automating autism: Disability, discourse, and artificial intelligence,” Journal of Sociotechnical Critique, volume 1, number 1, at, accessed 12 December 2022.

Eunjung Kim, 2012. “Why do dolls die? The power of passivity and the embodied interplay between disability and sex dolls,” Review of Education, Pedagogy, and Cultural Studies, volume 34, numbers 3–4, pp. 94–106.
doi:, accessed 12 December 2022.

Kelli Mara Korducki, 2022. “TikTok trends or the pandemic? What’s behind the rise in ADHD diagnoses,” Guardian (2 June), at, accessed 3 June 2022.

Colin Lecher, 2021. “How big pharma finds sick users on Facebook,” The Markup (6 May), at, accessed 10 May 2022.

David Mandell, Lisa Wiggins, Laura Arnstein Carpenter, Julie Daniels, Carolyn DiGuiseppi, Maureen S. Durkin, Ellen Giarelli, Michael J. Morrier, Joyce S. Nicholas, Jennifer A. Pinto-Martin, Paul T. Shattuck, Kathleen C. Thomas, Marshalyn Yeargin-Allsopp, and Russell S. Kirby, 2009. “Racial/ethnic disparities in the identification of children with autism spectrum disorders,” American Journal of Public Health, volume 99, number 1, pp. 493–498.
doi:, accessed 12 December 2022.

Robert McRuer, 2018. Crip times: Disability, globalization, and resistance. New York: New York University Press.

Robert McRuer, 2006. Crip theory: Cultural signs of queerness and disability. New York: New York University Press.

Lisa Nakamura, 2009. “The socioalgorithmics of race: Sorting it out in Jihad worlds,” In: Shoshana Magnet and Kelly Gates (editors). The new media of surveillance. London: Routledge, pp. 149–162.

Safiya Umoja Noble, 2018. Algorithms of oppression: How search engines reinforce racism. New York: New York University Press.

Cathy O’Neil, 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown.

Avery Oliver, 2021. “Self-diagnosing: A response to inaccessible healthcare,” Rooted in Rights (8 September), at, accessed 10 May 2022.

Ira Patole, 2021. “Barriers to diagnosis: The function of self-diagnosis in neurodivergent and disabled communities,” Honi Soit (28 October), at, accessed 10 May 2022.

Therí Alyce Pickens, 2019. Black madness :: Mad Blackness. Durham, N.C.: Duke University Press.
doi:, accessed 12 December 2022.

Devon Price, 2021. “Self-diagnosis isn’t ‘valid.’ It’s liberatory,” Medium (9 February), at, accessed 10 May 2022.

Rita Raley, 2013. “Dataveillance and countervailance,” In: Lisa Gitelman (editor). Raw data is an oxymoron. Cambridge, Mass.: MIT Press, pp. 121–146.
doi:, accessed 12 December 2022.

Minna Ruckenstein and Julia Granroth, 2020. “Algorithms, advertising and the intimacy of surveillance,” Journal of Cultural Economy, volume 13, number 1, pp. 12–24.
doi:, accessed 12 December 2022.

Nick Seaver, 2019. “Knowing algorithms,” In: Janet Vertesi and David Ribes (editors). digitalSTS: A field guide for science & technology studies. Princeton, N.J.: Princeton University Press, pp. 412–422, and at, accessed 12 December 2022.

J. Logan Smilges, 2023. Crip negativity. Minneapolis: University of Minnesota Press.

Kristen Tollan, 2022. “Exploring the development of disability identity by young creators on Instagram,” Review of Disability Studies, volume 17, number 4, at, accessed 12 December 2022.

Angie Waller and Colin Lecher, 2022. “Facebook promised to remove ‘sensitive’ ads. Here’s what it left behind,” The Markup (12 May), at, accessed 10 May 2022.

Shoshana Zuboff, 2019. The age of surveillance capitalism: The fight for a human future at the new frontier of power. London: Profile Books.


Editorial history

Received 16 November 2022; accepted 12 December 2022.

Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Diagnostic advertisements: The phantom disabilities created by social media surveillance
by Amy Gaeta.
First Monday, Volume 28, Number 1 - 2 January 2023