Facial recognition, emotion and race in animated social media
First Monday

by Luke Stark



Abstract
Facial recognition systems are increasingly common components of commercial smart phones such as the iPhone X and the Samsung Galaxy S9. These technologies are also increasingly being put to use in consumer-facing social media video-sharing applications, such as Apple’s animoji and memoji, Facebook Messenger’s masks and filters and Samsung’s AR Emoji. These animations serve as technical phenomena translating moments of affective and emotional expression into mediated socially legible forms. Through an analysis of these objects and the broader literature on digital animation, this paper critiques the ways these facial recognition systems classify and categorize racial identities in human faces. The paper considers the potential dangers of both racializing logics as part of these systems of classification, and how data regarding emotional expression gathered through these systems might interact with identity-based forms of classification.

Contents

1. Introduction: The void and the panda
2. Digital animation and seeing through race
3. Race, emotion, and aesthetics in digital animation
4. Skeuomorphism and face tracking
5. Animated labor and the racialized subject
6. Conclusion: Animation and resistance

 


 

1. Introduction: The void and the panda

 

Figure 1: Apple’s launch of animoji in September, 2017 (Source: https://www.youtube.com/watch?v=Hdvqb3PJWYw).

 

The reticulated human face gapes at the cartoon panda bear; the panda gapes back. Beneath these figures, a man in a button-down shirt and comfortable jeans is speaking: “You can’t customize emojis; they only have a limited amount of expressiveness,” he suggests to the unseen crowd in front of him. The human face and panda face shake and close their mouths in unison — they smile and grimace, bare their teeth. “You can just watch this, can’t you,” the man avers, seemingly entranced along with the appreciative audience at the entities mugging behind him (McGregor, 2017). Apple’s “animoji” were released to the world.

Introduced at the launch event for the iPhone X in September 2017, the animoji set comprises animated avatars — 12 in all, including a robot, alien, unicorn, and the famous pile of poop. “Animojis,” claimed the company at the launch, “track more than 50 muscle movements.” The company pitched animoji as a new and more nuanced way to communicate feelings with friends and family within the Apple Messages chat application: the animated facial masks use facial recognition data collected by the iPhone X’s TrueDepth camera to animate their expressiveness.
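The “more than 50 muscle movements” Apple cites correspond to what face-tracking systems generally expose as blend shape coefficients: per-expression scalars between 0 and 1. A minimal sketch of how such coefficients might drive a cartoon avatar follows; the coefficient names echo Apple’s ARKit conventions, but the exaggeration logic is a hypothetical illustration, not Apple’s implementation:

```python
# Sketch: driving a cartoon avatar from face-tracking blend shape
# coefficients. Each coefficient is a float in [0.0, 1.0] describing
# how strongly one facial movement (e.g., an open jaw) is expressed.

def drive_avatar(blend_shapes):
    """Map tracked facial coefficients onto exaggerated avatar controls.

    Cartoon characters amplify motion, so each input is scaled up and
    clamped -- a stand-in for the "exaggeration" animators apply.
    """
    EXAGGERATION = 1.4  # hypothetical gain for cartoon-style characters
    return {name: min(1.0, value * EXAGGERATION)
            for name, value in blend_shapes.items()}

# One example frame of tracked coefficients (names follow ARKit's style):
frame = {"jawOpen": 0.55, "mouthSmileLeft": 0.30, "browInnerUp": 0.10}
controls = drive_avatar(frame)
```

The point of the sketch is the pipeline shape: a schematic numerical reduction of the face sits between the camera and the animated character.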

The term “animoji” is a portmanteau of “animation” and “emoji.” Digital animations, including the emoji character set in its myriad fonts, animated GIFs, static stickers (like Bitmoji avatars or Kim Kardashian’s proprietary set of images), facial masks on Facebook Messenger or Snapchat, message effects (the burst of hearts or confetti in Apple’s Messenger app, for instance) — and even the humble emoticon — are everywhere. In June of 2018, Apple launched Memoji — customizable animated avatars of a user’s own face to complement its animoji characters. Samsung had introduced its AR Emoji — “augmented reality” avatars of a user — several months prior.

Customizable still-image “sticker” avatars in JPEG or GIF format became popular across social media and SMS platforms in 2014 with the launch of Bitmoji, an app created by two Toronto cartoonists enabling users to create customizable stickers of their faces and incorporate them into social media conversations. In 2016, Bitmoji was purchased by Snap, developers of the Snapchat app: Apple’s Memoji and Samsung’s AR Emoji are thus mobile competitors to Bitmoji avatars. Snap itself has also begun to develop Bitmoji as mobile animated characters, and Facebook, not to be left out, has also developed its own animated human avatars (Constine, 2018).

Far surpassing the “cartoons” of Saturday mornings past, these animated representations are ubiquitous on screens both personal and portable (Gershon, 2015; Manning and Gershon, 2013). The anthropologist Teri Silvio defines animation as, “the projection of qualities perceived as human — life, power, agency, will, personality, and so on — outside of the self, and into the sensory environment, through acts of creation, perception, and interaction” [1]. Animated media encompass a wide range of symbolic, expressive, and social practices ranging from “animism to robotics” [2], including hand-drawn cartoons, puppetry, and stop-motion claymation.

Through digital artifacts like emoticons, emoji, animated reaction GIFs, and other cartoonish visualizations, animations suffuse our digitally mediated social interactions: these formats serve to translate moments of affective and emotional expression into socially legible forms through particular, historically contingent models of classification, legibility, and discrimination. Discrimination has a double meaning, and in the context of animoji, and many other digital animations, I use the term deliberately. In this paper, I argue digital artifacts for emotional expression seeking to approximate human identity — like Apple’s animoji characters and more recent memoji character set of human faces — reify and maintain extant discriminatory racial categorizations through their formal and aesthetic features.

In other words, many forms of digital animation have racializing logics built in at the level of the technical mechanism. These racism-enabling technical mechanisms are not necessarily present in these systems due to animus or even awareness on the part of designers, but because of how race as a discriminatory social construct is derived from the schematization and representational caricature of human bodies. Unfortunately, such schematization and caricature is at the heart of most — though crucially, not all — digitally animated formats for human representation.

Facial recognition systems such as the iPhone X’s Face ID system are the means through which these schematizing — and therefore racializing — technical mechanisms do their work. With Face ID, “your face is your password.” What might have been presented solely as a security feature became, with animoji and memoji, a social and emotional affordance, a seemingly playful expansion of the digital animate. Yet as Simone Browne and other scholars have observed, facial recognition technologies and other systems for visually classifying human bodies through data are always means by which race is defined and made visible (Browne, 2015). Here, I extend the analyses of feminist scholars of color such as Browne, affect theorist Sianne Ngai, digital media scholar Lisa Nakamura and others to argue racialization is an inextricable formal problem for any digital animation seeking to represent the human, particularly if that animation makes use of facial recognition technologies. Animoji, memoji and their ilk represent a point at which technical formats, aesthetic representations, cultural prejudices and capitalist exigencies intersect to produce digital objects which are inherently racializing.

Marta Maria Maldonado describes racialization as, “the production, reproduction of and contest over racial meanings and the social structures in which such meanings become embedded” [3]. “Racial meanings,” observes Maldonado, “involve essentializing on the basis of biology or culture.” Animated objects essentialize along both these axes in part because of their status as signifiers of social emotion; our use of animated avatars via digital platforms often involves emotional expression. In documenting the racial history of animation in American media history, Sianne Ngai observes how animatedness as an “exaggerated emotional expressiveness” often serves to “function as a marker of racial or ethnic otherness in general” [4]. Emotional expression is therefore an important element of digital animation’s schematization and perpetuation of race as a discriminatory category.

Flipping Ngai’s formulation, I argue emotional classification schemes are technical strategies — like animoji and memoji — that enable and produce racializing logics within digital animations of the human. By extension, the conditions under which data regarding human emotion is collected via these digital systems perpetuate those same racializing discourses. Despite the resistant or emancipatory potentials of many forms of animation, its technical exigencies predispose it to essentializing, politically regressive and reactionary articulations of difference.

“The proliferation of animation and animated characters,” Teri Silvio suggests, “is not simply an effect or symptom of the intersection of computer technology and structural transformations in global capitalism.” Animation, Silvio argues, instead “provides a productive trope for thinking through this [sociotechnical] intersection” [5]. Elsewhere, others and I have made related arguments regarding the general indexical qualities of these sorts of everyday digital objects — the mobile ringtone, emoji characters, or animated GIFs — to point to broader social, cultural, and politico-economic trends (Eppink, 2014; Gopinath, 2013; Stark and Crawford, 2015).

Seemingly mundane digital artifacts like animoji are intimately tied to the social and political possibilities of digitally mediated life, precisely because they are so ubiquitous and widely used. As Alexander R. Galloway observes, “it is precisely those places in culture that appear politically innocent that are at the end of the day the most politically charged” [6]. Animated formats are thus of significance not only because of their racializing logics: their ubiquity and trajectory suggest animation is a central nexus for the labor of feeling and bearing racialized identity in the digitally mediated economy.

 

++++++++++

2. Digital animation and seeing through race

 

Figure 2: Apple’s memoji characters (Source: https://www.cnet.com/news/memoji-versus-ar-emoji-how-apple-animoji-of-your-face-already-beats-galaxy-s9/).

 

Digital formats have increasingly become a focus for critical attention from scholars in media studies and science and technology studies (STS) (Mackenzie, 2005; Montfort and Bogost, 2009; Sterne, 2003). As Jonathan Sterne observes in MP3: The meaning of a format, “if there were a single imperative of format theory, it would be to focus on the stuff beneath, beyond, and behind the boxes our media come in” [7]. Sterne’s work on the MP3 audio format is part of a broader project to examine the role of the digital file format as “a technique for storage and movement” embedded in wider technical and cultural developments and logics.

Sterne’s insight extends to older media forms alongside digital programs and platforms, but is particularly salient to the workings of computational machines. “We don’t have to subscribe to a single model of historical causality or historical change,” Sterne observes, “in order to appreciate that a change in format may mark a significant cultural shift” [8]. The waxing and waning of technical formats, in other words, is connected both to changes in their broader sociocultural milieu, and is representative of the power these formats possess as potential agents of societal change in their own right.

An emphasis on materiality is salutary for media studies, but runs the risk of obscuring the ways in which technical formats shape aesthetic, culturally contingent representations, such as those of race. Animation as a technical format is a case in point. Digital animation has taken on a life of its own as both a medium and an object of academic inquiry (Sito, 2013). Animated characters differ from other forms of representation of the human in being what Silvio terms “ciphers”: representative depictions through which “specific formal qualities stand for specific character traits.” This process of simplification is often denoted by the term “cartoony” [9]. Animated objects are distinct because of their very “cartoonishness”: such objects perform a “simplification of each medium’s sign system in comparison with the organically integrated sign systems of embodied performance” [10].

The reduction of embodied humans into sets of legible, manipulable signs is a hallmark of technoscientific and administrative techniques: animation, like many human cultural practices, thus resonates with other technologies of modernity (Scott, 1998). Contemporary methods of classification and schematization — of which animation is one — seek to sort and control individuals, both by creating the technical forms through which classification is performed, and by extending the epistemological terms on which interpretation can take place (Bowker and Star, 1999; Ginzburg, 2009; Cheney-Lippold, 2017).

As Ngai observes, there is “a crucial ambivalence” embedded in animation as a form or format, one which serves as a microcosm for the ambivalence of modernity itself: the plasticity and schematic reorganization of animated figures both suggest formal freedom, but also the total exposure of the animated body to be reshaped by the techniques and apparatuses of power (Foucault, 2003; Chen, 2012; Ngai, 2015). Digital animations are therefore ripe for analysis examining the direct relationship between technical affordances, material platforms, and the representation of racialized identities.

Scholars of the digital and critical race studies such as Lisa Nakamura (2002; 2009b) and Safiya Noble (2018) have detailed the ways racial categorizations are built into the logic of digital systems of classification and enumeration. As Nakamura observes, “computing’s cultural history is intimately connected to the history of racial classification and sorting” [11]. Animation is both part of this history and a medium through which such classification and sorting continue to flourish: Alexander R. Galloway suggests, “the contemporary format of animation, both cinematic and gamic, is one of the most important sites today where racial coding is worked out in mass culture” [12]. With animation becoming an increasingly common feature of social media, we need to inquire how racialization is taking place in quotidian digital spaces suffused with everyday animated faces.

 

++++++++++

3. Race, emotion, and aesthetics in digital animation

Cartoon animation has often been a medium for racist caricature in the American context, ranging from Disney’s Steamboat Willie in the 1920s through to the present (Harris, 2003; Stabile and Harrison, 2003). Ngai argues persuasively that “to be ‘animated’ in American culture is to be racialized in some way,” as a figure moving on the default ground of white propriety and power [13], and for animation’s “transformation into a racializing technology in American cultural contexts” [14]. As Stuart Hall (2017) and others (Harris, 2003; Mitchell, 2012) have observed, visual representations of race are integral to questions about racism itself: race, in Mitchell’s words, “is something we see through, like a frame, a window, a screen or a lens, rather than something we look at” [15]. Racism is always mediated.

In the age of digital reproducibility, tracking and classification, every human action and emanation is an innervation open to capture by digital media (Kerr and McGill, 2007). Nakamura (2017; 2009a; 2009b; 2002) has written extensively on the ways digital databases create and reproduce racializing logics of automated sorting and classifying based on country of origin, physiognomic features, and other measures: “race is a social algorithm in addition to and sometimes instead of a physiognomic or phenotypic feature, a form of genotypic media,” she notes (Nakamura, 2009b). Critical to Nakamura’s analysis is her mobilization of Rachel E. Dubrofsky’s observation that “self-sameness” is an essential arbiter of subjective value in contemporary cultures of digital surveillance and biometric capture (Dubrofsky, 2016).

Nakamura extends this insight from its grounding in the spectacle of reality television to identity classifiers writ large. Consistency is valorized by both reality television shows and social media platforms as a means to build a coherent personality “brand,” in keeping with the broader neoliberal shaping of identity precipitated by the digital economy — a system in which certain identities, and certain signifiers of identity, are more valuable, and more mobile, than others (Karppi, et al., 2016).

In a similar vein, Ngai argues, “one of the most basic ways in which affect becomes socially recognizable in the age of mechanical reproducibility [is] as a kind of ‘innervation,’ ‘agitation,’ or ... ‘animatedness’” [16]. Likewise for Harris, racial discourses “ultimately rely on the visual in the sense that the visible body must be used by those in power to represent non-visual realities that differentiate insiders from outsiders” [17]. With the proliferation of digital animations, existing patterns of discrimination are easily reproduced via novel technical forms: I second Ngai’s thesis that animation’s formal emphasis on classification directly shapes the representation and lived experience of racialized subjects.

What formal, technical, or aesthetic elements of digital animation enable the racializing and racist effects detailed by Nakamura, Ngai, and others? Digital animations mediate our feelings through what Ian Bogost terms “procedural rhetoric.” In his (2007) book Persuasive Games, Bogost identifies this form of rhetorical expression as “the practice of using processes persuasively, just as verbal rhetoric is the practice of using oratory persuasively and visual rhetoric is the practice of using images persuasively” [18]. Bogost observes how, as with other forms of rhetorical production, procedural rhetoric is intended to change minds and express ideas — but through the medium of code, with visual or textual means a secondary epiphenomenon.

Procedural rhetoric is thus a mechanism of sorting and intervening. “[Such] arguments are made not through the construction of words or images,” Bogost writes, “but through the authorship of rules of behavior, of dynamic models.” Distinct from the rhetoric of media genres like novels or live performance on stage and screen, Bogost sees this form of rhetoric at work in the new digital media of computers, video games, and digital animation.

Digital games are one arena in which racialization and animation have been ongoing areas of study, and digital animation’s wider prominence as both a set of technical affordances and as a broader cultural form signals this critical game scholarship is broadly applicable (Golumbia, 2009). “Representation is not fully separate from the implicitly hard-core [or material] elements of games,” observe Jennifer Malkowski and TreaAndrea M. Russworm; “it is,” they write, “achieved through and dependent on player and machine actions, on code, and on hardware, not just on surface-level images and sounds” [19]. The more animated characters appear in film, games, and other media, the more insistently racializing tropes seem to exert themselves.

Digital animation perpetuates logics of racial classification and sorting even in the context of fantasy worlds and alien races — as in the case of the alien races of Star Wars Episode I: The Phantom Menace (1999), which were roundly critiqued as deploying stereotypes regarding people of East Asian and Caribbean descent, or the well-known taxonomy of races in World of Warcraft. “The more one seems to extricate oneself from the mire of terrestrial stereotyping,” Alexander Galloway suggests in regard to these and other digitally animated fantasy characters, “the more free and flexible the bigotry machine becomes” [20]. In this context, fantastic or non-illusionistic animations are as problematic for their racializing potential as those representations grounded in, for instance, the physiological reality of the human body and face.

Why are made-up “races” so prone to racialization along “real” lines? Racism is always mediated by the body, and our embodied intersubjective relationships with others are always social and emotional. Paula Ioanide suggests that contemporary racism is grounded in “emotional economies” — “public beliefs, fears, and desires” — that undergird the “construction of political will or complicity” around racism and racial discrimination in ostensibly color-blind societies [21]. I argue it is not just the representation of human faces and bodies, but of the human affects and emotions they express, which are the main vectors through which the procedural rhetorics of digital animation encode racializing schemas — necessitating an analytic focus around race and schematization in the coding of the human face.

The procedurality of visual representations of emotional expression is central to the formal effects of digital animation: a recent textbook for digital animators makes this centrality plain (O’Neill, 2008). “An abstract character with a simplified face has the ability to emote more clearly via the process of amplification via simplification,” the textbook suggests. In the case of more illusionistic animated human characters, the textbook observes digital animators, like roboticists, often struggle with crafting creations realistic enough to bridge the so-called “uncanny valley” (Mori, 2012). As such, the textbook suggests, “animated characters with a high degree of abstraction or a cartoon-like appearance are generally more accepted by the audience” [22] because they avoid the pitfalls of the not-quite “lifelike.”

Digital animators are presented with something of a paradox: aesthetically, highly illusionistic characters are better able to express emotional nuance, but are more challenging to devise in ways which are not perceived as “creepy” — whereas more schematic characters have the virtue of expressive clarity and audience acceptance through their status as obviously animated fictions. Part of the explanation for this paradoxical reaction lies in the psychology of audience response.

Teri Silvio observes animation is a form of human psychological projection, one which, “like any human expression, requires a medium” [23]. Drawing on D. W. Winnicott’s concept of the transitional object and psychic transference, Silvio suggests animated representations of beings and things function for the viewer as, “psychically projected objects of desire.” Such objects are powerful precisely because, while semi-illusionistic, they still provide a schematic, simplified ground onto which the individual can transfer their affective and emotive attention and energy. By this account, highly representative digital animations of human beings may be less effective as objects of projection, precisely because they lack the clarity and simplicity of more schematized animated beings.

Animated objects are compelling because of a viewer’s emotional or affective projection onto a particular set of simplified formal characteristics and procedural rhetorics, and it is in these formal qualities where the structural mechanics of animation as a classificatory — and racializing — format are found. Digital animations like animoji and memoji are replete with these schematic emotional signifiers and aesthetic markers. One such marker is a particular kind of movement, described by O’Neill as a “jiggle”: a “secondary motion [,] a subtle cue that the [animation] is alive and affected by forces in the world it inhabits” [24]. Silvio notes this kind of organic movement is one aesthetic strategy “used to create the ‘illusion of life.’”

In game studies, such movement is described as “juicy”: game studies scholar Jesper Juul defines this “juiciness” as “a type of visceral interface [that] gives excessive amounts of positive feedback in response to the player’s actions” [25]. Described with terms like wiggling, bouncing, sparking, or squirting, “jiggling” or “juicy” movements are deployed by digital designers to make users feel emotionally connected to a digital application and its animated elements.
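The “jiggle” O’Neill describes is conventionally implemented as secondary motion: the animated element trails its target on a damped spring, overshooting and settling rather than snapping into place. A minimal sketch of that technique follows; the spring constants are illustrative choices, not drawn from any particular engine:

```python
# Sketch: "jiggle" as damped-spring secondary motion. With underdamped
# constants the element overshoots its target and oscillates before
# settling, which viewers read as the object being "alive."

def simulate_jiggle(target, steps=200, dt=0.016,
                    stiffness=120.0, damping=8.0):
    """Semi-implicit Euler integration of a damped spring toward target."""
    position, velocity = 0.0, 0.0
    trace = []
    for _ in range(steps):
        accel = stiffness * (target - position) - damping * velocity
        velocity += accel * dt      # update velocity first (stable scheme)
        position += velocity * dt   # then position, using new velocity
        trace.append(position)
    return trace

trace = simulate_jiggle(1.0)
# Underdamped: the motion overshoots past 1.0, then settles near it.
```

Raising the damping constant toward critical damping removes the overshoot, which is exactly the “dampening” O’Neill recommends for illusionistic rather than cartoon-style faces.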

Animoji and memoji, in their bouncing and jerking on screen, are indeed “juicy.” In animated faces meant to be illusionistic or representational, O’Neill suggests such motions should be dampened; in contrast, “for a cartoon-style character, the stretching and pulling of the face while in motion will need to be exaggerated” [26]. As Ngai notes, such representation “takes on special weight ... for [those] whom objectification, exaggerated corporeality or physical pliancy, and the body-made-spectacle remain doubly freighted issues” [27]. Ngai observes plasticity, as a material or aesthetic form, is often part of an easy slippage and constructed equivalence between schematic classifications and representations of physical characteristics, emotional states, and racialized or otherwise bigoted stereotypes.

Like animation, plastic, as Barthes observes, is defined by “the very spectacle of its end products” [28]. This spectacle can be turned to counter-hegemonic ends. Ngai argues explicitly that animated representations of racialized characters can nonetheless provide potential ground for an emancipatory and radical plasticity [29]. Yet the visceral heft of a term like “juicy” — and of the animated effects it describes — stems in part from its connotation of voluptuousness, both of physicality and emotionality. This implicit allusion is a powerful one: “Emerging from the carnal language of (colonial) excess, viscerality registers those systems of meaning that have lodged in the gut,” note the editors of a recent special issue of GLQ on viscerality and race. Such sensations send a mixed signal, “signifying to the incursion of violent intentionality into the rhythms of everyday life” (Holland, et al., 2014). As a baseline for emotional representation, juiciness already has racial overtones.

Above the baseline of “juiciness,” representations of the animated human face are often organized into schematic, granular categories, another example of Nakamura’s “menu-driven identities.” The palettes of hairstyles, skin colors, face shapes, and other physiological features offered by Bitmoji and Memoji are exemplary of these categorizations (much like similar systems in many digital games). Silvio cites another example of how categorization, affect, and animated representations intersect: the case of devoted Japanese manga and anime fans or otaku, who participate in what Azuma Hiroki calls “database consumption” (Azuma, 2009). The practice of database consumption entails classifying the physiological elements of an anime character’s image according to their correspondence with particular stereotypical physical, emotional, social, or personality traits. Otaku do so in order to find or create new animated characters composed out of assembled bundles of preferred social and emotional signifiers.

The process of finding a visualization to represent emotion invariably draws on extant cultural codes, including potentially discriminatory or racist tropes. “What is special about the animation techne of otaku,” Silvio suggests, “is that it simultaneously homogenizes affect ... and proliferates conventionalized affect-signs ... across a range of media,” in turn making “the arbitrariness of the relationship between material qualities and emotional states explicit” [30]. Emotional expression, with its emphasis on the physical body and on caricature, becomes a potential medium of discrimination. “When the human is defined in terms of affect,” claims Silvio, “it can only be projected into the material world via conventionalized signifiers” [31]. In other words, animated media objects necessitate the schematization of human feelings as a result of the very semiotic emptiness which makes them amenable to human emotional projection — and it is this schematization which introduces invariably racialized stereotypes as the frame upon which these emotive signifiers are presented.

 

++++++++++

4. Skeuomorphism and face tracking

Despite the novelty of Apple’s animoji and memoji as moving animations, they share a kinship with other animated digital objects used for social and emotional expression, such as emoticons, emoji characters, moving reaction GIFs, and image “stickers” (Stark and Crawford, 2015). Indeed, as “animated emoji,” animoji and memoji are exemplary of the ways in which the technical differentiations and limits of various animated expressive formats are skeuomorphically overcome by the increased flattening and interoperability of what Friedrich Kittler terms “optical media.” Skeuomorphism is the practice of designing items to resemble some predecessor analog, even if such aesthetic mimesis is no longer functionally necessary.

Skeuomorphism is yet another vector whereby digital animation reiterates racialized cultural codes. Stephanie Boluk and Patrick LeMieux point to the history of the animated white-gloved manicule cursor familiar across many digital applications as one such skeuomorph with a racist heritage: “the white, right-hand glove,” they note, “not only intentionally signify Mickey Mouse but also, unintentionally, the much longer racial history of cartoons [and] anthropomorphic animals that whitewashed their relation to racist caricatures inspired by minstrelsy and vaudeville” [32]. The “Master Hand” — the white-gloved boss character of the Super Smash Bros. series — demonstrates the reach of racializing cultural tropes into the fabric of games not only through procedurality as a formal logic, but through the racial antecedents of social or cultural tropes (procedures writ large) which are transferred, deliberately or not, into digital artifacts.

Animoji and memoji are skeuomorphic: they make use of animations drawn from Apple’s emoji font, but are not themselves emoji characters. Yet by building off the emoji character set, Apple’s animoji are also implicated within another “menu-driven” controversy around racial representation. The Unicode Consortium added a mechanism to represent race via emoji in 2015 as part of the release of Unicode 8.0, after being lobbied by a diverse coalition of activists to improve the diversity of the character set. Yet the schema for skin tone modifiers (five different shades ranging from dark brown to pink) is based on the Fitzpatrick scale, a dermatological measure meant to describe skin pigmentation, not race or ethnicity (Fitzpatrick, 1988).

The Fitzpatrick scale’s default assumptions — such as Type I for the “whitest” skin, Type VI for the “blackest” — are exemplary of the emoji character set’s indebtedness to established hierarchies of racialized value. Commentators flagged problems with the new modifiers almost immediately upon their release: simply changing the pigmentation of emoji neither adequately represented racial difference, nor abrogated the fact that some emoji were already physiologically racialized (Tutt, 2015).
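At the level of codepoints, the modifier mechanism is a literal taxonomy: Unicode 8.0 defines five modifier characters (U+1F3FB through U+1F3FF), named after Fitzpatrick types, which are appended directly after a human-form emoji and rendered as a single tinted glyph. A sketch of that mapping:

```python
# Sketch: Unicode emoji skin tone modifiers (added in Unicode 8.0).
# The five modifier codepoints are named for Fitzpatrick types;
# types 1 and 2 share a single "light" modifier.

FITZPATRICK_MODIFIERS = {
    1: "\U0001F3FB",  # light (types 1-2)
    2: "\U0001F3FB",
    3: "\U0001F3FC",  # medium-light
    4: "\U0001F3FD",  # medium
    5: "\U0001F3FE",  # medium-dark
    6: "\U0001F3FF",  # dark
}

def skin_toned(base_emoji, fitzpatrick_type):
    """Append a skin tone modifier to a human-form base emoji."""
    return base_emoji + FITZPATRICK_MODIFIERS[fitzpatrick_type]

waving = skin_toned("\U0001F44B", 4)  # waving hand, medium skin tone
```

The dictionary makes the critique above concrete: the schema admits exactly five shades, collapses two Fitzpatrick types into one glyph, and leaves the unmodified base character as the default.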

While broadly salutary, the implementation of emoji skin tone modifiers both failed to efface earlier racializing logics embedded in the character set, and overlaid a whole new set of categorizations and classing behaviors on top of them. Moreover, putting the onus on emoji users to make a decision about modifying each character, as some commenters have suggested, is not uncomplicated. This technical affordance places the onus on people of color to make a proactive decision to change the look of each character — by simply using the default yellow emoji in the context of the new modifiers, users underscore the status quo whereby the assumed baseline of whiteness is always the first and easiest option (A. McGill, 2016).

Silvio writes that, “the emoticon is an icon of generic affect, rather than individual identity.” Such generic remediation and recombination of emoji characters, as a “menu-driven identity” (Nakamura, 2002), are made more generic, not less, with the addition of skin tone modifiers. Without systematically addressing the racializing skeuomorphism already extant within the character set, the formal bits and pieces available to permit “individuals to narrate their own emotional lives through the medium of animation” [33] become ways to reproduce the dominant affective, emotional, and social discourses they depict (Stark and Crawford, 2015). These effects are particularly pronounced in the case of animating a mobile human face.

Illusionistic or “lifelike” digital animation seeks to produce realism through a subtler logic of projecting emotional signifiers, grounded in a more granular but no less schematic system of emotive expression. O’Neill’s (2008) textbook on digital animation is representative of these techniques. In a chapter titled “Face setup,” the textbook gives animators an introductory lesson in facial anatomy, and a primer on the “psychology behind facial expressions.” In doing so, the textbook hews to the Paul Ekman physiological model of Basic Emotional Expression. Ekman, a Palo Alto psychologist, made a series of ethnographic studies in the 1970s through which he concluded certain facial expressions were universally correlated with basic human emotions, and that these expressions, including momentary or fleeting “micro expressions,” were identifiable by a trained specialist.

Ekman quantified his theory of facial expressions within the Facial Action Coding System (FACS) (Ekman and Rosenberg, 2005), a taxonomy to code the ways in which an individual’s facial muscles shifted position when they expressed various feelings. Ekman has commercialized his work on FACS, offering his services to marketers, law enforcement agencies, and other organizations eager to pinpoint the discrete emotions of particular individuals. As the textbook notes, FACS is also one of the most influential components of contemporary facial recognition software, used to “lay a foundation for the expressions that will be needed” for particular animated characters [34].

While noting, “emotion and emotional responses are a highly debated and controversial subject in fields ranging from anthropology to psychology and philosophy,” O’Neill’s textbook is representative of the genre in failing to dig into the many extant controversies around the physiological signaling of emotion and affect. Ekman’s taxonomy of basic, universal emotional expressions — and the assumptions underpinning the Facial Action Coding System reliably connecting external physiological behavior with internal emotional states — have been contested since their inception (Russell, 1994; Leys, 2017, 2011). Some scholars have disagreed with Ekman’s identification of only five or six basic emotions (Russell, 1994); others have cast doubt on the universality of basic emotions, suggesting human feelings stem from a more complicated combination of common embodied affects along with culturally specific cues and interactions (Barrett, 2006; Gendron and Barrett, 2009).

Just as troublingly, physiological facial coding systems like FACS are mechanisms for visually categorizing, and reifying, race. These systems enact a process Simone Browne terms “digital epidermalization,” or “the imposition of race on the body” through the classification and schematization of human facial features [35]. As Browne rightly notes, “these machines are designed and operated by real people to sort real people.” In other words, even if such systems are able to map each and every human face perfectly, the technical capacities of physiological classification will still be subject to the vagaries of embedded cultural histories and contemporary forms of discrimination and of racial ordering.

Like the associations made by otaku between emotions and animated facial features, facial recognition systems like FACS perform the same formal association between emotional expressions and schematically mapped parts of the face. The “conventionalized affect-signs” of the FACS system may be more numerous and subtler than those of Japanese anime, but they predispose faces animated using such systems to similar sorting. This categorization of emotional expressions always risks generating racializing effects reinforced by the formal qualities of the media involved. “Emotional qualities seem especially prone to sliding into corporeal qualities where the African-American subject is concerned,” suggests Ngai, “reinforcing the notion of race as a truth located, quite naturally, in the always obvious, highly visible body” [36]. As Silvio’s example of otaku suggests, this slippage is not limited to race: it gives rise to a hydra-like profusion of categorizations grounded in stereotypical signs of race, gender, ability, and intersecting combinations of these categories.

 

++++++++++

5. Animated labor and the racialized subject

Memoji, Bitmoji, and AR Emoji are all mechanisms through which their parent companies collect information about the human subject through the facial recognition capabilities of contemporary smartphones (Gates, 2011). On the one hand, the mobilization of “cute” animated characters and quasi-lifelike avatars both normalizes the use of facial recognition technologies, and helps these companies improve their technical capabilities. The many animated “mask” applications available for the iPhone (and now components of Snapchat and Facebook’s Instagram app) serve a similar function. These interfaces are privacy “loss leaders,” drawing in smartphone users with a seemingly innocuous use case in order to cement the widespread use of a highly invasive form of surveillance, and enhance the capabilities of the technology itself.

As procedural mechanisms for the transmission, and translation, of affective rhetoric, animation performs the expressive work of affective labor, just as emoji, stickers, and other similar digital objects do (Stark and Crawford, 2015). As Michael Hardt argues, affective labor entails “contact [which] can be either actual or virtual ... [in] the production of affects in the entertainment industry, for example, the human contact, the presence of others, is principally virtual, but not for that reason any less real” [37].

Users of animoji, memoji, and other similar formats are also already performing affective labor (Hardt, 1999) for Apple and other digital platforms, just as they do when using other formats of emotional expression like Facebook’s Reactions icons (Stark, 2018). This affective labor is invariably racialized. As Sianne Ngai observes, “animatedness not only returns us to the connection between the emotive and the mechanistic [,] but also commingles antithetical notions of physical agency” [38]. Animation, for Ngai, signals both the legibility and regularization of the body, and the exaggerated performance of emotional expression — the two faces of the animate represented perfectly by memoji and animoji. Ngai notes this ambivalence “takes on special weight in the case of racialized subjects” — and in my view, is also a mechanism through which digital animation produces racializing logics in the first place.

The novelty in animoji, memoji, Bitmoji, and similar animated formats representing the human face is their combination of both the vitalism and exaggeration of schematic animations, and the more granular techniques, native to FACS, for capturing facial expressivity. Formats like animoji and memoji thus represent a concentration of animation’s racializing logics, through their formal incorporation of a system of broad schematic classifying tropes drawn from the emoji character set with the more granular metrics of physiological tracking enabled by facial identification technology. The combination of schematic and granular tropes of digital animation — the former sitting like a second skin atop the latter — makes the racializing potential of formats like animoji and memoji acute.

This combination of animated formats produces a potentially potent form of Browne’s “digital epidermalization”: these married techniques impose race on users within a limited palette for the classification and schematization of human facial features (Browne, 2015), and introduce a variety of classifying logics that potentially reify existing racial categories. Browne observes how “particular biometric systems privileg[e] whiteness, or lightness, in the ways in which certain bodies are measured for enrollment” [39]. Recent work by Joy Buolamwini and Timnit Gebru (2018) documenting the bias in facial recognition training sets, and the difficulty many commercial facial recognition systems have in recognizing African-American women, illustrates one aspect of digital epidermalization’s privileging of whiteness.

Masking apps are another particularly egregious, and obvious, avenue for racial discrimination via digital epidermalization. In 2016, Snapchat came under fire for a “Bob Marley” mask filter described by many commentators as “digital blackface” (Kleeman, 2016). In 2017, the FaceApp app deployed a “Hot” filter that lightened a photograph’s skin tone and applied smoothing to make a subject’s facial features appear “white.” Apparently immune to the widespread critiques of racism, the company later released a set of filters explicitly labeled as racial: “Asian, Black, Caucasian and Indian” (Thomas, 2017). Memoji and animoji illustrate a flip side to Buolamwini and Gebru’s critique: they help facial recognition technologies improve their ability to recognize all human faces. As Alondra Nelson, a sociologist of race, science and technology at Columbia University and President of the Social Science Research Council, recently noted on Twitter, “Algorithmic accountability [in facial recognition] is tremendously important. Full stop. But I confess that I struggle to understand why we want to make black communities more cognizant in facial recognition systems that are disproportionately used for surveillance.” Digital animations like memoji and animoji are capable of rendering this logic of racializing privilege even more pervasive and perverse, precisely because they enlist multiple different technologies of classification within the structure of the mechanisms through which our digitally mediated social and emotional lives take place.

In some cases, users can resist the racial classifying inherent in many digital animations, and turn these formats of emotional expression into resistance. Yet the more race-based classification is distilled as a technical feature, the more animation’s capacity to, in Ngai’s words, “generate unanticipated social meanings and effects” subverting racialization comes into question [40]. My fear is that these racializing logics of classification are increasingly over-determined and redundant within formats like memoji and AR emoji — users may subvert one racializing mechanism (say, juiciness), but another formal or aesthetic element (say, skin tone palette) is there to do the work of formalizing difference and making that difference legible and categorizable to systems of oppression.

 

++++++++++

6. Conclusion: Animation and resistance

In uniting animation with human representation, animoji and memoji share some similarities with the Graphics Interchange Format (GIF), or animated GIF (Eppink, 2014; Miltner and Highfield, 2017; Stark and Crawford, 2015). GIFs, too, are racialized: in a brilliant essay, Lauren Michele Jackson observes animated reaction GIFs disproportionately represent Black people, and suggests the mobilization of these expressive objects by white users in particular constitutes a form of “digital blackface” (Jackson, 2017). “Black people and black images are thus relied upon to perform a huge amount of emotional labor online on behalf of nonblack users,” Jackson notes. “We are your sass, your nonchalance, your fury, your delight, your annoyance, your happy dance, your diva, your shade, your ‘yaas’ moments. The weight of reaction GIFing, period, rests on our shoulders.”

Compounding this problem are the ways in which animated reaction GIFs are also codified and classified by businesses like GIF archive and purveyor Giphy, a site exemplary of how animated GIFs are now being monetized through more modulated control of the distribution channels which support the format’s circulation online. By categorizing reaction GIFs and suggesting them through its search function and widely-used API, Giphy is also profiting from animation’s racializing logics. The kind of classificatory mechanics a site like Giphy deploys are seemingly innocuous, but by training and habituating users to consider digital animations as everyday media of social and emotional exchange, Apple and other platforms like it risk codifying and reifying animation as a primary rhetoric of emotional self-expression.

Animated GIFs and animoji/memoji share many formal and aesthetic elements as personalizable forms of lively movement used for social purposes online (indeed, it is no surprise there are now many GIF images of animoji characters, some engaged in “animoji karaoke”). And as Jackson points out, users themselves are often guilty of participating in straightforward “digital blackface.” Yet there are formal qualities of the animated GIF that give the format the potential to be re-radicalized.

A format like the GIF was already widely accessible prior to the commercialization of the World Wide Web and its associated technologies, and so is resistant at a technical level to commodification and capture. Anyone can (and does) create an animated GIF, on any subject. In contrast, Apple and other platforms have technical control over the facial data they collect from users. With animoji and memoji, an animated character ventriloquizes a phone’s user directly, drawing on their attention, physiognomy, and sociality to train Apple’s facial recognition system and collect training data regarding the user’s emotional expressions. At a technical level, animoji and memoji are constrained by racializing classificatory logics in ways an animated GIF is not.

Digital animations — from the hearts and confetti which now appear automatically in the iMessage program on all iPhones to the avatars of games and virtual worlds like The Sims and Second Life — are pervasive across digital worlds. Yet by extension, the modes of emotional and affective labor these formats mediate are suffused with racializing logics, both generated and enabled by these technical formats. What makes such labor suffused with racializing difference is entangled with the history and technical affordances of digital animation as an aesthetic form, digital objects simultaneously interoperable and classifiable, open-ended, and recursive.

There is a resistant or emancipatory potential in animation: it comes in part from what Sianne Ngai terms animation’s “reanimation,” and thus subversion, of stereotypical representations, “images that are perversely both dead and alive” [41]. As facial recognition technologies become widespread, it is vital to turn to explicit philosophies of data justice (Hoffmann, 2018) and design justice (Costanza-Chock, 2018) in the world of digital animation as elsewhere. In accord with other theorists of digital discontinuity, heterodoxy, obfuscation, and queerness (Brunton and Nissenbaum, 2011; Cohen, 2012; Gaboury, 2013; Light, 2011), we need further study of these digital forms of emotional expression, in order to produce “new ways of understanding the technologization of the racialized body” [42]. The racializing tendencies of the digital animate can and must be subverted — but that subversion is unlikely to come from an animated panda calling out in the void. End of article

 

About the author

Luke Stark is a Postdoctoral Researcher at Microsoft Research Montréal, and an Affiliate at the Berkman Klein Center for Internet & Society at Harvard University. He holds a Ph.D. from the Department of Media, Culture, and Communication at New York University, and an Honours B.A. and M.A. from the University of Toronto; he has been published in venues including Social Studies of Science, Social Media + Society, Information Society, and Media, Culture and Society.
E-mail: luke [dot] stark [at] nyu [dot] edu

 

Acknowledgements

I would like to thank a number of people for their help and advice in the preparation of this article, including members of the Labor Tech reading group, and especially Sareeta Amrute, Winifred Poster, David Golumbia, and Rolien Hong; Crystal Abidin and Joel Gn for their leadership on this special issue; and Anna McCarthy, Illana Gershon, Christine Mitchell, Jenny Korn, and several anonymous reviewers.

 

Notes

1. Silvio, 2010, p. 422.

2. Silvio, 2010, p. 427.

3. Maldonado, 2009, p. 1,024.

4. Ngai, 2004, p. 94.

5. Silvio, 2010, p. 422.

6. Galloway, 2006, p. 95.

7. Sterne, 2012, p. 11.

8. Ibid.

9. Silvio, 2010, p. 430.

10. Ibid.

11. Nakamura, 2009b, p. 158.

12. Galloway, 2011, p. 119.

13. Ngai, 2004, p. 95.

14. Ngai, 2004, p. 92.

15. Mitchell, 2012, p. xii.

16. Ngai, 2004, p. 91.

17. Harris, 2003, p. 2.

18. Bogost, 2007, p. 28.

19. Malkowski and Russworm, 2017, p. 17.

20. Galloway, 2011, p. 119.

21. Ioanide, 2015, p. 4.

22. O’Neill, 2008, p. 10.

23. Silvio, 2010, p. 427.

24. O’Neill, 2008, p. 179.

25. Juul, 2010, p. 54.

26. O’Neill, 2008, p. 180.

27. Ngai, 2004, p. 91.

28. Barthes, 1972, p. 97.

29. Through an extended reading of the stop-motion animated series The PJs and other texts, Ngai locates moments in which both technical failures and deliberate creative choices breach audience expectations and create an “unsuspected liveness” which subverts racializing aesthetic tropes. Ngai notes the stop-motion technique used to animate The PJs created a “slippery-mouth effect,” whereby inexact substitution of the many types of mouth shape needed to represent speech for each African-American character created the impression that a character’s mouth “would fly off the body completely,” drawing attention both to animation’s artifice and to the potential racist stereotyping being ascribed to the show’s characters by some viewers (Ngai, 2004, p. 117). In this case, it is an inadvertent technical failure in the stop-motion format that provides a point of entry for critique and the identification of radical movement.

30. Silvio, 2010, p. 430.

31. Ibid.

32. Boluk and LeMieux, 2017, p. 53.

33. Silvio, 2010, p. 433.

34. O’Neill, 2008, p. 167.

35. Browne, 2015, p. 113.

36. Ngai, 2004, p. 93.

37. Hardt, 1999, p. 96.

38. Ngai, 2004, p. 100.

39. Browne, 2015, p. 110.

40. Ngai, 2004, p. 125.

41. Ibid.

42. Ngai, 2004, p. 125.

 

References

H. Azuma, 2009. Otaku: Japan’s database animals. Translated by J. E. Abel and S. Kono. Minneapolis: University of Minnesota Press.

L. F. Barrett, 2006. “Are emotions natural kinds?” Perspectives on Psychological Science, volume 1, number 1, pp. 28–58.
doi: https://doi.org/10.1111/j.1745-6916.2006.00003.x, accessed 22 August 2018.

R. Barthes, 1972. Mythologies. Selected and translated by A. Lavers. New York: Hill and Wang.

I. Bogost, 2007. Persuasive games: The expressive power of videogames. Cambridge, Mass.: MIT Press.

S. Boluk and P. LeMieux, 2017. “About, within, around, without: A survey of six metagames.” In: S. Boluk and P. LeMieux. Metagaming: Playing, competing, spectating, cheating, trading, making, and breaking videogames. Minneapolis: University of Minnesota Press, pp. 1–77.

G. C. Bowker and S. L. Star, 1999. Sorting things out: Classification and its consequences. Cambridge, Mass.: MIT Press.

S. Browne, 2015. Dark matters: On the surveillance of blackness. Durham N.C.: Duke University Press.

F. Brunton and H. Nissenbaum, 2011. “Vernacular resistance to data collection and analysis: A political theory of obfuscation,” First Monday, volume 16, number 5, at http://firstmonday.org/article/view/3493/2955, accessed 22 August 2018.
doi: https://doi.org/10.5210/fm.v16i5.3493, accessed 22 August 2018.

J. Buolamwini and T. Gebru, 2018. “Gender shades: Intersectional accuracy disparities in commercial gender classification,” Proceedings of Machine Learning Research, volume 81, pp. 77–91, at http://proceedings.mlr.press/v81/buolamwini18a.html, accessed 22 August 2018.

M. Y. Chen, 2012. Animacies: Biopolitics, racial mattering, and queer affect. Durham, N. C.: Duke University Press.

J. Cheney-Lippold, 2017. We are data: Algorithms and the making of our digital selves. New York: New York University Press.

J. E. Cohen, 2012. Configuring the networked self: Law, code, and the play of everyday practice. New Haven, Conn.: Yale University Press.

S. Costanza-Chock, 2018. “Design justice: Towards an intersectional feminist framework for design theory and practice,” Proceedings of the Design Research Society 2018 (3 June), at https://ssrn.com/abstract=3189696, accessed 22 August 2018.

R. E. Dubrofsky, 2016. “Therapeutics of the self,” Television & New Media, volume 8, number 4, pp. 263–284.
doi: http://doi.org/10.1177/1527476407307578, accessed 22 August 2018.

P. Ekman and E. L. Rosenberg (editors), 2005. What the face reveals: Basic and applied studies of spontaneous expression using the facial action coding system (FACS). Second edition. New York: Oxford University Press.

J. Eppink, 2014. “A brief history of the GIF (so far),” Journal of Visual Culture, volume 13, number 3, pp. 298–306.
doi: https://doi.org/10.1177/1470412914553365, accessed 22 August 2018.

T. B. Fitzpatrick, 1988. “The validity and practicality of sun-reactive skin types I through VI,” Archives of Dermatology, volume 124, number 6, pp. 869–871.
doi: https://doi.org/10.1001/archderm.1988.01670060015008, accessed 22 August 2018.

M. Foucault, 2003. “17 March 1976,” In: M. Foucault. “Society must be defended”: Lectures at the Collège de France 1975–1976. Translated by D. Macey. New York: Picador, pp. 239–263.

J. Gaboury, 2013. “A queer history of computing, part 1” (19 February), at http://rhizome.org/editorial/2013/feb/19/queer-computing-1/, accessed 6 October 2013.

K. Gates, 2011. Our biometric future: Facial recognition technology and the culture of surveillance. New York: New York University Press.

A. R. Galloway, 2011. “Does the whatever speak?” In: L. Nakamura and P. A. Chow-White (editors). Race after the Internet. New York: Routledge, pp. 111–127.

A. R. Galloway, 2006. Gaming: Essays on algorithmic culture. Minneapolis: University of Minnesota Press.

M. Gendron and L. F. Barrett, 2009. “Reconstructing the past: A century of ideas about emotion in psychology,” Emotion Review, volume 1, number 4, pp. 316–339.
doi: http://doi.org/10.1177/1754073909338877, accessed 22 August 2018.

I. Gershon, 2015. “What do we talk about when we talk about animation,” Social Media + Society (11 May).
doi: http://doi.org/10.1177/2056305115578143, accessed 22 August 2018.

C. Ginzburg, 2009. “Morelli, Freud and Sherlock Holmes: Clues and scientific method,” History Workshop Journal, volume 9, number 1, pp. 5–36.
doi: https://doi.org/10.1093/hwj/9.1.5, accessed 22 August 2018.

D. Golumbia, 2009. The cultural logic of computation. Cambridge, Mass.: Harvard University Press.

S. Gopinath, 2013. The ringtone dialectic: Economy and cultural form. Cambridge, Mass.: MIT Press.

S. Hall, 2017. “Stuart Hall — Race, gender, class in the media” (2 March), at https://www.youtube.com/watch?v=FWP_N_FoW-I, accessed 1 June 2018.

M. Hardt, 1999. “Affective labor,” Boundary 2, volume 26, number 2, pp. 89–100.

M. D. Harris, 2003. Colored pictures: Race and visual representation. Chapel Hill: University of North Carolina Press.

A. L. Hoffmann, 2018. “Data violence and how bad engineering choices can damage society,” Medium (30 April), at https://medium.com/s/story/data-violence-and-how-bad-engineering-choices-can-damage-society-39e44150e1d4, accessed 1 June 2018.

S. P. Holland, M. Ochoa, and K. W. Tompkins, 2014. “On the visceral,” GLQ, volume 20, number 4, pp. 391–406.
doi: http://doi.org/10.1215/10642684-2721339, accessed 22 August 2018.

P. Ioanide, 2015. The emotional politics of racism: How feelings trump facts in an era of colorblindness. Stanford, Calif.: Stanford University Press.

L. M. Jackson, 2017. “We need to talk about digital blackface in reaction GIFs” (2 August), at http://www.teenvogue.com/story/digital-blackface-reaction-gifs, accessed 2 August 2017.

J. Juul, 2010. “A casual revolution,” In: J. Juul. A casual revolution: Reinventing video games and their players. Cambridge, Mass.: MIT Press, pp. 1–24.

T. Karppi, L. Kähkönen, M. Mannevuo, M. Pajala, and T. Sihvonen, 2016. “Affective capitalism: Investments and investigations,” ephemera, volume 16, number 4, pp. 1–13.

I. Kerr and J. McGill, 2007. “Emanations, snoop dogs, and reasonable expectations of privacy,” Criminal Law Quarterly, volume 52, number 3, pp. 392–432.

S. Kleeman, 2016. “Snapchat’s offensive ‘Bob Marley’ filter gives you instant blackface” (20 April), at https://gizmodo.com/snapchat-s-offensive-bob-marley-filter-gives-you-inst-1772008981, accessed 1 June 2018.

R. Leys, 2017. The ascent of affect: Genealogy and critique. Chicago: University of Chicago Press.

R. Leys, 2011. “The turn to affect: A critique,” Critical Inquiry, volume 37, number 3, pp. 434–472.
doi: http://doi.org/10.1086/659353, accessed 22 August 2018.

A. Light, 2011. “HCI as heterodoxy: Technologies of identity and the queering of interaction with computers,” Interacting with Computers, volume 23, number 5, pp. 430–438.
doi: http://doi.org/10.1016/j.intcom.2011.02.002, accessed 22 August 2018.

A. Mackenzie, 2005. “The performativity of code: Software and cultures of circulation,” Theory, Culture & Society, volume 22, number 1, pp. 71–92.
doi: http://doi.org/10.1177/0263276405048436, accessed 22 August 2018.

M. M. Maldonado, 2009. “‘It is their nature to do menial labour’: The racialization of ‘Latino/a workers’ by agricultural employers,” Ethnic and Racial Studies, volume 32, number 6, pp. 1,017–1,036.
doi: http://doi.org/10.1080/01419870902802254, accessed 22 August 2018.

J. Malkowski and T. M. Russworm (editors), 2017. Gaming representation: Race, gender, and sexuality in video games. Bloomington: Indiana University Press.

P. Manning and I. Gershon, 2013. “Animating interaction,” HAU: Journal of Ethnographic Theory, volume 3, number 3, pp. 107–137.
doi: https://doi.org/10.14318/hau3.3.006, accessed 22 August 2018.

A. McGill, 2016. “Why white people don’t use white emoji?” (9 May), at https://www.theatlantic.com/politics/archive/2016/05/white-people-dont-use-white-emoji/481695/, accessed 26 November 2017.

R. McGregor, 2017. “Apple animoji demo for iPhone X” (12 September), at https://www.youtube.com/watch?v=u65R3Uo0iRc, accessed 3 December 2017.

K. M. Miltner and T. Highfield, 2017. “Never gonna GIF you up: Analyzing the cultural significance of the animated GIF,” Social Media + Society (17 August).
doi: http://doi.org/10.1177/2056305117725223, accessed 22 August 2018.

W. J. T. Mitchell, 2012. Seeing through race. Cambridge, Mass.: Harvard University Press.

N. Montfort and I. Bogost, 2009. Racing the beam: The Atari Video Computer System. Cambridge, Mass.: MIT Press.

M. Mori, 2012. “The uncanny valley” (12 June), at https://spectrum.ieee.org/automaton/robotics/humanoids/the-uncanny-valley, accessed 8 June 2018.

L. Nakamura, 2017. “Afterword: Racism, sexism, and gaming’s cruel optimism,” In: J. Malkowski and T. M. Russworm (editors). Gaming representation: Race, gender, and sexuality in video games. Bloomington: Indiana University Press, pp. 245–250.

L. Nakamura, 2009a. “Don’t hate the player, hate the game: The racialization of labor in World of Warcraft,” Critical Studies in Media Communication, volume 26, number 2, pp. 128–144.
doi: http://doi.org/10.1080/15295030902860252, accessed 22 August 2018.

L. Nakamura, 2009b. “The socioalgorithmics of race: Sorting it out in Jihad worlds,” In: S. Magnet and K. Gates (editors). The new media of surveillance. New York: Routledge, pp. 149–161.

L. Nakamura, 2002. Cybertypes: Race, ethnicity, and identity on the Internet. New York: Routledge.

S. Ngai, 2015. “Visceral abstractions,” GLQ, volume 21, number 1, pp. 33–63.
doi: http://doi.org/10.1215/10642684-2818648, accessed 22 August 2018.

S. Ngai, 2004. Ugly feelings. Cambridge, Mass.: Harvard University Press.

S. U. Noble, 2018. Algorithms of oppression: How search engines reinforce racism. New York: New York University Press.

R. O’Neill, 2008. Digital character development: Theory and practice. Burlington, Mass.: Elsevier/Morgan Kaufmann.

J. A. Russell, 1994. “Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies,” Psychological Bulletin, volume 115, number 1, pp. 102–141.
doi: http://doi.org/10.1037/0033-2909.115.1.102, accessed 22 August 2018.

J. C. Scott, 1998. Seeing like a state: How certain schemes to improve the human condition have failed. New Haven, Conn.: Yale University Press.

T. Silvio, 2010. “Animation: The new performance?” Journal of Linguistic Anthropology, volume 20, number 2, pp. 422–438.
doi: http://doi.org/10.1111/j.1548-1395.2010.01078.x, accessed 22 August 2018.

T. Sito, 2013. Moving innovation: A history of computer animation. Cambridge, Mass.: MIT Press.

C. A. Stabile and M. Harrison (editors), 2003. Prime time animation: Television animation and American culture. New York: Routledge.

L. Stark, 2018. “Algorithmic psychometrics and the scalable subject,” Social Studies of Science, volume 48, number 2, pp. 204–231.
doi: https://doi.org/10.1177/0306312718772094, accessed 22 August 2018.

L. Stark and K. Crawford, 2015. “The conservatism of emoji: Work, affect, and communication,” Social Media + Society (8 October).
doi: https://doi.org/10.1177/2056305115604853, accessed 22 August 2018.

J. Sterne, 2012. MP3: The meaning of a format. Durham, N.C.: Duke University Press.

J. Sterne, 2003. The audible past: Cultural origins of sound reproduction. Durham, N.C.: Duke University Press.

C. Thomas, 2017. “The Internet Isn’t here for FaceApp’s new, race-swapping filters” (9 August), at https://www.out.com/news-opinion/2017/8/09/internet-isnt-here-faceapps-new-race-swapping-filters, accessed 1 June 2018.

P. Tutt, 2015. “Apple’s new diverse emoji are even more problematic than before,” Washington Post (10 April), at https://www.washingtonpost.com/posteverything/wp/2015/04/10/how-apples-new-multicultural-emojis-are-more-racist-than-before/, accessed 26 November 2017.

 


Editorial history

Received 8 August 2018; accepted 9 August 2018.


Copyright © 2018, Luke Stark. All Rights Reserved.

Facial recognition, emotion and race in animated social media
by Luke Stark.
First Monday, Volume 23, Number 9 - 3 September 2018
https://firstmonday.org/ojs/index.php/fm/article/view/9406/7572
doi: http://dx.doi.org/10.5210/fm.v23i9.9406
