
Nothing new here: Emphasizing the social and cultural context of deepfakes by Jacquelyn Burkell and Chandell Gosse



Abstract
In the last year and a half, deepfakes have garnered a great deal of attention as the newest form of digital manipulation. While not problematic in and of itself, deepfake technology exists in a social environment rife with cybermisogyny, toxic technocultures, and attitudes that devalue, objectify, and use women’s bodies against them. The basic technology, which in fact embodies none of these characteristics, is deployed within this harmful environment to produce problematic outcomes, such as the creation of fake and non-consensual pornography. The sophisticated technology and the metaphysical nature of deepfakes as both real and not real (the body of one person, the face of another) make them resistant to many technical, legal, and regulatory solutions. For these same reasons, defining the harm deepfakes cause to those targeted is similarly difficult, and very often targets of deepfakes are not afforded the protection they require. We argue that it is important to emphasize the social and cultural attitudes that underlie the nefarious use of deepfakes, and thus to adopt a material-based, as opposed to a purely technological, approach to understanding the harm presented by deepfakes.

Contents

Introduction
Fake porn is nothing new
Now anyone can create fake porn — but not really
Detecting and deleting deepfakes
Legal responses to deepfakes
Locating the harm
Conclusion

 


 

Introduction

On 11 December 2017 Motherboard journalist Samantha Cole (2017) broke a story about AI-assisted fake pornography that was being produced by a Redditor under the pseudonym Deepfakes. The issue was picked up in the European print media by late January 2018 (Keach, 2018), and in the North American print media by late February and early March (O’Carroll, 2018; Roose, 2018). Motherboard and other news outlets followed the story as it developed, quickly coming to label the fake pornography using the pseudonym under which it was first posted to Reddit: deepfakes. In January 2018, a different user in the subreddit devoted to deepfakes posted a desktop app, called FakeApp, which simplified the process of creating deepfakes (Cole, 2018a). Following up on her original story, Cole (2018a) posted a new article to Motherboard entitled ‘We are truly fucked: Everyone is making AI-generated fake porn now.’ In that story, Cole (2018a) reported that ‘all the tools one needs to make these videos are free, readily available, and accompanied with instructions that walk novices through the process.’ In other words, the capacity to create non-consensual fake pornography had become available to anyone with what Cole termed ‘terrifying speed.’ The goal of this paper is to reorient the conversation about deepfakes from one that focuses solely on technological solutions to one that engages with the socio-cultural context in which deepfakes thrive. We do this by situating deepfakes in a larger context of fake pornography; addressing the ease with which this technology has become available for use; exploring the pitfalls of proposed solutions; and pointing to the difficulty in identifying the harms associated with deepfakes.

 

++++++++++

Fake porn is nothing new

Fake pornography, and indeed the general practice of ‘pasting’ one face onto another body, or of depicting someone in a scenario or activity where they were not in fact present, did not start with deepfakes. Famous fictional characters and celebrities have long been written into innumerable newly-imagined or entirely fictional scenarios, including pornographic scenarios. Such fan fiction, including pornographic material, is a fixture of the Internet, and the practice of writing people and fictional characters into new storylines predated that mode of distribution (Fanlore.org, n.d.). There is also a long history of representing well-known figures in image-based, rather than text-based, pornography. In eighteenth-century France, for example, Marie Antoinette and Louis XVI were depicted in sexually explicit cartoons (Brown, 2010), and ‘Tijuana Bibles,’ produced in the U.S. from the 1920s to the 1960s, were obscene visual parodies of popular comic strips of the day (Guy-Ryan, 2016).

Deepfakes are not even the first example of manipulating digital images to create non-consensual pornography. As Michael Grothaus (2014) puts it, ‘as far back as I can remember, there’s been a big online demand for this particular brand of smut that involves stitching the heads of celebrities on to the bodies of porn stars.’ Kashmira Gander (2016) documents thousands of requests posted to the adult requests section of the imageboard Web site 4chan. These requests ask members of the community with photoshopping skills to alter photographs for purposes obviously related to sexual objectification and often to ‘revenge’: requests include, for example, ‘remove her top: please nudeshop my friend’s mom,’ or to ‘nudeshop’ a photo of someone who had ‘told everyone my girl’s a slut’ (Gander, 2016). Fake celebrity porn GIFs (short animated sequences) are also easily accessible online: these involve editing existing footage to isolate particular sequences and to replace the faces of the original participants with those of celebrities. Image and video editing to create fake pornography involves manipulations typically carried out by non-professional ‘experts’ who compete with other producers for accolades from ‘fans’ — and they produce these images and videos for free (Gander, 2016). These same skills, of course, have long been in demand for the creation of other, non-pornographic, manipulated images and videos, particularly in the Hollywood context. The skills required to create seamless and realistic fakes using image or video editing software are significant, and require many hours of training and practice. The shift with deepfakes, as noted by Cole (2018a), was to remove the ‘skills’ requirement from the production of these images and videos. As Nicola Henry and her colleagues wrote in February 2018 (Henry, et al., 2018), ‘now, anyone with a high-powered computer, a graphics processing unit (GPU) and time on their hands can create realistic fake videos — known as ‘deepfakes’ — using artificial intelligence (AI).’

Although the term ‘deepfakes’ originally applied to fake pornography produced using a particular set of AI techniques (Cole, 2017), it quickly came to denote a much larger range of AI-assisted techniques for editing video and images (Vincent, 2018). Much of the discussion of deepfakes refers not only to faceswapping, but also to applications that allow users to create audio of the targets ‘speaking’ words selected by the source (cf., Vincent, 2017), synthesized video lip-synched to any input audio so that the target appears to be speaking (Suwajanakorn, et al., 2017), and ‘puppetry’ video in which the movements of the target, including head and mouth movements, are controlled by another person (cf., Kim, et al., 2018; Thies, et al., 2019). These other technologies, many not widely available to the general public, are potentially even more realistic than deepfakes, with results that are in some cases (and unlike most current examples of deepfakes) virtually indistinguishable from real footage of the individual (cf., Leslie, et al., 2018).

In principle, there is no reason why these same ‘puppetry’ techniques cannot be extended to control entire bodies: it is only a matter of time before we have AI-assisted editing techniques that allow a source actor to control the full body movements of a target (work that represents progress in this area includes Achenbach, et al., 2017; Haberman, et al., 2018; Joo, et al., 2018). Indeed, researchers at the University of California have developed software that copies the dance movements of one body — that of a professional dancer — and maps them on to another under controlled conditions (Quach, 2018a). Developers are also moving toward applications that allow users to physically experience sex with ‘virtual’ others (Knowles, 2016) — avatars that ultimately could be designed to be indistinguishable from a ‘real’ individual. The eventual end state is obvious, and easily predicted: the ability to produce ‘made to order’ pornography in which actors, actions, locations, and even the physical experience are all controlled by the audience. There are already moves in this direction. Naughty America, a North American adult entertainment company, is launching a service that ‘allows customers to commission their own deepfakes clips, which can include superimposing their own faces onto the bodies of porn performers, or incorporating porn stars into different environments’ (e.g., the bedroom of the person commissioning the clip) (Roettgers, 2018).

MacKinnon (1987), among others, famously (and somewhat controversially) argued that pornography objectifies women; if this is true, fake pornography takes that objectification even further by eliminating any vestige of personal agency and consent on the part of those who appear in these productions. In his discussion of fake celebrity porn, Marshall (2016) puts it this way:

‘[Fake celebrity porn] represents a form of possession of the public figure, a fantasy belief in the capacity of complete revelation and exposure of the public personality. This is its tonic for the user. The images themselves are very often obscene and degrading in their graphic bodily detail, and this identifies a further form of possession and ownership that is heightened because of the fame and value of the personality.’ [1]

Women who experience the violation of being the subject of fake porn are thus (whether they are aware of the material or not) subject to the will and desires of those who consume such productions, effectively being possessed or owned, and certainly in some sense controlled, by the viewers. This has always been the case with fake pornography, but new technologies make the false representations that much more ‘real’, and will thus make it easier to, as Chesney and Citron (2018) put it, ‘exploit people’s sexual identities for others’ gratification.’ [2]

 

++++++++++

Now anyone can create fake porn — but not really

Deepfakes support a deskilling of the ‘faceswapping’ process in video, and that does make them special — but perhaps not quite as powerful or revolutionary as the press would have us believe. The hype makes it sound straightforward: ‘the faces of celebrities, politicians, children, or pretty much anyone, can be pasted over faces of porn stars in X-rated movies using freely available machine-learning software’ (Quach, 2018b). Reality demonstrates something quite different. Kevin Roose (2018), for example, wrote in the New York Times of his experiences trying to create deepfake videos. Even with the assistance of two technical experts to select and preprocess the images and to run the software, the results (after eight hours of training) were ‘blurry and bizarre’ (Roose, 2018). Roose (2018) remarked of a FakeApp video of his face on Ryan Gosling’s body that ‘only the legally blind would mistake the person in the video for me.’ He shows two other ‘fake’ videos in the same story. They are ‘glitchy’: the colouring of the face flickers and the edges of the face seem unstable. In the ‘best’ clip his face replaces that of Jake Gyllenhaal. The result (as with all deepfakes, and a natural outcome of the particular computational approach) is a strange amalgam of the two faces, rather than a seamless and complete replacement of one face by another. The results observed by Roose (2018) are not an anomaly. Many have noted the ‘uncanny valley’ sensation created by deepfakes such as those produced by FakeApp (Scott, 2018), which, despite all the hype, for now remain easily distinguishable from the ‘real’ thing.

There is a significant learning curve involved in using FakeApp. The DeepFaceLab project, for example, provides a multi-page and highly technical online tutorial for the use of its application. One introduction to creating deepfakes (Hui, 2018) discusses the basic concept, noting that ‘first, we collect hundreds or thousands of pictures for both persons’ (the target, who is being swapped in, and the source, who is being swapped out) — and that is just the beginning. It is not just any two faces that will do — the faces have to be carefully chosen to be enough like each other to make the transformation work. There are, however, AI tools to assist with that problem as well (Morrissey, 2012). Even having the ‘right’ faces is not sufficient: the pictures used to train FakeApp must include an appropriate variety of expressions, angles, and perspectives so that each of the two faces is ‘learned’ in a full range of poses, and thus can be reproduced in any of them. For optimal performance, all images (source and target) should share similar lighting — and the images must be pre-processed to identify and crop the faces (there are also automated tools available for this step). More recently there have been efforts to simplify and streamline the creation of deepfakes. In 2019, Samsung developed software that can create deepfakes, albeit somewhat unconvincingly, using only one image (Barber, 2019).
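
The pre-processing step described above — locating, cropping, and resizing faces across a large set of images — is itself routinely automated. The sketch below is a minimal illustration of that step only, assuming Python with OpenCV (cv2) and its bundled Haar cascade face detector; the directory names and the 256-pixel crop size are placeholders chosen for the example, not settings prescribed by FakeApp or DeepFaceLab.

```python
# Minimal face-extraction sketch: detect faces in a folder of images,
# crop them, and save uniformly sized copies for later training.
import glob
import os

import cv2

# OpenCV ships a pre-trained Haar cascade for frontal faces.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(src_dir, dst_dir, size=256):
    os.makedirs(dst_dir, exist_ok=True)
    for i, path in enumerate(sorted(glob.glob(os.path.join(src_dir, "*.jpg")))):
        img = cv2.imread(path)
        if img is None:
            continue
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for j, (x, y, w, h) in enumerate(faces):
            crop = cv2.resize(img[y:y + h, x:x + w], (size, size))
            cv2.imwrite(os.path.join(dst_dir, f"face_{i:04d}_{j}.png"), crop)

# Example call (hypothetical paths): extract_faces("source_photos", "source_faces")
```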

While we argue that technology is not entirely to blame, neither are we saying that technology is entirely blameless. The ease and accessibility of sharing information, and images in particular, no doubt increases the consequences of deepfakes. The point is that even with the software assistance that is available, creating deepfakes requires work, and it is profoundly intentional work: it does not happen accidentally, and it does not happen automatically. In other words, the algorithm that underlies deepfakes (or any of the even more powerful face-swapping applications that will follow in the footsteps of deepfakes) cannot be set loose in digital culture with the necessary result of non-consensual fake porn. That result must be planned.

 

++++++++++

Detecting and deleting deepfakes

Since Cole (2017) first flagged the problem of deepfakes in her Motherboard article, many platforms have moved to ban the content. Gfycat, which had quickly emerged as the most popular site for hosting deepfakes, banned the content in late January 2018 (Cole, 2018b); a Gfycat spokesperson described non-consensual pornography, a category that includes deepfakes, as ‘objectionable’, noting that their terms of service allow them to ‘remove any content that we find objectionable’ (Cole, 2018b). Gfycat’s response followed closely on the heels of a similar action by Discord, a chat platform for gamers, which had shut down a user-created chat group where participants were sharing deepfake porn (Price, 2018), also citing a general prohibition of non-consensual pornography. Pornhub, where the deepfake community also looked to distribute the material, was slightly slower to respond, indicating on 6 February 2018 that they would ban deepfakes on the grounds that they are non-consensual porn, putting these productions in the same category as the non-consensual sharing of intimate images (colloquially known as revenge porn) (Cole, 2018c). Around that same date, Reddit updated their site-wide rules against involuntary pornography and sexual or suggestive content involving minors (Reddit, ‘Update on site-wide rules’), resulting in the banning of a subreddit devoted to making deepfake pornography and of other subreddits focused on similar content (Robertson, 2018).

Despite these bans, non-consensual fake pornography continues to appear on these and other sites (cf., Scott, 2018) and there are questions about how actively the various platforms are enforcing the bans (cf., Fingas, 2018). But even if platforms have the best of intentions, there is the issue of identification. Fake porn labeled as such could easily and effectively be removed as per Web site standards and codes of conduct. The real challenge occurs when these videos are not identified as fake — the problem then becomes one of detection. We see this problem with the never-ending efforts by social media companies to manage the spread of disinformation (read: fake news). Like fake pornography, disinformation is not an entirely new phenomenon, but technical advances — including deepfake technology — create conditions under which disinformation is both easier to produce and harder to detect (Gorbach, 2018).

There is money to be made in detecting deepfakes, in large part because of concerns about political interference: Internet startup Truepic, for example, provides image authentication for photos and videos; they recently attracted an additional US$8 million in funding to develop subtle forensic techniques that can be used to identify forged or fake videos (Constine, 2018). Facebook’s existing terms of service preclude the posting of adult nudity and sexual activity, and thus ban all pornography, fake pornography included. Facebook is, however, deeply affected by other forms of fake videos. Interestingly, Facebook waited until September 2018 to make a formal announcement about deepfakes, and at that point revealed their own hyper-realistic deepfake-esque avatars (Murphy, 2018); the company is also expanding their fact-checking processes to photos and videos (Woodford, 2018) in an effort to ensure the accuracy of their content (cf., Keane, 2018). Fueled by funding from the Defense Advanced Research Projects Agency (DARPA), researchers in the United States (U.S.) have also been working on the problem of deepfakes, with some success. Siwei Lyu and colleagues from the State University of New York at Albany, for example, quickly identified that fake videos could be detected by examining eye blinking (Li, et al., 2018). In what has been aptly termed ‘cat-and-mouse games of outdoing each other’ (Cole, 2018e), deepfake producers immediately responded by incorporating eye-blinking images in training sets, thus undermining this method of detecting fake videos. Lyu and other researchers are also exploring other techniques, such as the examination of visible indicators of heartbeats or pulses (e.g., small changes in skin colour; cf., Tait, 2018), for the identification of fake videos. While these results are promising, there is no doubt that they too will eventually be rendered ineffective as advances in the software used to produce fake videos support increasingly realistic representations.
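
Li, et al. (2018) train a neural network on sequences of eye states rather than relying on a hand-written rule, but the intuition behind their result — that early deepfakes blink far less often than real people — can be illustrated with a much cruder heuristic. The sketch below is not their method: it computes a geometric ‘eye aspect ratio’ per frame and reports how often the eyes appear closed, assuming Python with OpenCV, dlib, and dlib’s 68-point facial landmark model; the 0.2 threshold is an assumption chosen only for demonstration.

```python
# Crude blink-screening sketch: measure how often the eyes appear closed.
# A clip with essentially no eye closure across many frames is suspect.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmarks around one eye, in dlib's 68-point ordering.
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

def closed_eye_fraction(video_path, threshold=0.2):
    cap = cv2.VideoCapture(video_path)
    closed = total = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            landmarks = predictor(gray, face)
            pts = np.array([[p.x, p.y] for p in landmarks.parts()])
            ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
            total += 1
            if ear < threshold:
                closed += 1
    cap.release()
    return closed / max(total, 1)
```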

Others, recognizing the futility of pursuing detection algorithms that will likely be defeated as the technology advances, have focused on a different technological approach: authentication (Allibhai, 2018). These techniques, however, are also vulnerable to defeat by technological advances. Gfycat developed two software tools that assisted in identifying faked videos, based on an authentication approach (Liao, 2017; Matsakis, 2018). Project Maru and Project Angora are AI technologies that were originally developed for Gfycat to index GIFs, and the platform is repurposing these to help identify face-swapped GIFs, including fake pornography. The software developed under Project Maru identifies fake videos by flagging productions that only partially match a famous face, capitalizing on the less-than-perfect substitutions supported by deepfakes. If a potentially fake video is identified, a version of the Project Angora software is used to match the body and background footage (the face is masked out) with other footage found online. If the faces do not match in the two GIFs, then the software concludes that the first is faked. Blockchains to support digital signatures have also been proposed as part of the authentication solution (Amber Video, 2018; Hasan and Salah, 2019). Chesney and Citron (2018), while not hopeful that the technology will emerge in the near future, suggest that eventually ‘immutable life logs’ [3] could become a service; these logs would, according to the authors, ‘make it possible for a victim of a deep fake to produce a certified alibi credibly proving that he or she did not do or say the thing depicted’ [4].
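
The authentication proposals cited above share a common core: record a cryptographic fingerprint of footage at capture or publication time, anchor it somewhere tamper-evident (a blockchain, a timestamping service, an ‘immutable life log’), and compare later copies against it. The Python sketch below illustrates only that core idea and is not the design of any system named here; an in-memory dictionary stands in for the tamper-evident ledger.

```python
# Hash-based authentication sketch: any edit to the file — including a
# face swap or even a simple re-encoding — produces a different digest.
import hashlib

registry = {}  # stand-in for a blockchain or trusted timestamping service

def fingerprint(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def register(path, video_id):
    # Record the digest when the video is captured or published.
    registry[video_id] = fingerprint(path)

def verify(path, video_id):
    # True only if the file is bit-for-bit identical to the registered copy.
    return registry.get(video_id) == fingerprint(path)
```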

A recent think-tank of ‘independent and company-based technologists, machine learning specialists, academic researchers in synthetic media, human rights researchers, and journalists’ (Gregory, n.d.) identified a range of possible approaches to detecting and combating fake video. As an organization dedicated to helping journalists and human rights defenders use video to uncover injustice, WITNESS is focused on finding ‘pragmatic proactive ways to mitigate the threats that widespread use and commercialization of new tools for AI-generated synthetic media [...] potentially pose to public trust, reliable journalism and trustworthy human rights documentation’ (Gregory, n.d.). The results of their think-tank suggest a multi-pronged approach that includes media literacy education for audiences, skills training for information providers, collaboration among media organizations, development of manual and automated detection, and validation and authentication processes (WITNESS Media Lab, n.d.). They also suggest some technological approaches that could limit vulnerability to malicious deepfakes, including modifying potential training images in ways that render those images useless for training algorithms that produce fake videos while leaving them interpretable to the human eye.

These initiatives are important for reasons that extend well beyond the realm of fake pornography and into other areas, including the political arena, where fake videos could be used to publicly misrepresent the actions and statements of individuals (Khalaf, 2018). Technological limitations aside, tools for detecting fake videos will not entirely eliminate this material, and in particular will not eliminate fake pornography, for one very basic reason: in many cases, and particularly for fake pornography, the audiences of these productions know that they are fake, and they do not care. The same cannot be said, however, for the targets and victims of fake pornography, who in many cases very much care about the ways they are being depicted and how those images are being used — real or not.

 

++++++++++

Legal responses to deepfakes

One key question is whether there are legal remedies to protect against non-consensual fake pornography. Much of this discussion has taken place in the U.S. context — and the conclusions are generally not hopeful. In fact, the production of fake videos attracts protection under U.S. and Canadian legal provisions that protect free speech or freedom of expression (Chesney and Citron, 2018) and fair use/fair dealing provisions that allow the use of someone else’s material for the purposes of comment, criticism, or parody (Black, et al., 2018; Lynn, 2018). There is also the problem of the very nature of these productions. They are chimeric — literally the amalgam of the head of one person and the body of another — and their double nature as being and not being actual representations of the target creates questions of ownership (Whose image is it?) and truth/falsehood (What is being represented? Who are the representations about?).

Under copyright provisions in multiple jurisdictions, the copyright owner holds rights over the reproduction and distribution of a copyrighted work; they also have ‘moral rights’, or a right to the integrity of the work. The holders of copyright for the photographs that are used to swap in the face, and the video into which the face is swapped, may be able to seek damages and to have copies of the fake video destroyed (Black, et al., 2018). Non-consensual pornography legislation would appear to hold some promise — but the premise of legal responses to that issue generally rests on privacy concerns and, as Wired writer Emma Grey Ellis (2018) points out, ‘You can’t sue someone for exposing the intimate details of your life when it’s not your life they’re exposing.’ Actions based in defamation law face a similar problem: if the videos are identified as being fake, they cannot easily be made subject to a claim that they promote false facts about an individual — a claim that is fundamental to any defamation action (Black, et al., 2018). In those cases where the producer of fake pornographic videos uses those productions for economic gain, targets may be able to seek relief under appropriation of personality or right of publicity provisions (Black, et al., 2018; Greene, 2018); such claims, however, will be easier for celebrities to make, as compared to non-celebrity women who are targeted (Ellis, 2018). If fake pornography is used to blackmail targets, or for other fraudulent purposes, criminal provisions against extortion or fraud would apply (Black, et al., 2018), and the production and distribution of fake pornography may also in some situations be prosecuted as harassment (Black, et al., 2018; Greene, 2018).

Mary Anne Franks, who was instrumental in the development of U.S. legislation on non-consensual pornography, is greatly concerned about the adequacy of legal responses to fake pornography (Ellis, 2018). Franks believes that responses based in defamation law hold the most promise for the average citizen, but recognizes that many of these videos are made for private enjoyment rather than for the humiliation of the target, which creates problems for defamation claims. Ultimately, however, even successful individual legal responses to fake pornography will constitute a never-ending battle (Black, et al., 2018), coming into play only after the harm has occurred, and addressing only some aspects of the harm experienced by targets.

Like the technical solutions described above, these legal approaches to the problem of deepfakes — legal actions that help to protect the rights and dignity of the targets of fake pornography — are important. At the same time, these remedies do not seem to recognize or address the core of the problem, or what we imagine (or might know) to be the most basic experience of anyone subjected to this type of manipulation. At root, there is something powerfully disturbing and deeply wrong about being an involuntary participant in someone’s sexual fantasies (made manifest) and having your likeness co-opted for the sexual purposes of an (unknown) other.

 

++++++++++

Locating the harm

Producers of fake pornography, and those who consume it, are quick to defend the practice on the grounds of freedom of expression, noting their right to produce and consume parody and satire, and asserting that neither practice is intended to harm the targets. After Reddit moved to delete the deepfakes subreddit, one user remarked, ‘Personally, I think a U.S. based company should strive to uphold the rights and values our nation was founded on. Oh well’ (Reddit, ‘/r/Deepfakes has been banned’). The civil liberties director of the Electronic Frontier Foundation (EFF) expressed similar concerns: ‘From a civil liberties perspective, I am [...] concerned that the response to this innovation will be censorial and end up punishing and discouraging protected speech [...] It would be a bad idea, and likely unconstitutional, for example, to criminalize the technology’ (Beres and Gilmer, 2018). One Redditor asserted that consent was a non-issue in the context of fake pornography: ‘I don’t understand why they used the word consensual here. It doesn’t require consent. It’s a fake video ...’ (Hlve, 2018); someone who admitted viewing fake pornography on a regular basis did not see a problem with the practice, remarking ‘[It’s] taking the consensual nudity of an adult porn star and simply placing a different face on it’ (Farokhmanesh, 2018). Little consideration seems to be given to the ethics of the practice. As Lee (2018) notes, ‘only rarely do we see flickers of a heavy conscience as they discuss the true effects of what they are doing. Is creating a pornographic movie using someone’s face unethical? Does it really matter if it is not real? Is anyone being hurt?’ Even these, however, are not the key questions: as Lee (2018) suggests, ‘Perhaps they should ask: How does this make the victim feel?’ (emphasis added).

In 1900, Abigail Roberson had strong feelings about the appropriation of her image, and these prompted a successful legal suit against two companies for using her image without consent in a commercial advertisement. Roberson’s lawyer argued that the companies invaded her ‘right of privacy’ and had stolen her property, ‘asserting that one’s image is one’s property’ (Jhaveri, 2018). Roberson’s case was later overturned on the grounds that ‘no harm’ was in fact done. The judge presiding over her case wrote of the final decision ‘that Roberson’s physical property hadn’t been stolen, that her reputation wasn’t damaged, and that her distress was purely mental, so she didn’t have a valid case’ (Jhaveri, 2018). Hers was the first legal case in New York to use the phrase ‘right of privacy’, and was a catalyst for widespread discussion about the non-consensual use of an individual’s image and the potential harm such use can have. Jhaveri (2018) points out that this case remains relevant today ‘because a question at its heart remains unanswered: What are the legal limits on what someone can do with an image of your face?’

The damage or harm caused to those targeted by non-consensual fake pornography is ill-defined in social and cultural — as well as legal — contexts. Enter the Internet, avatars, and digital photography, and questions about the right of privacy, someone’s image, and the harm caused by these transgressions grow more complex. Questions about the type of harm these targets experience have circulated since the early days of online communities. Online users, and women in particular, were targets of non-consensual pornography and image abuse (Powell, et al., 2018) well before they had to contend with technology as sophisticated as deepfakes.

Almost a century after Roberson’s lawsuit, in 1993, the text-based online community and multi-user domain (MUD), LambdaMOO, faced one of the earliest examples of technology-facilitated forced participation in representations of sexual activity. Julian Dibbell (1993) refers to the event as ‘a rape in cyberspace’; others term it ‘the Bungle Affair’. A LambdaMOO user who went by the pseudonym ‘Mr. Bungle’ used a ‘voodoo doll’ subprogram available within the community to attribute sexually violent and violating actions to other characters without consent. Shortly after the incident, one user, who went by the username exu, posted a public statement to the community explaining:

‘[...]I’m not calling for policies, trials, or better jails. I’m not sure what I’m calling for. Virtual castration, if I could manage it. Mostly, [this type of thing] doesn’t happen here. Mostly, perhaps I thought it wouldn’t happen to me. Mostly, I trust people to conduct themselves with some veneer of civility. Mostly, I want his ass’ (Dibbell, 1993).

Reflecting on a conversation with exu, Dibbell (1993) recalls: ‘posttraumatic tears were streaming down her face — a real-life fact that should suffice to prove that the words’ emotional content was no mere fiction.’

The incident described in Dibbell’s article was only one of innumerable instances of non-consensual pornography and of the non-consensual use of women’s digital existence for sexual and violent ends. In 2013, for example, hackers developed a mod for Grand Theft Auto 5 that allowed them to access another player’s game, ‘[...] often as a naked or near-naked man, [and] lock onto another player and then thrust persistently back and forth,’ effectively carrying out a virtual rape or sexual assault of the targeted player (Kasumovic and Brooks, 2014). Virtual rape is not the only form of tech-facilitated violence and abuse against women in digital culture. In 2012, a man by the username @Bendilin created a game titled ‘Beat up Anita Sarkeesian’ (Fernandez-Blance, 2012). The patently misogynistic ‘game’ involved clicking on an image of the face of media critic and Feminist Frequency founder Anita Sarkeesian in order to inflict a virtual blow: each new click resulted in a more ‘battered’ and thus more disturbing version of her image. Two years later, game developer Zoë Quinn endured an onslaught of online abuse initially perpetrated by an ex-romantic partner. The online abuse included, among many other offences, ‘sexual or violent images with [her] face Photoshopped into them’ [5].

In each of these cases, the potential for and possible magnitude of harm has been discussed; in none of these cases, however, has the harm been given the legal or social legitimacy it warrants. As mentioned earlier, deepfakes’ double nature as both being and not being the individual in question makes it difficult for these productions to fit into existing frameworks for understanding harm. As with the other examples presented here, the harm triggered by deepfakes is often not easily characterized or quantified. The damage is metaphysical and ontological in nature, inflicted on important facets of the self, including reputation and the psyche.

Citron (2018) offers a possible location for the harm. She advances the concept of ‘sexual privacy’, noting that it has a ‘special normative significance to individuals, groups, and society’ [6]. She identifies sexual privacy as key to identity, since sexual privacy ‘frees us to ‘be and become’ who we want to be’ [7], and to equality, because ‘coerced visibility of our intimate lives — an invasion of sexual privacy — can lead to marginalization’ [8]. Citron (2018) explicitly identifies the harm of non-consensual fake pornography as a harm to sexual privacy:

‘Deep fake sex videos involve an invasion of sexual privacy by exercising dominion over individuals’ intimate identities. They reduce individuals to their genitalia and breasts, creating a sexual identity not of their making. They undermine the ability to present identities with integrity, affixing them with damaged ones. They are an affront to individuals’ sense that their intimate personalities are their own’ [9].

According to Citron (2018), ‘the harm of sexual privacy invasions is profound’ [10], undermining identity development by denying a target ‘agency over their intimate lives’ [11]. These types of invasions create a situation allowing a ‘single aspect of one’s self to eclipse all other aspects’ [12], in terms both of how targets see themselves and how they are seen and judged by others, reducing them to sexual objects to be ‘exploited and exposed’ [13]. This description seems to fit very well with the experiences and concerns expressed by targets of tech-facilitated violence and abuse, including the harm of non-consensual fake pornography (Citron, 2014; Duggan, 2014; Hodson, et al., 2018; Vitak, et al., 2017).

Understanding the implications of deepfakes requires deep consideration of the lines, if any, between the virtual and the real. But most importantly, understanding the harm caused by deepfakes necessarily involves moving beyond a focus on the technology to incorporate the attitudes and behaviors — the misogyny and sexism — that drive their creation. If the examples discussed in this section, coupled with the ‘uncanny’ and not quite realistic representation of celebrities in early deepfakes, prove anything, it is that the harm of fake pornography is not caused by the realism of the representation. Whether the representation is textual, image-based, or in video form such as deepfakes, and even when the fake pornographic representation ‘[...] is neither exactly real nor exactly make-believe’ (Dibbell, 1993), the harm is ‘[...] nonetheless profoundly, compellingly, and emotionally true.’

Cole (2018d) wants to remind everyone that ‘[d]eepfakes were created as a way to own women’s bodies.’ This is a critical point and it is important to put this notion in tension with the reality that even patently unrealistic misogynistic representations can cause social and emotional damage to targets. Producers and consumers of fake pornography could (and do) try to argue that they do not cause harm because the videos are not real — but for the targets whose faces are the ones being used, that argument is likely moot, since the intention is evident. Moreover, the position is disingenuous — because realism is important and valued by producers and consumers, and they respond to these productions, in some sense, as real representations. From the perspective of the viewer the fake videos are realistic enough. In an interview with the Verge, sociologist Katherine Cross explains:

‘Deepfake users ‘understand, intuitively, that this is more real than they want to admit. [...] If it’s all totally harmless and essentially unreal, they wouldn’t mind putting together deepfake porn of people they know. But of course, they do, and it’s because they understand the symbolism of all this. What are the semiotics of a woman, in a pornographic frame, on her knees [giving] a blowjob and why does it make you so uncomfortable to put your mother or sister in that role? Why, then, do it to a woman you don’t know?’’ (Farokhmanesh, 2018)

These questions, which should remain at the forefront of conversations surrounding deepfakes, are easily lost when we become preoccupied with the technology itself.

 

++++++++++

Conclusion

Deepfakes are not problematic in and of themselves, and indeed the technology can be used to many positive (or, at least, humorous) ends. Deepfakes, however, exist in a social environment rife with cybermisogyny (Mantilla, 2015, 2013), toxic technocultures (Massanari, 2017), and attitudes that devalue, objectify, and use women’s bodies against them. The basic technology, which in fact embodies none of these characteristics or propensities, is deployed within this harmful environment to produce an extremely problematic outcome: fake or non-consensual pornography.

In order to protect potential targets, it is critical that we consider, and develop, technological solutions that will identify and limit fake pornography. We must rely on platforms to enact policies and terms of service that prohibit these productions. We also must ensure that our legal systems provide adequate protection for targets of non-consensual fake pornography.

Even together, however, these measures will be insufficient to address the problem — because profoundly, and centrally, this technology would not be used against women if it did not exist in a deeply misogynistic environment. You do not need deepfakes to express misogyny — all you need is a negative attitude toward women coupled with the desire to make that attitude public. And, critically, fixing the deepfake problem does nothing to address the underlying problem of misogyny. That requires recognition of the gendered nature of the problem, and an emphasis on the social and cultural contexts that are the conditions in which misogyny in all forms, including non-consensual fake pornography, thrives.

 

About the authors

Jacquelyn Burkell is an associate professor in the Faculty of Information and Media Studies at Western University.
E-mail: jburkell [at] uwo [dot] ca

Chandell Gosse is a Ph.D. candidate in the Faculty of Information and Media Studies at Western University.
E-mail: cgosse [at] uwo [dot] ca

 

Acknowledgments

This project is part of the eQuality Project (http://www.equalityproject.ca/). Funding for this project comes from the Social Sciences and Humanities Research Council.

 

Notes

1. Marshall, 2016, p. 271.

2. Chesney and Citron, 2018, p. 17.

3. Chesney and Citron, 2018, p. 54.

4. Ibid.

5. Quinn, 2017, p. 51.

6. Citron, 2018, p. 22.

7. Citron, 2018, p. 10.

8. Citron, 2018, p. 13.

9. Citron, 2018, p. 32.

10. Citron, 2018, p. 39.

11. Ibid.

12. Citron, 2018, p. 39.

13. Citron, 2018, p. 40.

 

References

Jascha Achenbach, Thomas Waltemate, Marc Erich Latoschik, and Mario Botsch, 2017. “Fast generation of realistic virtual humans,” VRST ’17: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, article number 12.
doi: https://doi.org/10.1145/3139131.3139154, accessed 29 November 2019.

Shamir Allibhai, 2018. “Detecting fake video needs to start with video authentication,” Hackernoon (18 October), at https://hackernoon.com/detecting-fake-video-needs-to-start-with-video-authentication-224a988996ce, accessed 20 September 2019.

Amber Video, 2018. “How blockchains can be used in authenticating video and countering deepfakes,” Medium (18 September), at https://medium.com/amber-video/how-blockchains-can-be-used-in-authenticating-video-and-countering-deepfakes-25d596ad7a5, accessed 20 September 2019.

Gregory Barber, 2019. “Deepfakes are getting better, but they’re still easy to spot,” Wired (26 May), at https://www.wired.com/story/deepfakes-getting-better-theyre-easy-spot/, accessed 20 September 2019.

Damon Beres and Marcus Gilmer, 2018. “A guide to ‘deepfakes,’ the Internet’s latest moral crisis,” Mashable (2 February), at https://mashable.com/2018/02/02/what-are-deepfakes/#KB9xEQbQXqqR, accessed 20 September 2019.

Ryan Black, Pablo Tseng, and Sally Wong, 2018. “What can the law do about ‘Deepfake’?” McMillan Litigation and Intellectual Property Bulletin (March), at https://mcmillan.ca/Files/206422_What_Can_The_Law_Do_About_Deepfake.pdf, accessed 20 September 2019.

Leah Marie Brown, 2010. “The royal dildo,” Leah Marie Brown Historicals (8 July), at http://leahmariebrownhistoricals.blogspot.com/2010/07/royal-dildo.html, accessed 20 September 2019.

Robert Chesney and Danielle Keats Citron, 2018. “Deep fakes: A looming challenge for privacy, democracy, and national security,” University of Maryland, Legal Studies, Research Paper, number 2018–21.
doi: http://dx.doi.org/10.2139/ssrn.3213954, accessed 29 November 2019.

Danielle Keats Citron, 2018. “Sexual privacy,” Yale Law Journal, volume 128, number 7, pp. 1,792–2,121, and at https://www.yalelawjournal.org/article/sexual-privacy, accessed 29 November 2019.

Danielle Keats Citron, 2014. Hate crimes in cyberspace. Cambridge, Mass.: Harvard University Press.

Samantha Cole, 2018a. “We are truly fucked: Everyone is making AI-generated fake porn now,” Motherboard (24 January), at https://motherboard.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley, accessed 20 September 2019.

Samantha Cole, 2018b. “AI-generated fake porn makers have been kicked off their favorite host,” Motherboard (31 January), at https://motherboard.vice.com/en_us/article/vby5jx/deepfakes-ai-porn-removed-from-gfycat, accessed 20 September 2019.

Samantha Cole, 2018c. “Pornhub is banning AI-generated fake porn videos, says they’re non-consensual,” Motherboard (6 February), at https://motherboard.vice.com/en_us/article/zmwvdw/pornhub-bans-deepfakes, accessed 20 September 2019.

Samantha Cole, 2018d. “Deepfakes were created as a way to own women’s bodies — we can’t forget that,” Vice (18 June), at https://broadly.vice.com/en_us/article/nekqmd/deepfake-porn-origins-sexism-reddit-v25n2, accessed 20 September 2019.

Samantha Cole, 2018e. “There is no tech solution to deepfakes,” Motherboard (14 August), at https://motherboard.vice.com/en_us/article/594qx5/there-is-no-tech-solution-to-deepfakes, accessed 20 September 2019.

Samantha Cole, 2017. “AI-assisted fake porn is here and we’re all fucked,” Motherboard (11 December), at https://motherboard.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn, accessed 20 September 2019.

Josh Constine, 2018. “Truepic raises $8M to expose deepfakes, verify photos for Reddit,” Techcrunch (20 June), at https://techcrunch.com/2018/06/20/detect-deepfake/, accessed 20 September 2019.

Julian Dibbell, 1993. “A rape in cyberspace: How an evil clown, a Haitian trickster spirit, two wizards, and a cast of dozens turned a database into a society,” Village Voice (23 December), at http://www.juliandibbell.com/texts/bungle_vv.html, accessed 20 September 2019.

Maeve Duggan, 2014. “Online harassment,” Pew Research Center (22 October), at http://www.pewinternet.org/2014/10/22/online-harassment/, accessed 25 December 2019.

Emma Grey Ellis, 2018. “People can put your face on porn — and the law can’t help you,” Wired (26 January), at https://www.wired.com/story/face-swap-porn-legal-limbo/, accessed 20 September 2019.

Fanlore.org, n.d. “Open letters to Star Wars zine publishers,” at https://fanlore.org/wiki/Open_Letters_to_Star_Wars_Zine_Publishers_(1981)#Some_Early_Stories_that_Rocked_the_Boa, accessed 20 September 2019.

Megan Farokhmanesh, 2018. “Deepfakes are disappearing from parts of the Web, but they’re not going away,” Verge (9 February), at https://www.theverge.com/2018/2/9/16986602/deepfakes-banned-reddit-ai-faceswap-porn, accessed 20 September 2019.

Katherine Fernandez-Blance, 2012. “Gamer campaign against Anita Sarkeesian catches Toronto feminist in crossfire,” Toronto Star (10 July), at https://www.thestar.com/news/gta/2012/07/10/gamer_campaign_against_anita_sarkeesian_catches_toronto_feminist_in_crossfire.html, accessed 20 September 2019.

Jon Fingas, 2018. “Pornhub hasn’t been actively enforcing its deepfake ban,” Engadget (18 April), at https://www.engadget.com/2018/04/18/pornhub-still-has-many-deepfake-videos/, accessed 20 September 2019.

Kashmira Gander, 2016. “The people who photoshop friends and family onto porn,” Independent (13 October), at https://www.independent.co.uk/life-style/love-sex/porn-photoshopping-4chan-family-friends-superimposed-into-sex-scenes-world-a7358706.html, accessed 20 September 2019.

Julien Gorbach, 2018. “Not your grandpa’s hoax: A comparative history of fake news,” American Journalism, volume 35, number 2, pp. 236–249.
doi: https://doi.org/10.1080/08821127.2018.1457915, accessed 29 November 2019.

David Greene, 2018. “We don’t need new laws for faked videos, we already have them,” Electronic Frontier Foundation (13 February), at https://www.eff.org/deeplinks/2018/02/we-dont-need-new-laws-faked-videos-we-already-have-them, accessed 20 September 2019.

Sam Gregory, n.d. “Deepfakes and synthetic media: What should we fear? What can we do?” Witness, at https://blog.witness.org/2018/07/deepfakes/, accessed 20 September 2019.

Michael Grothaus, 2014. “How fake celebrity porn destroyed one guy’s life and saved the other from suicide,” Vice (8 May), at https://www.vice.com/en_ca/article/yvqe5v/celebrity-pornalikes-michael-grothaus, accessed 20 September 2019.

Jessie Guy-Ryan, 2016. “It’s not just Tony the Tiger: Tijuana bibles and the history of cartoon sex icons,” Atlas Obscura (30 January), at https://www.atlasobscura.com/articles/its-not-just-tony-the-tiger-tijuana-bibles-and-the-history-of-cartoon-sex-icons, accessed 20 September 2019.

Haya R. Hasan and Khaled Salah, 2019. “Combating deepfake videos using blockchain and smart contracts,” IEEE Access, volume 7 (18 March), pp. 41,596–41,606.
doi: https://doi.org/10.1109/ACCESS.2019.2905689, accessed 29 November 2019.

Nicola Henry, Anastasia Powell, and Asher Flynn, 2018. “AI can now create fake porn, making revenge porn even more complicated,” The Conversation (28 February), at https://theconversation.com/ai-can-now-create-fake-porn-making-revenge-porn-even-more-complicated-92267, accessed 20 September 2019.

Hlve, 2018. “Pornhub says digitally generated ‘deepfakes’ are non-consensual and it will remove them,” at https://www.reddit.com/r/technology/comments/7vu8bf/pornhub_says_digitally_generated_deepfakes_are/, accessed 20 September 2019.

Jaigris Hodson, Chandell Gosse, George Veletsianos, and Shandell Houlden, 2018. “I get by with a little help from my friends: The ecological model and support for women scholars experiencing online harassment,” First Monday, volume 23, number 8, at https://firstmonday.org/article/view/9136/7505, accessed 29 November 2019.
doi: http://dx.doi.org/10.5210/fm.v23i8.9136, accessed 29 November 2019.

Jonathan Hui, 2018. “How deep learning fakes videos (deepfakes) and how to detect it?” Medium (28 April), at https://medium.com/@jonathan_hui/how-deep-learning-fakes-videos-deepfakes-and-how-to-detect-it-c0b50fbf7cb9, accessed 20 September 2019.

Ishaan Jhaveri, 2018. “How a 19th-century teenager sparked a battle over who owns our faces,” Gizmodo (25 October), at https://gizmodo.com/how-a-19th-century-teenager-sparked-a-battle-over-who-o-1829572319, accessed 20 September 2019.

Hanbyul Joo, Tomas Simon, and Yaser Sheikh, 2018. “Total capture: A 3D deformation model for tracking faces, hands, and bodies,” Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8,320–8,329.
doi: http://dx.doi.org/10.1109/CVPR.2018.00868, accessed 29 November 2019.

Michael Kasumovic and Rob Brooks, 2014. “Virtual rape in Grand Theft Auto 5: Learning the limits of the game.” The Conversation (18 August), at http://theconversation.com/virtual-rape-in-grand-theft-auto-5-learning-the-limits-of-the-game-30520, accessed 20 September 2019.

Sean Keane, 2018. “Congress wrestles with ‘deepfake’ threat to Facebook,” CNet (5 September), at https://www.cnet.com/news/congress-wrestles-with-deepfake-threat-to-facebook/, accessed 20 September 2019.

Roula Khalaf, 2018. “If you thought fake news was a problem, wait for ‘deepfakes’,” Financial Times (25 July), at https://www.ft.com/content/8e63b372-8f19-11e8-b639-7680cedcc421, accessed 20 September 2019.

Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, and Christian Theobalt, 2018. “Deep video portraits,” ACM Transactions on Graphics, volume 37, number 4, article number 163.
doi: http://dx.doi.org/10.1145/3197517.3201283, accessed 29 November 2019.

Kitty Knowles, 2016. “Find porn stars who look like people you know using facial recognition,” The Memo (23 September), at https://www.thememo.com/2016/09/23/porn-celebrity-porn-megacams-facial-recognition-porn/, accessed 20 September 2019.

Dave Lee, 2018. “Deepfakes porn has serious consequences,” BBC News (3 February), at https://www.bbc.com/news/technology-42912529, accessed 20 September 2019.

Tim Leslie, Nathan Hoad, and Ben Spraggon, 2018. “Can you tell a fake video from a real one?” ABC News Australia (26 September), at https://www.abc.net.au/news/2018-09-27/fake-news-part-one/10308638, accessed 20 September 2019.

Yuezun Li, Ming-Ching Chang, and Siwei Lyu, 2018. “In Ictu Oculi: Exposing AI generated fake face videos by detecting eye blinking,” arXiv (11 June), at https://arxiv.org/abs/1806.02877, accessed 20 September 2019.

Shannon Liao, 2017. “Gfycat says it’ll use machine learning to make more high-res GIFs,” Verge (13 December), at https://www.theverge.com/2017/12/13/16773836/gfycat-machine-learning-ai-library-gifs, accessed 20 September 2019.

Tracie Egan Morrissey, 2012. “Who’s your porn star doppelganger?” Jezebel (21 December), at https://jezebel.com/5962502/whos-your-porn-star-doppelganger, accessed 26 September 2018.

Maxine Lynn, 2018. “Deepfakes: The lawless new world of involuntary porn,” Unzipped: Sex, Tech, and the Law (14 March), at http://www.sextechlaw.com/deepfakes-porn-law/, accessed 26 September 2018.

Catharine MacKinnon, 1987. Feminism unmodified: Discourses on life and law. Cambridge, Mass.: Harvard University Press.

Karla Mantilla, 2015. Gendertrolling: How misogyny went viral. Santa Barbara, Calif.: Praeger.

Karla Mantilla, 2013. “Gendertrolling: Misogyny adapts to new media,” Feminist Studies, volume 39, number 2, pp. 563–570.

P. David Marshall, 2016. The celebrity persona pandemic. Minneapolis: University of Minnesota Press.

Adrienne Massanari, 2017. “#Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture supports toxic technocultures,” New Media & Society, volume 19, number 3, pp. 329–346.
doi: https://doi.org/10.1177/1461444815608807, accessed 29 November 2019.

Louise Matsakis, 2018. “Artificial intelligence is now fighting fake porn,” Wired (14 February), at https://www.wired.com/story/gfycat-artificial-intelligence-deepfakes/, accessed 20 September 2019.

Margi Murphy, 2018. “Facebook reveals ‘deepfake’ avatars that bear uncanny resemblance to humans,” Telegraph (26 September), at https://www.telegraph.co.uk/technology/2018/09/26/facebook-reveals-deepfake-avatars-bear-uncanny-resemblance-humans/, accessed 26 September 2018.

Eoin O’Carroll, 2018. “From fake news to fabricated video, can we preserve our shared reality?” Christian Science Monitor (22 February), at https://www.csmonitor.com/Technology/2018/0222/From-fake-news-to-fabricated-video-can-we-preserve-our-shared-reality, accessed 20 September 2019.

Anastasia Powell, Nicola Henry, and Asher Flynn, 2018. “Image-based sexual abuse,” In: Walter S. Dekeseredy and Molly Dragiewicz (editors). Routledge handbook of critical criminology. Second edition. London: Routledge, pp. 305–315.
doi: https://doi.org/10.4324/9781315622040-28, accessed 29 November 2019.

Katyanna Quach, 2018a. “Everybody dance now: Watch this AI code fool friends into thinking you can cut a rug like a pro,” The Register (24 August), at https://www.theregister.co.uk/2018/08/24/ai_dancing/, accessed 20 September 2019.

Katyanna Quach, 2018b. “FYI: There’s now an AI app that generated convincing fake smut vids using celebs’ faces,” The Register (25 January), at https://www.theregister.co.uk/2018/01/25/ai_fake_skin_flicks/, accessed 20 September 2019.

Zoë Quinn, 2017. Crash override: How Gamergate (nearly) destroyed my life, and how we can win the fight against online hate. New York: PublicAffairs.

Reddit, n.d. “/r/Deepfakes has been banned,” at https://www.reddit.com/r/SFWdeepfakes/comments/7vy36n/rdeepfakes_has_been_banned/, accessed 20 September 2019.

Reddit, n.d. “Update on site-wide rules regarding involuntary pornography and the sexualization of minors,” at https://www.reddit.com/r/announcements/comments/7vxzrb/update_on_sitewide_rules_regarding_involuntary/, accessed 20 September 2019.

Adi Robertson, 2018. “Reddit bans ‘deepfakes’ AI porn communities,” Verge (7 February), at https://www.theverge.com/2018/2/7/16982046/reddit-deepfakes-ai-celebrity-face-swap-porn-community-ban, accessed 20 September 2019.

Janko Roettgers, 2018. “Naughty America wants to monetize deepfake porn,” Variety (20 August), at https://variety.com/2018/digital/news/deepfake-porn-custom-clips-naughty-america-1202910584/, accessed 20 September 2019.

Kevin Roose, 2018. “Here come the fake videos, too,” New York Times (4 March), at https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html, accessed 20 September 2019.

Grace Lisa Scott, 2018. “Deepfakes still exist, amid Twitter and Pornhub bans,” Inverse (6 February), at https://www.inverse.com/article/41017-pornhub-says-it-s-banning-deepfakes-as-the-internet-cracks-down-on-ai-porn, accessed 20 September 2019.

Supasorn Suwajanakorn, Steven M. Seitz, and Ira Kemelmacher-Shlizerman, 2017. “Synthesizing Obama: Learning lip sync from audio,” ACM Transactions on Graphics, volume 36, number 4, article number 95.
doi: https://doi.org/10.1145/3072959.3073640, accessed 29 November 2019.

Amelia Tait, 2018. “How to identify if an online video is fake,” New Statesman (14 February), at https://www.newstatesman.com/science-tech/technology/2018/02/how-identify-if-online-video-fake, accessed 20 September 2019.

Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, and Matthias Nießner, 2019. “Face2Face: Real-time face capture and reenactment of RGB videos,” Communications of the ACM, volume 62, number 1, pp. 96–104.
doi: https://doi.org/10.1145/3292039, accessed 29 November 2019.

James Vincent, 2018. “Why we need a better definition of ‘deepfake’,” Verge (22 May), at https://www.theverge.com/2018/5/22/17380306/deepfake-definition-ai-manipulation-fake-news, accessed 20 September 2019.

James Vincent, 2017. “Lyrebird claims it can recreate any voice using just one minute of sample audio,” Verge (24 April), at https://www.theverge.com/2017/4/24/15406882/ai-voice-synthesis-copy-human-speech-lyrebird, accessed 20 September 2019.

Jessica Vitak, Kalyani Chadha, Linda Steiner, and Zahra Ashktorab, 2017. “Identifying women’s experiences with and strategies for mitigating negative effects of online harassment,” CSCW ’17: Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, pp. 1,231–1,245.
doi: http://doi.org/10.1145/2998181.2998337, accessed 25 November 2019.

WITNESS Media Lab, n.d. “Prepare, don’t panic: Synthetic media and deepfakes,” at https://lab.witness.org/projects/synthetic-media-and-deep-fakes/, accessed 25 November 2019.

Antonia Woodford, 2018. “Expanding fact-checking to photos and videos,” Facebook Newsroom (13 September), at https://newsroom.fb.com/news/2018/09/expanding-fact-checking/, accessed 20 September 2019.

 


Editorial history

Received 22 September 2019; revised 27 November 2019; accepted 29 November 2019.


Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Nothing new here: Emphasizing the social and cultural context of deepfakes
by Jacquelyn Burkell and Chandell Gosse.
First Monday, Volume 24, Number 12 - 2 December 2019
https://firstmonday.org/ojs/index.php/fm/article/view/10287/8297
doi: http://dx.doi.org/10.5210/fm.v24i12.10287




