
Digital detritus: 'Error' and the logic of opacity in social media content moderation by Sarah T. Roberts



Abstract
The late 2016 case of the Facebook content moderation controversy over the infamous Vietnam-era photo, “The Terror of War,” is examined in this paper both for its specifics and as a mechanism to engage in a larger discussion of the politics and economics of the content moderation of user-generated content. In the context of mainstream commercial social media platforms, obfuscation and secrecy work together to form an operating logic of opacity, a term and concept introduced in this paper. The lack of clarity around platform policies, procedures and the values that inform them leads users to wildly different interpretations of the user experience on the same site, resulting in confusion that is in no small part of the platforms’ own design. Platforms operationalize their content moderation practices under a complex web of nebulous rules and procedural opacity, while governments and other actors clamor for tighter controls on some material and other members of civil society demand greater freedoms for online expression. Few parties acknowledge the fact that mainstream social media platforms are already highly regulated, albeit rarely in such a way that can be satisfactory to all. The final turn in the paper connects the functions of the commercial content moderation process on social media platforms like Facebook to what that process produces: the content that appears on a site and the content that is rescinded, the latter constituting digital detritus. While the meaning and intent of user-generated content may often be imagined to be the most important factors by which content is evaluated for a site, this paper argues that its value to the platform as a potentially revenue-generating commodity is actually the key criterion and the one to which all moderation decisions are ultimately reduced. The result is commercialized online spaces that have far less to offer in terms of political and democratic challenge to the status quo and which, in fact, may serve to reify and consolidate power rather than confront it.

Contents

Introduction: ‘Terror’ as error
User-generated content as currency and liability
Terror online
Commercial content moderation and the logic of opacity
The power of platforms: Gatekeeping and shaping information
CCM as commodity control
Conclusion: What platforms want

 


 

Introduction: ‘Terror’ as error

The Vietnam War era in the United States produced some of the most enduringly haunting images of war in the country’s history. The decade-plus-long engagement of U. S. troops in overt and covert action in Southeast Asia made its way to the front pages of American daily newspapers and into the living rooms of millions of people every night on the evening news. My own mother, a teenager during much of the war, has often recalled to me over the years her experience of lying on the living room floor in front of the massive tube television console, as CBS reported from the warzones and jungles of Vietnam. Newspapers of record ran relentless coverage of the war as it dragged on for a decade. Those reporters who covered the war beat were not embedded, as in the second Iraq war (Tuosto, 2008) [1], and the footage and stories they sent back were often raw and disturbing. The reports themselves frequently challenged the military and political establishment, both reflecting and influencing domestic anti-war sentiment during one of the most fractured and fraught periods of twentieth-century history (Haigh, et al., 2006).

From such journalistic quarters came a haunting and provocative image that, late in the war, left a lasting mark on the world as a quintessential encapsulation of Vietnam’s horrors, and, by extrapolation, of the horrors of war, in general. A young Vietnamese girl, naked and burned by a South Vietnamese napalm attack on a district occupied by North Vietnamese forces, runs down a road, crying and in agony. Nine-year-old Kim Phúc’s horrific injuries and expression of fear, as captured on film by recently retired AP photographer Nick Ut, shocked a world already weary of war horror. The editorial board of the New York Times wrestled with whether to publish the image but ultimately did so. In 1973, the photo won a Pulitzer Prize.

The photo, fittingly known as “The Terror of War,” once again came into wide-scale public consciousness in the autumn of 2016 after a journalist in Norway posted it to his Facebook feed and watched as it was removed, again and again, ostensibly for violating the site’s Terms of Use. In this case, largely due to the journalist’s relative prominence as well as to his tenacity, the removal of “The Terror of War” sparked immense public outcry and backlash, first in Norway, then across northern Europe. The Prime Minister of Norway herself got involved, reposting the photo only to have it removed from her feed. Ultimately, Facebook relented on its decision to remove the photo, publishing a statement that nevertheless likened the photo to “child pornography,” and its takedown, therefore, to protecting the vulnerability of a child — victimized, evidently, by her naked skin and not her burning flesh. Indeed, Facebook’s primary concern did appear to be that of nudity, and nudity as pornography, stating, as reported by the Guardian, that “while we recognize that this photo is iconic, it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others” (Levin, et al., 2016).

On the surface, this may seem like a reasonable explanation for why the photo was removed, particularly when many may assume that decisions about content removal fall to machines: computers driven by algorithms to smartly and expediently address content violations with even application and technologically-enabled rationalization. That false assumption — one that the social media industry has largely been responsible for promulgating, or, at least, not actively correcting — is itself worthy of discussion. But there are a number of more profound and key aspects to this story — aspects that Facebook, Google and others are reluctant to acknowledge, much less discuss: the platforms’ own “embeddedness” with the U. S. political establishment, and their own relationship to policy, foreign and domestic. With billions of users worldwide, social media may indeed have the ability to change the world — or at least world opinion. But how does it do that and in the service of whom? The question of vulnerability indeed also comes into play, but the vulnerability of greatest concern seems to be less that of Kim Phúc or of other child victims of war than that of Facebook, the platform and firm, and, quite possibly, of the American power establishment that was ultimately deeply harmed by the publication of this photo the first time around.

This paper unveils the complex, contradictory and often invisible relationships that platforms undertake with users when they endeavor to remove the content they have uploaded, and the meaning-making inherent in those acts of removal. While at first blush, these contradictions within mainstream social media platforms seem benignly paradoxical or absurd, this paper exposes their deeply political reifications of power structure, commodification and status quo.

 

++++++++++

User-generated content as currency and liability

Mainstream social media platforms such as Facebook, YouTube, Tumblr and Instagram receive an unprecedented amount of user-generated content, or UGC, uploaded for sharing and distribution from users around the globe. YouTube alone receives 400 hours of video uploaded to its platform per minute, a figure no longer available on its own statistics page but reported in a recent New York Times article as a demonstration of the difficulty platforms face (Browne, 2017). This amount is a fourfold increase from YouTube’s last reported statistic, from 2014, when it publicly acknowledged 100 hours of video content uploaded per minute. Whether it is video, photos, text-based posts or a hybrid combination of multiple media, UGC is the lifeblood of social media platforms. It serves to keep the user base actively engaged as content producers, lured by the promise of being able to circulate material at scale and virtually instantaneously around the globe. At the same time, new UGC, in the form of refreshed feeds that constantly provide content for consumption, keeps users engaged on platforms, checking and rechecking updates from friends and other sources. In essence, UGC is the currency by which users are engaged as consumers and producers on social media sites. The vastness of the volume all but ensures that users will come back to a continually updated feed with new videos to view, news items to read, posts with which to engage.

Yet social media’s greatest currency, its UGC, is also a great liability. In opening up their platforms to anyone in the world with the ability and desire to disseminate content, firms run the risk of receiving material that is simply not appropriate for consumption. It can run the gamut from depictions of illegal acts, child sexual exploitation, violence and gore to hate speech and, increasingly, to material that is illegal in certain jurisdictions, causing governments to demand that gatekeeping be put in place in order that platforms be responsive to local laws (Lomas, 2017).

While some algorithmic tools and computational mechanisms can aid in the control and enforcement of platforms’ own norms and rules regarding UGC, the vast majority of this work falls to human agents who labor from places around the globe to remove such content from sites. Working in many different industrial, cultural and economic contexts, the workers share the circumstance of toiling largely in the shadows (often covered by non-disclosure agreements) and doing UGC adjudication work for pay (as opposed to the volunteer models that characterized the early social Internet and that persist today). They are often subject to metrics that ask them to process each piece of questionable UGC within a matter of seconds, resulting in thousands of decisions per day, in work that resembles a virtual assembly line of queued photos, videos and posts constantly fed to a worker for decision-making (Roberts, 2017a). Typically, the entire process of review is set off by the flagging activities that regular users undertake when they encounter UGC that disturbs, upsets or angers them (Crawford and Gillespie, 2016). While the routes of the decision-making processes are often circuitous, complex and based on particularities of internal rules (as evinced by leaked flowcharts and secret policies), they ultimately all lead to one of two possible decisions: keep up or delete.
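To make the shape of that workflow concrete, the sketch below models it in Python as a queue of flagged items resolved against a flat rulebook. It is purely illustrative: the class, rule and item names are hypothetical and are not drawn from any platform’s actual tooling, but it captures the point that a review compressed into seconds and into a fixed list of attributes can only ever resolve to one of two states.

    from collections import deque
    from dataclasses import dataclass, field

    # Hypothetical, simplified model of a CCM review queue: flagged items are fed
    # to a reviewer, one after another, and each must resolve to a binary outcome.

    @dataclass
    class FlaggedItem:
        item_id: str
        media_type: str                                # "photo", "video", "text"
        flag_reason: str                               # supplied by the reporting user
        attributes: set = field(default_factory=set)   # labels applied during review

    # A flat internal rulebook: any matching attribute means deletion, with no
    # room for context, intent or historical significance.
    RULES = {
        "child_nudity": "delete",
        "graphic_violence": "delete",
        "hate_speech": "delete",
    }

    def review(item: FlaggedItem) -> str:
        """Return 'delete' if any attribute matches a rule; otherwise 'keep up'."""
        for attribute in item.attributes:
            if RULES.get(attribute) == "delete":
                return "delete"
        return "keep up"

    # A queue of flagged items, processed in sequence under time pressure.
    queue = deque([
        FlaggedItem("ut_1972_photo", "photo", "nudity", {"child_nudity"}),
    ])

    while queue:
        item = queue.popleft()
        print(item.item_id, "->", review(item))       # ut_1972_photo -> delete

Nothing in such a structure can register that the flagged item is an iconic, Pulitzer Prize-winning photograph; the only representable outcomes are the two the rulebook allows.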

In the case of the photo of Kim Phúc, an error on the part of a commercial content moderation (CCM) worker was ultimately blamed as the cause of the deletion. Under pressure, reviewing the content in a matter of seconds and likely without context, the worker followed the internal rules that banned child nudity, a “flattening” [2] of meaning that eliminated any evaluation of the photo other than whether or not it was UGC likely to engage, or to repel, its viewers. Applying such a binary judgment (child nudity: yes or no) to a complex image such as the one taken by Nick Ut seems ridiculous when considered as an isolated case, but it serves as an exemplar of the kind of logic that undergirds UGC management practices at scale [3]. Indeed, the mass commodification, solicitation and circulation of UGC permit few other logics to prevail.

 

++++++++++

Terror online

On 4 June 2017, the United States awoke to a news day in which the terror attacks of the previous Saturday night on London’s iconic London Bridge occupied the headlines. In their wake, Conservative Prime Minister Theresa May announced that new pressures must be put on the world’s Internet platforms to regulate and rescind material that, May claimed, played a key role in the radicalization of those who undertake attacks on innocents (Stone, 2017). Her comments were widely understood to be in reference to Islamic extremism, on the one hand, and directed towards large American-based global firms such as Facebook, on the other. What is understood to constitute online terrorism is typically narrow in scope and tends to refer to radical Islamist anti-Western positions and expressions. Racist, pro-Nazi, misogynist and sexually violent threats and material do not typically earn this moniker when online terrorism is discussed in the press, by governments or by those in charge at the firms, and are often normalized or otherwise considered unproblematic by platforms (Noble, 2018). Indeed, Facebook clearly recognized itself in May’s statement and responded in its own Monday morning comment to news outlet CNBC:

In a statement, Simon Milner, director of policy at Facebook, said the social network giant wants to “provide a service where people feel safe. That means we do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. We want Facebook to be a hostile environment for terrorists.” [4]

In the same article, Dartmouth computer scientist Hany Farid was quoted as having expressed frustration at Facebook’s rebuff of his offer of eGlyph, a hashing technology similar to the one he co-developed with Microsoft in order to combat child sexual exploitation online [5]. His frustration was unsurprising given the gap between Facebook’s public statements, like the quote above, outlining its desire to curtail uses of its platform for violent, sexually exploitative and terroristic ends, and its seeming reluctance to implement meaningful programs in a comprehensive way, along with the difficulty it may be having in doing so, not only from a technical standpoint but also from one of logic, politics and policy.
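For readers unfamiliar with how such hashing technologies operate, the general principle is that items already designated as prohibited are reduced to compact digital fingerprints, and each new upload is compared against that list so that re-uploads of known material can be caught automatically. The sketch below illustrates only that principle; the toy fingerprint function is a stand-in of my own and does not represent the actual eGlyph or PhotoDNA algorithms.

    # Illustrative sketch of hash-list matching. The fingerprint function below is a
    # deliberately simplistic stand-in, not the robust hashing used by eGlyph or PhotoDNA.

    def fingerprint(data: bytes, bits: int = 64) -> int:
        """Toy fixed-length fingerprint of a byte stream."""
        value = 0
        for i, byte in enumerate(data):
            value ^= byte << (i % bits)
        return value & ((1 << bits) - 1)

    def hamming_distance(a: int, b: int) -> int:
        """Number of differing bits between two fingerprints."""
        return bin(a ^ b).count("1")

    # Fingerprints of content already designated as prohibited.
    known_hashes = {fingerprint(b"previously removed extremist video")}

    def matches_known(upload: bytes, threshold: int = 4) -> bool:
        """Flag an upload whose fingerprint is near any known prohibited hash."""
        h = fingerprint(upload)
        return any(hamming_distance(h, k) <= threshold for k in known_hashes)

    print(matches_known(b"previously removed extremist video"))  # True
    print(matches_known(b"an unrelated holiday video"))          # very likely False

Such matching addresses only the re-circulation of already-identified material; the judgment about what belongs on the list in the first place remains with the firms and their policies, and thus inside the logic of opacity discussed below.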

 

++++++++++

Commercial content moderation and the logic of opacity

That platforms operate their moderation practices under a complex web of nebulous rules and procedural opacity renders situations such as those I have described even more challenging, with governments and others clamoring for tighter controls on some material, and other members of civil society demanding greater freedoms for online expression — and few parties acknowledging the fact that mainstream social media platforms are already highly regulated, albeit rarely in such a way that can be satisfactory to all.

The confusion is in no small part of the platforms’ own design: they are extremely reluctant to acknowledge, much less describe in great detail, the internal policies and practices that govern what goes into the content ultimately available on a mainstream site. Indeed, acknowledgement of social media’s content moderation practices by the platforms themselves is a relatively new development, as firms have long been unwilling to go on record about much of anything at all when it comes to the gatekeeping of user-generated content on their sites. When they have acknowledged that such practices are undertaken, there has often been significant equivocation regarding which practices may be automated through artificial intelligence, filters and other kinds of computational mechanisms versus what portion of the tasks are the responsibility of human professionals, the CCM workers described above.

Even when platforms do acknowledge their moderation practices and the human workforce that undertakes them, they still are loath to give details about who does the work, where in the world they do it, under what conditions and whom the moderation activity is intended to benefit. To this last point, I argue that content moderation activities are fundamentally and primarily undertaken to protect and enhance platforms’ advertising revenue, protect platforms themselves from liability, provide a palatable user experience (highly context-dependent) and, when necessary and in response to specific jurisdictional or regulatory rulings, to comply with legal mandates. In short, the role of content moderation is fundamentally a matter of brand protection for the firm.

Major announcements regarding internal CCM and UGC practices are relatively rare, but since civil society advocates, news media and the general public have begun to take greater notice of social media moderation practices and their implications, we have seen more acknowledgements from major mainstream firms. These acknowledgments have largely come after criticism of the platforms has put pressure on them to respond to moderation failures. This was the case in May 2017, when Facebook made its announcement that it would add 3,000 moderators to a global workforce that it stated was already at 4,500 (Gibbs, 2017). No other details were given about who the workers would be or in what capacity they would be employed. Leaks, rather than a formal announcement, were behind the information that Google’s YouTube property had employed legions of contract workers through a number of firms, such as ZeroChaos and Leapforce, to serve as its “ad raters,” a fact that Google later confirmed by summarily firing all those employed by ZeroChaos overnight in early August 2017 (Alba, 2017).

Obfuscation and secrecy work together to form an operating logic of opacity around social media moderation. It is a logic that has left users typically giving little thought to the mechanisms that may or may not be in place to govern what appears, and what is removed, on a social media site. The lack of clarity around policies, procedures and the values that inform them can lead users to have wildly different interpretations of the user experience on the same site. Users may believe, on the one hand, that their site of choice is a place of self-expression and the free exchange of ideas (as encapsulated in posts, images, videos, memes) of all kinds; others may find that the same site’s regulation is draconian, difficult to understand and overzealous. Moreover, the lack of official information from social media firms pertaining to the decisions and operations surrounding what material is, and is not, deemed acceptable further renders the machinery of content moderation opaque.

In part, the strategy behind this operating logic of opacity serves to render platforms as objective in the public imagination, driven by machines or machine-like rote behavior that removes any subjectivity and room for nuance — and, therefore, for large-scale questioning of the policies and values governing the decision-making. In this capacity, the logic of opacity is, if nothing else, an act of depoliticization. Images, videos and other material have only one type of value to the platform, measured by their ability either to attract users and direct them to advertisers or to repel them and deny advertisers their connection to the user. CCM workers are therefore directly engaged in this activity of audience experience curation for economic ends. It frequently puts them at odds with their own value systems or ones shared communally, and results in situations that, on their surface, seem absurd, but can be easily explained when a commodity logic is applied to them, such as in the case of “The Terror of War.”

These activities of commodification depoliticize. Yet the process is obscured by a social media landscape that tacitly, if not explicitly, trades on notions of free circulation of self-expression, on the one hand, and a purported neutrality, on the other, that deny the inherent gatekeeping baked in at the platform level by both its function as an advertising marketplace and the systems of review and deletion that have, until recently, been invisible to or otherwise largely unnoticed by most users.

Ultimately, content moderation gatekeeping practices serve as structural mechanisms to either privilege or devalue material that threatens status quo positions — whether in relation to the platform’s goals and objectives of revenue generation, or status quo as applied to higher-order political or cultural expectations. Meanwhile, some users are highly vulnerable to and affected by these activities of deletion, erasure and removal, due to their marginalized identities, political stances or other challenges to social or political norms. But in such cases, while removals may be tangible to individual users on a case-by-case basis, they are typically not understood by those same users as systemic and structural, nor as responding to codified policies and practices designed ultimately for brand protection and to serve the demand for revenue generation from advertising.

The totality of the policies and procedures that dictate the treatment of UGC under the logic of opacity leads to a black box phenomenon, much like that described by scholars Alexander Halavais (2009) and Frank Pasquale (2015), among others. In this case, as in those addressed by Halavais and Pasquale, the black box is a metaphor for the intangible and unknowable suite of sociotechnical assemblages (made up of rules and operations) constituting management of UGC on mainstream platforms. These paradoxical and often ill-fitting hybrids of the sociotechnical are present in other areas of social media, where platforms have likewise endeavored to codify and reduce complex examples of human expression and social construction into digestible, and monetizable, constituent parts, as in the case of gender identity (Bivens, 2017) and friendship (Bucher, 2013). The logic of opacity pervades other parts of the social media ecosystem in the service of other kinds of beliefs, such as that of the immateriality of the digital. The work of critical communications scholars like Mél Hogan (Hogan, 2015) and Tamara Shepherd (Hogan and Shepherd, 2015) has been instrumental in documenting the material and environmental impact of large-scale social media storage centers that are very much not in the cloud.

The logic of opacity that undergirds social media content moderation further enhances the myth that what ends up on a site is there by purposeful design of artificial intelligence and algorithmic computation, rather than by human decision-making and, often, the dumb luck of escaping reporting that brings content in front of human eyes. Social media firms take advantage of the ability to offshore this labor en masse to sites in the global South, thus distancing themselves, metaphorically and geographically, from the everyday working conditions of CCM workers, as well as blanketing those workers in non-disclosure agreements (NDAs) that preclude them from speaking about their work and experiences. These labor arrangements introduce a plausible deniability for social media platforms, which can focus on the fun, lucrative and more glamorous aspects of product innovation at their Silicon Valley headquarters, while leaving the task of cleanup, and the risks of exposure to it, for people on the other side of the world — or, certainly, at a lower pay grade.

Despite the difficulty of doing so, social media companies often profess an eagerness to completely and cost-effectively mechanize CCM processes, turning to computer vision and AI or machine-learning to eliminate the human element in the production chain. While greater reliance upon algorithmic decision-making is typically touted as a leap forward, one that could potentially streamline CCM and reduce its vulnerability to human error (what, in other circumstances, we might think of as “judgment”), it would also eliminate the very human reflection that leads to pushback, questioning and dissent. Machines do not need to sign non-disclosure agreements, and they are not able to violate them in order to talk to academics, the press or anyone else.

The operational logic of opacity further serves to obscure the raison d’être of social media firms; that is, to generate revenue through the procurement and delivery of consumers to advertisers, leading to profits. Profitability is also reflected in the value of the stock held by shareholders, in the case of publicly traded social media firms. In practice, the boundaries of what is and is not acceptable content have been sufficiently obscured — through mechanisms such as impossibly lengthy and dense Terms of Service agreements and nebulous language around which content is deemed offensive enough to warrant removal — that firms are able to make behind-the-scenes binary decisions concerning acceptability when necessary. In reducing content generated by users into constituent value propositions of what is likely to be profitable and what would detract from that goal, firms make content decisions based on whether the UGC in question is likely to aid or hinder their ability to drive traffic, engage users, sell advertisements and make money.

 

++++++++++

The power of platforms: Gatekeeping and shaping information

Throughout the 1970s, prominent elements of the U. S. news media establishment continued to serve as provocateurs to the ruling political élite, famously bringing down the Nixon administration through exposure, at the Washington Post, of the Watergate break-in and subsequent related misdeeds. The coverage saw the resignation of a sitting U. S. President, Congressional inquiry and prison terms for some participants. Yet the power of the Fourth Estate has been greatly diminished in the decades following the events of the 1970s, a decline hastened, on the one hand, by a corporatized and conglomerated media landscape that has seen the shuttering of countless mid-size daily papers across the country and shrinking reporting corps covering local, national and international politics, and, on the other, by a fractured digital landscape of difficult-to-monetize print-to-digital news.

Social media platforms have stepped into the vacuum left behind by traditional media outlets to serve an information-hungry public, first simply as circulators of the news, but now as shapers of it. Facebook culture, as one might term it, has proved to be an amazingly powerful disseminator of information, and is even the first-source news outlet of choice for many, where news media stories are posted and recirculated and the platform itself acts as dissemination point (often a thorny problem for news services, which still struggle with monetization of their content in an age in which such information is often consumed without remuneration).

Indeed, in spite of incredible digital-age journalism such as the publication of stories related to the Edward Snowden leaks, or the Panama Papers, the institution of journalism itself is under great pressure. As the work of Nicole Cohen shows, reporters are subjected to real-time “analytics” that monitor the reception of their stories and even suggest, in some cases, how they might edit them to gain broader circulation (i.e., more clicks), thus increasing advertising revenue — on digital advertising platforms provided by Silicon Valley social media platforms and firms [6]. Silicon Valley also makes its presence and pressure on journalism known in other ways; witness right-wing tech billionaire Peter Thiel’s bankrolling of a lawsuit that ultimately closed down Gawker and its related news properties, after that site published an unflattering yet accurate story revealing the fact that Thiel is gay.

The informational power of social media continues to be demonstrated time and again. In particular, U. S.-based platforms such as YouTube (Google) and Facebook have come into the limelight for both tacitly and directly using their power in support of the state and, with CEOs of these firms frequently acting as heads of state themselves, the results of internal policy decisions that influence content cannot be taken lightly [7]. Other aspects of social media, such as their significant capabilities in the arenas of surveillance and data collection, have been frequently discussed but scarcely resolved. And despite industry claims to the contrary, many consumers are not, in fact, comfortable with the Faustian bargains into which they enter in order to use social media tools.

 

++++++++++

CCM as commodity control

So how to account for the vast corpus of material that has been removed online, social media’s own digital detritus? Is there sense to be made of it? What conclusions or insights, if any, can be drawn? And what does it mean when an image of a wartime atrocity visited by a powerful state upon a child is rendered detritus under prevailing moderation logic?

Silvia Federici is a Marxist feminist scholar whose work has critiqued labor exploitation under capitalism as well as the lack of an analysis in Marxism of the unwaged labor undertaken by women, typically in the context of domestic labor. One of her essays on the topic (Federici, 2012) notes that not even bringing such labor into the realm of so-called “affective labor” [8] is sufficient to account for it. In her analysis, Federici brings into relationship numerous symptoms of an inherently anti-feminist capitalism typically diagnosed independently of each other: the neoliberal impact of the dismantling of the welfare state and the subsequent selloff of state resources as profit centers; the destruction of the environment; the extrastate-ness of and primacy placed on borderless finance capital, coupled with the statelessness rendered upon people by forced migration, on the one hand, and mass incarceration, on the other. Last but certainly key to her analysis has been the destruction under late capitalism of economies predicated on anything other than work for wage earning, particularly those economies focused on subsistence or barter. Also related was the blind spot of ignoring automation as a key function of the growth and expansion of that capitalism, treating it instead simply as a shrinking of the workforce and an increase in leisure time as the need for labor decreased — the latter an aspiration expressed and predicted by Daniel Bell (1976) and numerous others at the beginning of the knowledge society’s ascent in the early 1970s and through to the present day.

Indeed, social media’s scope, speed and automation mask much of their power to replicate and amplify many of capitalism’s most problematic conditions: their novel tools and functionalities, their ubiquity and their innovations, rather than creating great ruptures with contemporary social modes, often serve instead to reify, entrench and expand those modes at unprecedented pace and scale.

These logics absolutely extend to the case of the solicitation, production, circulation, management and consumption of the UGC economy of social media platforms, and to the way in which a female nude body, as just one salient example, is framed in that context. In short, it must be forced into categories that read it as inherently sexualized, because it can only be understood as commodity inside those economies. Such positioning of the female body within the commodified social media space leaves it only two possible ends: either to be productive in terms of capital generation, or to be rescinded and removed when it risks negatively impacting that production or when the capital generation does not come to the platform by beneficial means. The latter case leads to discipline through deletion, erasure and categorization as error, violation or detritus. In her book Caliban and the witch, Federici states:

In particular, feminists have uncovered and denounced the strategies and the violence by means of which male-centered systems of exploitation have attempted to discipline and appropriate the female body, demonstrating that women’s bodies have been the main targets, the privileged sites, for the deployment of power techniques and power relations. [9]

In that text, Federici takes on the case of a historical, and quite literal, witch hunt; in this one, a witch hunt of a more metaphorical nature can be perceived, in which the female form as violation is sought out and eliminated. It is a logic that makes it reasonable that a child being violated by the state in its ultimate and most violent self-expression can be categorized, whether by machine or human intervention, as sexual exploitation. It has, at various times, decried breastfeeding imagery, menstruation and other natural functions of women’s bodies as vulgar, gross, or as nudity (all terms found in social media platform community guidelines). It is this logic that takes a picture of a female child whose skin has been burned off by napalm and categorizes it as child pornography; the harm is located not in the violence being done to the child (and all children) through war, but in the naked form captured in the photo-as-commodity — in the language of platforms, the “content” — which circulates in order to garner views, participation and advertising revenue for the platforms, and this image does not fit.

In fact, the challenge to power that the image represents is that it defies categorization as a commodity based on sexual value. It actually does the opposite: it questions state power, questions capitalism, questions commodification — and it is primarily on those grounds that it cannot stand. But the logic and language of the platform are so embedded in capitalism and capital accumulation that such an analysis cannot be made, and what is left is only to assign the image to the category of detritus or error, on sexual/obscenity grounds. To be sure, “The Terror of War” is obscene, but not for the reasons available to platforms when such an image is rendered and reduced into “content” and when a CCM worker has 15 seconds to decide on the binary stay-or-delete outcome.

 

++++++++++

Conclusion: What platforms want

The visual theorist W. J. T. Mitchell famously asked, “What do pictures really want?” in an effort to shift visual interpretation of art to nuanced and challenging questions of desire, agency and complexity of images (or paintings, in the case of his essay), as well as to suggest a potency and power that are inherent to pictures themselves, if only they can be apprehended: “Pictures are things that have been marked with all the stigmata of personhood: they exhibit both physical and virtual bodies; they speak to us, sometimes literally, sometimes figuratively.” [10] Yet, as I hope to have adequately convinced the reader by now, the insertion of pictures, images and photographs into the social media ecosystem results inevitably in a flattening and a removal of sense and meaning. This is accomplished by virtue of the architectural logic of the platforms themselves and enforced through the moderation process, where images (and other content) are evaluated for their contributions to the platform’s economy above all else. The question becomes, “What do platforms want?” and the content is made to respond to those desires and values through gatekeeping and moderation practices.

Platforms have traditionally avoided acknowledgment that they undertake moderation at all, equivocating about who or what does the moderation work, how and under what conditions, and refusing to articulate the specific policies and procedures by which their moderation practices are undertaken. These characteristics result in a logic of opacity that works in conjunction with the treatment of content as commodity. Content is measured less for its social meaning or value, for its impact or, per Mitchell, for its desire, than for its characteristics as defined by the internal and inaccessible moderation policies and practices which are grounded in the firms’ calculations about what kinds of content will attract or repel advertising revenue. Because acts of deletion are largely intangible to users — after all, how can a user account for or perceive a lack outside of her own direct experience of deletion — even the ability to make meaning out of the removal of content is out of reach.

In her book-length study of the photographic records of murder victims under the Khmer Rouge, archival theorist Michelle Caswell (2014) wrestled with the flawed concept of neutrality as applied to archives in taking on the display of images of atrocities. Ultimately, Caswell questions the ability of any institution to adequately frame and contextualize such imagery. In the case of social media platforms, where images are rendered into content to be evaluated for their potential economic value to the platform over all other characteristics, this crisis of context is even more acute, as the means for distribution and display are fundamentally reductive and depoliticizing and the mechanism for circulation is a financial exchange.

Caswell, in her work, ultimately asks where, if anywhere, it is appropriate to exhibit photos of atrocities such as those of the Khmer Rouge’s victims at Tuol Sleng, or, in this case, “The Terror of War” depicting the burning of the child Kim Phúc in the Vietnamese countryside. Perhaps the problem is so deeply structural that spaces like Facebook and other UGC-reliant advertising platforms, by virtue of their own ecosystem made up of architecture and functionality, economy and policy, ultimately suffer from an inability to convey any real depth of meaning at all. Under these circumstances, the utility of platforms, governed by profit motive and operating under a logic of opacity, in serving greater ideals or in challenging the status quo is seriously in doubt.

 

About the author

Sarah T. Roberts is an assistant professor in the Department of Information Studies, Graduate School of Education & Information Studies, University of California, Los Angeles.
E-mail: sarah [dot] roberts [at] ucla [dot] edu

 

Notes

1. As Tuosto (2008, p. 22) points out, “Journalism is not simply investigative reporting for the sake of finding truth; it is a capitalist enterprise with a market and consumers to which it must cater.” This was the case both during Vietnam and in the Iraq wars covered in the cited article, but the embedding process in the early 2000s fundamentally changed journalists’ relationships to their subjects and altered the outcome of war reporting, as studies went on to show.

2. Adorno, 2001, p. 77.

3. Researcher Kate Klonick speculated that this same photo had likely been removed “thousands of times” before the Norwegian case brought the issue to prominence (Angwin and Grassegger, 2017).

4. DiChristopher, 2017, at https://www.cnbc.com/2017/06/04/facebook-wants-to-be-hostile-environment-for-terrorists.html.

5. Farid, 2016, at https://www.counterextremism.com/video/how-ceps-eglyph-technology-works.

6. Cohen, 2015, p. 108.

7. A case in point is an example from a firm using CCM to manage its UGC-generated video content: “Whether or not the policy group realized it, the worker told me, its decisions were in line with U. S. foreign policy: to support various factions in Syria, and to disavow any connection to or responsibility for the drug wars of Northern Mexico. These complex, politically charged decisions to keep or remove content happened without anyone in the public able to know. Some videos appeared on the platform as if they were always supposed to be there, others disappeared without a trace” (Roberts, 2017b).

8. Hardt and Negri, 2004, pp. 108–111.

9. Federici, 2004, p. 15.

10. Mitchell, 1996, p. 72.

 

References

Theodor Adorno, 2001. The culture industry: Selected essays on mass culture. New York: Routledge.

Davey Alba, 2017. “Google drops firm reviewing YouTube videos,” Wired (4 August), at https://www.wired.com/story/google-drops-zerochaos-for-youtube-videos/, accessed 31 August 2017.

Julia Angwin and Hannes Grassegger, 2017. “Facebook’s secret censorship rules protect white men from hate speech but not black children,” Mother Jones (28 June), at http://www.motherjones.com/politics/2017/06/facebooks-secret-censorship-rules-protect-white-men-from-hate-speech-but-not-black-children/, accessed 31 August 2017.

Daniel Bell, 1976. The coming of post-industrial society: A venture in social forecasting. New York: Basic Books.

Rena Bivens, 2017. “The gender binary will not be deprogrammed: Ten years of coding gender on Facebook,” New Media & Society, volume 19, number 6, pp. 880–898.
doi: https://doi.org/10.1177/1461444815621527, accessed 7 February 2018.

Malachy Browne, 2017. “YouTube removes videos showing atrocities in Syria,” New York Times (22 August), at https://www.nytimes.com/2017/08/22/world/middleeast/syria-youtube-videos-isis.html, accessed 31 August 2017.

Taina Bucher, 2013. “The friendship assemblage: Investigating programmed sociality on Facebook,” Television & New Media, volume 14, number 6, pp. 479–493.
doi: https://doi.org/10.1177/1527476412452800, accessed 7 February 2018.

Michelle Caswell, 2014. Archiving the unspeakable: Silence, memory, and the photographic record in Cambodia. Madison: University of Wisconsin Press.

Nicole S. Cohen, 2015. “From pink slips to pink slime: Transforming media labor in a digital age,” Communication Review, volume 18, number 2, pp. 98–122.
doi: https://doi.org/10.1080/10714421.2015.1031996, accessed 7 February 2018.

Kate Crawford and Tarleton Gillespie, 2016. “What is a flag for? Social media reporting tools and the vocabulary of complaint,” New Media & Society, volume 18, number 3, pp. 410–428.
doi: https://doi.org/10.1177/1461444814543163, accessed 7 February 2018.

Tom DiChristopher, 2017. “Facebook wants to be ‘hostile environment for terrorists’ as May calls for internet regulations,” CNBC Tech (4 June), at https://www.cnbc.com/2017/06/04/facebook-wants-to-be-hostile-environment-for-terrorists.html, accessed 31 August 2017.

Hany Farid, 2016. “How CEP’s eGLYPH technology works,” Counter Extremism Project (8 December), at https://www.counterextremism.com/video/how-ceps-eglyph-technology-works, accessed 31 August 2017.

Silvia Federici, 2012. Revolution at point zero: Housework, reproduction, and feminist struggle. Oakland, Calif.: PM Press.

Silvia Federici, 2004. Caliban and the witch: Women, the body and primitive accumulation. Brooklyn, N. Y.: Autonomedia.

Samuel Gibbs, 2017. “Facebook live: Zuckerberg adds 3,000 moderators in wake of murders,” Guardian (3 May), at https://www.theguardian.com/technology/2017/may/03/facebook-live-zuckerberg-adds-3000-moderators-murders, accessed 31 August 2017.

Michel M. Haigh, Michael Pfau, Jamie Danesi, Robert Tallmon, Tracy Bunko, Shannon Nyberg, Bertha Thompson, Chance Babin, Sal Cardella, Michael Mink and Brian Temple, 2006. “A comparison of embedded and nonembedded print coverage of the U. S. invasion and occupation of Iraq,” International Journal of Press/Politics, volume 11, number 2, pp. 139–153.
doi: https://doi.org/10.1177/1081180X05286041, accessed 7 February 2018.

Alexander Halavais, 2009. Search engine society. Malden, Mass.: Polity.

Michael Hardt and Antonio Negri, 2004. Multitude: War and democracy in the age of Empire. New York: Penguin Press.

Mél Hogan, 2015. “Data flows and water woes: The Utah Data Center,” Big Data & Society, volume 2, number 2.
doi: https://doi.org/10.1177/2053951715592429, accessed 15 August 2015.

Mél Hogan and Tamara Shepherd, 2015. “Information ownership and materiality in an age of big data surveillance,” Journal of Information Policy, volume 5, pp. 6–31.
doi: https://doi.org/10.5325/jinfopoli.5.2015.0006, accessed 7 February 2018.

Sam Levin, Julia Carrie Wong and Luke Harding, 2016. “Facebook backs down from ‘napalm girl’ censorship and reinstates photo,” Guardian (9 September), at https://www.theguardian.com/technology/2016/sep/09/facebook-reinstates-napalm-girl-photo, accessed 31 August 2017.

Natasha Lomas, 2017. “Facebook again under fire for spreading illegal content,” TechCrunch (13 April), at http://social.techcrunch.com/2017/04/13/facebook-under-fire-for-spreading-illegal-content/, accessed 31 August 2017.

W. J. T. Mitchell, 1996. “What do pictures ‘really’ want?” October, volume 77, pp. 71–82.
doi: https://doi.org/10.2307/778960, accessed 7 February 2018.

Safiya Umoja Noble, 2018. Algorithms of oppression: How search engines reinforce racism. New York: NYU Press.

Frank Pasquale, 2015. The black box society: The secret algorithms that control money and information. Cambridge, Mass.: Harvard University Press.

Sarah T. Roberts, 2017a. “Content moderation,” In: Laurie A. Schintler and Connie L. McNeely (editors). Encyclopedia of big data. Cham, Switzerland: Springer International.
doi: https://doi.org/10.1007/978-3-319-32001-4_44-1, accessed 22 August 2017.

Sarah T. Roberts, 2017b. “Social media’s silent filter,” Atlantic (8 March), at https://www.theatlantic.com/technology/archive/2017/03/commercial-content-moderation/518796/, accessed 31 August 2017.

Jon Stone, 2017. “Theresa May says the Internet must now be regulated following London Bridge terror attack,” Independent (4 June), at http://www.independent.co.uk/news/uk/politics/theresa-may-internet-regulated-london-bridge-terror-attack-google-facebook-whatsapp-borough-security-a7771896.html, accessed 31 August 2017.

Kylie Tuosto, 2008. “The ‘grunt truth’ of embedded journalism: the new media/military relationship,” Stanford Journal of International Relations, volume 10, number 1, pp. 20–31, and at https://web.stanford.edu/group/sjir/pdf/journalism_real_final_v2.pdf, accessed 7 February 2018.

 


Editorial history

Received 22 January 2018; accepted 7 February 2018.


Creative Commons License
“Digital detritus: ‘Error’ and the logic of opacity in social media content moderation” by Sarah T. Roberts is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Digital detritus: ‘Error’ and the logic of opacity in social media content moderation
by Sarah T. Roberts.
First Monday, Volume 23, Number 3 - 5 March 2018
http://firstmonday.org/ojs/index.php/fm/article/view/8283/6649
doi: http://dx.doi.org/10.5210/fm.v23i3.8283




