What social media platforms can learn from audience measurement: Lessons in the self-regulation of "black boxes"
First Monday

by Philip M. Napoli and Anne Napoli

The widespread concerns about the misuses and negative effects of social media platforms have prompted a range of governance responses, including preliminary efforts toward self-regulatory models. Building upon these initiatives, this paper looks to the self-regulation of the audience measurement industry as a possible template for the self-regulation of social media. This article explores the parallels between audience measurement systems and social media platforms; reviews the self-regulatory apparatus in place for the audience measurement industry; and considers the lessons that the self-regulation of audience measurement might offer to the design and implementation of self-regulatory approaches to social media.


The parallels between audience measurement and social media
Self-regulation of audience measurement: Origins, evolution, operation, and authority
Exploring applicability to social media




Social media platforms are coming under greater legal and regulatory scrutiny. Growing concerns over issues such as disinformation, hate speech, foreign interference, political advertising, and the security of user data have led to many calls for government intervention (see, e.g., Clifford, 2018; U.K. House of Commons, 2018; McNamee, 2018; Weixelbaum, 2018). In the U.S., a wide range of policy proposals are circulating, including: requiring social media platforms to identify and label bots and inauthentic accounts; limiting platforms’ data gathering and sharing behaviors; restricting the micro-targeting of political advertisements; revising the expansive immunity from liability that social media platforms currently enjoy under Section 230 of the Communications Decency Act; requiring greater algorithmic transparency; and breaking up large firms such as Facebook (Feld, 2019; O’Sullivan, 2019; Rodrigo, 2019; Warner, 2018). In other countries, such as Germany and France, significant regulatory interventions into the operation of social media platforms have already been adopted (Napoli, 2019).

Many critics of government intervention have argued that some form of industry self-regulation may be more appropriate and effective (see, e.g., Aitoro, 2018; European Commission, 2018a). The European Commission’s High Level Expert Group on Fake News, for instance, recommended a self-regulatory approach based on a “clearly defined multi-stakeholder engagement process” [1]. In keeping with the recommendations in this report, major digital platforms, social networks, and advertisers adopted an EU Code of Practice on Disinformation (European Commission, 2018b) that includes a range of commitments, focused on areas such as transparency and disclosure, consumer empowerment, and maintaining the integrity of their services. In addition, Facebook has initiated an effort to construct an independent panel to review controversial content moderation decisions (Harris, 2019).

The goal of this article is not to argue for the greater feasibility or efficacy of either the government- or self-regulatory approach. Rather, the goal of this article is to consider one possible self-regulatory model and its applicability to — or at least lessons for — social media.

Self-regulation has a long tradition in the U.S. media sector, with the motion picture, music, television, and videogame industries all adopting self-imposed and largely self-designed content ratings systems to restrict children’s access to adult content. These systems all arose in response to threats of direct government regulation (see Campbell, 1999), which is perhaps a reflection of the somewhat inconsistent track records of media firms when it comes to corporate social responsibility (Sandoval, 2014). These threats typically took the form of congressional hearings or inquiries — classic instances of what is often called “regulation by raised eyebrow” (Corn-Revere, 1988).

Over the past three years, there has been a steady stream of congressional hearings on various aspects of the operation of social media, including: their role in the 2016 election (U.S. Congress. Senate. Select Committee on Intelligence, 2017); their data gathering and sharing practices (U.S. Congress. Senate. Committee on the Judiciary, 2018); their potential efforts to suppress conservative viewpoints (U.S. Congress. House. Judiciary Committee, 2018b); their preparedness for foreign interference efforts in advance of the 2018 election (U.S. Congress. House. Judiciary Committee, 2018a); and their role in the dissemination of violence and political extremism (U.S. Congress. Senate. Committee on Commerce, Science, & Transportation, 2019). These activities suggest that a similar process of regulation by raised eyebrow may be at work, and that some form of systematic industry self-regulation is likely to emerge.

The issues of concern in the social media realm are far more complex and multi-faceted than the concerns about children being exposed to adult content that are at the core of the self-regulatory structures and mechanisms that have been established for other media sectors (though protecting children from adult content is also a problem that confronts social media platforms). There is, however, another media-related self-regulatory context that may better reflect the nature of the concerns surrounding — and the nature of the operation of — social media, and thus may provide some useful guidance: the audience measurement industry.

The media industries are served by a wide range of audience measurement systems. The job of these systems is to provide accurate and reliable measurement of the audiences consuming content through the various media technologies (see generally, Napoli, 2011). Whether it is television, radio, online, or print, there are increasingly interconnected audience measurement systems that provide content producers/distributors and advertisers with data on who’s consuming what, and on the various demographic (and in some cases behavioral and psychographic) characteristics of these audiences. These measurement systems operate under a long-established — though seldom studied — self-regulatory model.

This paper explores the structure and dynamics of self-regulation in the U.S. audience measurement industry, and its possible lessons for social media. It is premised on the contention that there are a number of important similarities between audience measurement systems and social media algorithms. As a result, there is utility in examining the self-regulation of audience measurement in some detail, with an eye toward identifying lessons that could help guide the construction of a self-regulatory apparatus for social media.

This article begins by laying out the key similarities between audience measurement systems and social media platforms. As this section will illustrate, there are a number of important similarities, including industry structure, social/cultural impact, and “black box” characteristics. The second section provides detail on the origins, evolution, and operation of the self-regulatory system for audience measurement in the U.S. The third section uses the similarities between audience measurement and social media algorithms as a starting point for considering the applicability of various aspects of the self-regulation of audience measurement to social media. The concluding section summarizes the key findings and considers avenues for future research.



The parallels between audience measurement and social media

There are enough meaningful parallels between the audience measurement and social media industries that an examination of the self-regulatory system for audience measurement should inform discussions about possible self-regulation for social media. This section delineates these key similarities.

Lack of competition

Both the audience measurement and social media industries are characterized by a lack of competition. In audience measurement, the Nielsen Company has established itself in a dominant position, as the leading provider of television, radio, and online audience measurement in the U.S. (Homonoff, 2015; Worden, 2011). Similar non-competitive conditions tend to exist in other countries as well, despite the fact that the barriers to entry into the industry have been characterized as being fairly low (Furchtgott-Roth, et al., 2007). Traditionally in the audience measurement industry, each media platform has been measured by one dominant firm, with secondary firms fighting for market share. Clients of these dominant firms have long complained about issues such as pricing, lack of innovation, and unilateral decision-making (typically in relation to technological or methodological changes in the measurement systems) — concerns that generally are characteristic of monopolistic scenarios [2].

The displacement of the dominant firm by one of these upstarts is a very rare phenomenon in audience measurement. More often, the dominant firm simply adopts whatever technological or methodological innovation distinguishes the upstart firm; purchases the upstart firm outright; or simply wins the war of attrition, given its greater size and resources (see Bourdon and Méadel, 2015). So, for instance, when the field of “social TV analytics” developed, there were initially a large number of upstart firms (Napoli, 2014) [3]. Nielsen quickly purchased one of these firms, and soon established itself as the market leader in social TV analytics (Kosterich and Napoli, 2016). Nielsen’s monopoly status in television audience measurement has even been acknowledged by the courts. In an antitrust lawsuit from a disgruntled client, the 11th Circuit Court of Appeals decided that, while Nielsen was indeed a monopoly, the company was not using its market power to keep competitors out of the market (Goetzl, 2013).

Indeed, perhaps one of the most interesting aspects of the market for audience measurement data is that there are a variety of reasons that the market participants who rely upon audience data (i.e., ratings) prefer a single provider over competition [4]. The logic here is that, given that audience measurement systems provide the “currency” that serves the market for media audiences, the presence of multiple competing currencies is inherently inefficient. In a competitive audience measurement scenario, advertisers and content providers would need to incur the additional costs of subscribing to multiple audience measurement systems, a cost these stakeholders have repeatedly demonstrated they are unwilling to bear. Further inefficiency arises from how negotiations are affected when market participants have access to more than one currency. Market participants will tend to gravitate to the measurement results that best serve their financial interests, which can lead to substantial disagreements when different measurement systems are producing significantly different audience estimates. It is difficult to negotiate contract terms in an environment of multiple currencies. For these reasons, some analysts have contended that audience measurement is a natural monopoly (Taneja, 2013; Bourdon and Méadel, 2015).

A very similar market structure seems to have emerged in the social media industry. Like audience measurement, the social media industry is not a monopoly in a strict sense. There are multiple social media platforms in operation, and, in theory, the barriers to entry for new social media platforms are relatively low. However, the reality is that Facebook has established an incredibly dominant position in the social media industry, with its closest competitors, such as Twitter, Snapchat, and YouTube, lagging far behind in terms of both users and ad revenues. This mirrors the way competitors to Nielsen, such as Rentrak and comScore, have market shares that are dwarfed by Nielsen’s. Like Nielsen in audience measurement, Facebook has absorbed upstart social media platforms (such as Instagram and WhatsApp) and related firms — an activity that is at the core of the antitrust scrutiny that Facebook is currently undergoing (Brody and Stoller, 2019). In 2018, Facebook accounted for nearly 80 percent of all social media advertising spending in the U.S., leaving the various other platforms to divvy up the remaining 20 percent (Statista, n.d.). From a usage standpoint, Facebook is used by nearly 70 percent of adults in the U.S. Facebook-owned Instagram is used by roughly 35 percent of adults; followed by Pinterest at 29 percent and Twitter at 24 percent (Wagner and Molla, 2018).

As has been the case with audience measurement, some analysts of the social media industry have asserted that it too may be a natural monopoly (Moazed and Johnson, 2016; Taplin, 2017), though this seems to be a more difficult case to make in the context of social media than it is for audience measurement. Nonetheless, it is important to recognize the conditions that have contributed to Facebook’s dominance, as they parallel, to some extent, the conditions that have facilitated single-firm dominance in audience measurement. Specifically, Facebook’s dominance reflects the importance of network effects — the extent to which the size of the user base of a service enhances the value of the service to potential users. Thus, the more people who use Facebook, the more appealing Facebook becomes to other potential users, since the platform represents the most efficient way to have the widest reach possible (a one-stop shop for staying connected with all of your friends, family, work colleagues, favorite brands, news organizations, etc.) (see, e.g., Magnin, 2016; Wu, 2010). And, of course, the more people who use Facebook, the more appealing Facebook becomes to advertisers, given the platform’s massive reach across so many different geographic, demographic, and psychographic categories (Upson, 2018). Network effects are also fundamental to the ascension of a dominant firm in audience measurement. The more market participants that utilize a particular measurement system as their currency, the stronger the incentive for other market participants to adopt the same currency (Viljakainen, 2013).
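The compounding nature of network effects can be sketched with a simple back-of-the-envelope calculation. One common, admittedly rough, approximation (Metcalfe’s law) values a network by its number of possible pairwise connections. The function below is purely an illustration of that approximation, not a model of any actual platform:

```python
# Illustrative sketch of network effects via Metcalfe's law:
# a network's value is approximated by the number of possible
# pairwise connections among its n users, n * (n - 1) / 2.

def pairwise_connections(n_users: int) -> int:
    """Number of possible user-to-user connections in a network of n users."""
    return n_users * (n_users - 1) // 2

# Doubling the user base roughly quadruples the connection count,
# which is why a dominant platform's lead tends to compound.
small = pairwise_connections(1_000)   # 499,500 connections
large = pairwise_connections(2_000)   # 1,999,000 connections
assert large / small > 3.9
```

Under this approximation, each new user makes the platform more valuable to every existing and prospective user, which helps explain why challengers struggle to dislodge an incumbent even when switching costs are nominally low.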

Susceptibility to manipulation

Both audience measurement systems and social media platforms are susceptible to manipulation by third parties. In audience measurement, ratings systems need to insulate themselves from various forms of rating distortion. The integrity of television and radio audience ratings can be affected, for instance, if members of the measurement sample are affiliated with, or known to, any of the media outlets being measured; and are thus subject to influence or manipulation. Radio and television programmers also have been known to try to “hype” ratings during measurement periods, with actions such as contests or sweepstakes intended to temporarily boost audience sizes beyond their normal levels (Webster, et al., 2014). In online audience measurement, bots are a common tool for creating inflated audience estimates; and thus must be policed vigilantly (Greiner, 2017). Even print audience measurement has not been immune to manipulation efforts. In one high-profile case, a major New York newspaper engaged in an elaborate scheme of “circulation fraud” intended to deceive the Audit Bureau of Circulation, the organization that was, at that time, charged with auditing and reporting the circulation of print newspapers (Shin, 2005). In this case, the paper’s fraud strategies included placing bogus newspaper vendors at various points around the city for the auditors to see. Once the auditors departed, these bogus vendors essentially threw all of their newspapers in the trash and left.

These scenarios are not unlike those facing social media platforms, which face constant efforts by third parties seeking to “game” their news feed algorithms in order to achieve higher placement and wider distribution of their content for economic or political gain (Bradshaw and Howard, 2018; Svantesson and van Caenegem, 2017). An entire industry has arisen around “optimizing” content for social media curation algorithms. At the same time, platforms such as Facebook are constantly adjusting their algorithms to diminish the prominence of “click bait” and other forms of “low quality” content that often are produced with an eye towards exploiting what is known about the algorithms’ ranking criteria and how these criteria impact the performance of various types of content (Napoli, 2019).

Those seeking to manipulate the algorithmic content curation systems of social media platforms have a wide range of tools and techniques at their disposal. At the most basic level, they can flood the platforms with content, which requires relatively little in terms of resources when bots and fake accounts can be deployed to disseminate, share, and react to content, thereby increasing the content’s prominence in the social media ecosystem (Bradshaw and Howard, 2018). A key strategy here involves creating “the illusion of popularity” through artificially inflated shares, likes, comments, and reactions, given the importance of popularity and reaction indicators in the ranking criteria of social media algorithms (Hern, 2017). Fake followers can even be purchased in bulk (Confessore, et al., 2018). In this way, artificial popularity serves as an algorithmic input that can then facilitate legitimate popularity.
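A simplified sketch can make this vulnerability concrete. Suppose, purely hypothetically, that a curation algorithm scored posts as a weighted sum of a few engagement signals (real platforms use far more criteria, and their actual signals and weights are proprietary); artificially inflating a single heavily weighted signal, such as shares, would then be enough to outrank genuinely popular content:

```python
# Illustrative only: a hypothetical weighted-sum ranking score.
# The signals and weights below are invented for demonstration;
# they are not any platform's actual ranking criteria.

WEIGHTS = {"likes": 1.0, "comments": 4.0, "shares": 8.0}

def rank_score(post: dict) -> float:
    """Score a post as a weighted sum of its engagement counts."""
    return sum(WEIGHTS[signal] * post.get(signal, 0) for signal in WEIGHTS)

organic = {"likes": 120, "comments": 10, "shares": 5}     # genuine engagement
inflated = {"likes": 120, "comments": 10, "shares": 500}  # bot-purchased shares

# The bot-inflated post outranks the organically popular one,
# even though its genuine engagement is identical.
assert rank_score(inflated) > rank_score(organic)
```

In this toy model, the “illusion of popularity” works precisely because inflated counts are indistinguishable, as algorithmic inputs, from genuine ones; the manufactured rank boost then exposes the content to real users, whose genuine reactions feed back into the score.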

In many instances, fake social media accounts are created that are incredibly well disguised as legitimate news sources; often operating as such for years before engaging in fake news dissemination. Such “sleeper” accounts achieve legitimacy not only with the audience, but with the social media algorithms that employ criteria intended to measure the legitimacy and trustworthiness of individual accounts (Napoli, 2019).

This discussion represents only the tip of the iceberg in terms of how the curation algorithms of social media platforms are subject to — and thus need to be policed for — third party manipulation. The key point here is that third-party efforts to “game the system” characterize both the audience measurement and social media industries.

Social/cultural significance of outputs

The actions of audience measurement firms and social media platforms can have significant social and cultural repercussions. If an audience measurement system under-represents a certain demographic group in its methodology, the amount of content produced for that group’s needs/interests may diminish [5]. This can lead to the decline of culturally significant content that appeals to a small but loyal audience, or to the decline of content that serves the needs and interests of specific minority groups.

Concerns such as these have been at the core of controversies that have arisen around audience measurement over the past two decades (Bourdon and Méadel, 2015). For instance, when radio audience measurement service Arbitron (since absorbed by Nielsen) began altering its methodology from paper diaries to portable electronic meters, early results suggested that the audience estimates for minority-targeted radio stations dropped dramatically due to the methodological shift (Napoli, 2009). Similar concerns arose when Nielsen migrated from paper diaries to electronic set-top boxes for its local television audience measurement system (Napoli, 2005). It’s worth noting that both of these instances led to congressional hearings, questions about the lack of competition in audience measurement, and threats (never acted upon) of governmental intervention into the audience measurement industry (Napoli, 2005; Napoli, 2009). Throughout such controversies, measurement firms maintained the position that they were seeking to provide the most objective, accurate, and reliable audience estimates possible. Clearly then, issues of neutrality, representation, and media diversity are tied up in the dynamics of audience measurement.

Similar concerns permeate the operation of social media platforms. Social media platforms have moved into a powerful gatekeeping position for many forms of content available online. As a result, political and cultural concerns similar to those found in audience measurement have arisen in the social media realm. The examples here are numerous and wide ranging. Facebook, for instance, was subjected to intense criticism when it was found that reporting of the protests surrounding police behaviors in Ferguson, Missouri, failed to appear prominently on the platform, while such reporting was appearing quite prominently on Twitter (Ferguson, 2014). Twitter, however, endured similar criticisms when it appeared that the substantial discussion of the Occupy movement was failing to register on the platform’s Trending topics list (Gillespie, 2011). In both of these instances, the explanations for the absences revolved around the complex set of factors and weightings associated with the relevant algorithms (Napoli, 2019).

More generally, we have seen increasing concern over the ways in which social media algorithms may facilitate the construction of partisan filter bubbles (see, e.g., El-Bermawy, 2016), and the ways in which these algorithms may systematically push users toward consuming more extreme, more politically polarizing, and ultimately, less factually accurate, content (see, e.g., Tufekci, 2018). The U.S. Congress has even recently held hearings looking into whether search and social media algorithms are systematically suppressing conservative viewpoints (see, e.g., U.S. Congress. House. Judiciary Committee, 2018a; U.S. Congress. House. Judiciary Committee, 2018b). All of these concerns reflect the substantial political and cultural ramifications of the curation process of social media algorithms and the extent to which they adequately represent diverse viewpoints and populations.

And, just as a change in the technology or methodology of an audience measurement system can significantly affect the performance of individual — or categories of — content providers, so too can a slight tweak to a social media platform’s news feed algorithm suddenly impact the size of the audience that the platform drives to certain types of online content providers. Once again, the underlying political or cultural ramifications may be profound. For instance, in January of 2018, Facebook announced a change to its News Feed algorithm that would place higher priority on posts from “friends and family” and de-prioritize, to some extent, posts from brands and publishers (Mosseri, 2018). By many accounts, this algorithmic adjustment resulted in steep declines in the audience traffic that Facebook drives to publishers’ sites (Oremus, 2018). And when Facebook further adjusted its algorithm to favor posts from news outlets deemed most “trustworthy” in user surveys, the result (not surprisingly) was increased traffic to certain news organizations and decreased traffic to others (Oremus, 2018; M. Ingram, 2018). YouTube has similarly adjusted its algorithm to favor more “authoritative” sources, in ways that are similarly likely to affect the audience reach of certain content providers (Nicas, 2017; Hern, 2018).

Needless to say, there are profound political ramifications in how these platforms’ operationalizations of “trustworthiness” and “authoritativeness” affect which types of media organizations are able to thrive on these platforms. Many online content providers have become nearly as dependent on social media referrals for their traffic (and ad revenue) as ad-supported television programmers, radio stations, and Web sites are on the audience measurement systems that depict their audience size and composition for advertisers (Rashidian, et al., 2018). In this way, the economic prospects for the producers of contemporary news and cultural content are inextricably linked with the dynamics of both audience measurement systems and social media algorithms, and are vulnerable to any methodological or algorithmic changes. The key point here, then, is that issues of political and cultural representation, and the diversity of such representation, are tightly intertwined in the activities of — and concerns about — the operation of both audience measurement systems and social media platforms.

Ambiguous First Amendment status

It is also worth noting the somewhat ambiguous First Amendment status of both the ratings produced by audience measurement firms and the algorithms produced by social media firms. In both cases, there remains room for debate as to whether audience ratings systems and algorithms represent forms of speech that are deserving of full First Amendment protection. The uncertainty in both cases derives from the broader (and still contentious) question of whether data represent a form of speech eligible for First Amendment protection (see, e.g., Bambauer, 2014). Data represent the primary output of audience measurement systems. And in the case of social media algorithms, data are the key input that generates their content curation outputs. Thus, the two contexts represent somewhat different instances of the intersection of data and speech, and the First Amendment uncertainty that can arise. This issue is of particular relevance to discussions of possible governmental regulation in these spheres, given the extent to which the First Amendment provides substantial protections against government intervention and increases the likelihood of a self-regulatory model taking hold in speech-related contexts.

In the case of audience measurement, measurement firms have argued that they deserve full First Amendment protection (and thus freedom from government regulation), as well as copyright protection, because audience ratings represent their “opinions,” rather than the kind of factual, commercial information that generally receives far less First Amendment and copyright protection (Napoli, 2009). Yet these same measurement firms also have described their ratings as “objective” and “accurate” when faced with accusations of inaccuracy; terms that seem somewhat incompatible with the idea of audience ratings as opinions (Napoli, 2009). Nielsen’s President and CEO once testified before Congress that Nielsen is “in the truth business” (Whiting, 2005). Given the continued ambiguity around the parameters of commercial speech, and the fact that no court has ruled on the speech status of audience ratings, exactly what, if any, degree of First Amendment protection audience ratings should receive remains unclear (Napoli, 2009).

In terms of social media, legal scholars continue to debate whether algorithms and their outputs represent a form of speech entitled to full First Amendment protection. Some analysts contend that the increasing scope of First Amendment protections means that algorithms are entitled to substantial First Amendment protection, particularly given the challenge of clearly drawing a line separating algorithmic and human decision-making (see, e.g., Benjamin, 2013). Others contend that the operation of algorithms is inherently “functional” and lacking the expressive characteristics of speech that tend to trigger First Amendment protection (Wu, 2013) [6]. What these arguments both have in common, however, is the recognition that existing legal doctrine does not yet offer sufficient clarity as to exactly if or how the First Amendment should apply to algorithms. As with audience measurement systems, the First Amendment status of algorithms remains unclear, which complicates the question of the extent to which they can be subject to regulatory interventions.

“Black boxes”

Finally, and perhaps most significantly, both audience measurement systems and social media algorithms have frequently been characterized as “black boxes.” The term black box refers to any system that can be observed in terms of its inputs and outputs, but the inner workings of which remain opaque. The term has been applied ad infinitum to the algorithms that power social media platforms, and to the lack of transparency that characterizes them (Diakopoulos, 2013; Pasquale, 2015). Of course, competitive advantage in the social media sector depends, to some extent, on these curation algorithms remaining black boxes. Opacity is also essential for discouraging as much as possible the gaming of these curation algorithms by third parties. As a result, end users of social media platforms know relatively little about the criteria and prioritization weights that determine the composition of their news feeds.

At the same time, there are a variety of compelling reasons why some evaluation of the inner workings of these black boxes would be valuable to end users. Greater transparency could help to reveal potential biases in these algorithmic systems. It could also inform users of the specific criteria (and relative weights of these criteria) determining the curation of their news feeds, leading to more digitally literate (and perhaps less impressionable) users.

Audience measurement firms face similar incentives to keep the details of their measurement systems proprietary (Metzger, 2005). This is largely for competitive reasons. Methodological innovation can be a significant source of competitive advantage in the audience measurement industry; whether in terms of incumbent firms using such innovation to stave off competition or increase their market share, or in terms of new competitors entering the market and utilizing methodological innovations to try to establish a foothold or increase their market share. Therefore, it’s not surprising that audience measurement systems, like social media algorithms, have frequently been characterized as “black boxes” (see, e.g., Lafayette, 2018). End users of these audience measurement systems — the media companies, advertising and media buying firms, and advertisers that rely upon audience ratings in their decision-making — thus find themselves in the position of relying upon services whose inner workings remain relatively opaque. Here again, one could imagine that subscribers to audience measurement services would value some mechanism for evaluating the methodological rigor and reliability of these systems, even if they could not be provided with complete details regarding how these systems go about producing audience ratings data.


In the end, audience measurement systems and social media algorithms both produce ratings. Audience measurement firms produce ratings that reflect the size and composition of the audiences for media content. These ratings then determine the allocation of advertising dollars. Social media algorithms produce ratings of the estimated relevance of individual content options available to the social media user. These ratings then determine the appearance and rank ordering of the content within an individual’s curated news feed or trending list; and thus, to some extent, the size of the audience that consumes the content made available on the platform.

In this way, audience ratings and social media algorithms are both information regimes. Audience measurement systems have long been recognized as market information regimes — mechanisms for assessing the relative performance of market participants directed at informing the decision-making of marketplace participants (Anand and Peterson, 2000; Kosterich and Napoli, 2016). Social media algorithms have been characterized as user information regimes — similarly data-driven mechanisms for assessing popularity/relevance directed at informing the decision-making of the users of individual platforms (Webster, 2010). Their shared status as information regimes helps to explain the many similarities outlined above. These similarities suggest that the self-regulatory apparatus for audience measurement may be relevant to social media. This self-regulatory apparatus is the focus of the next section.



Self-regulation of audience measurement: Origins, evolution, operation, and authority

Thus far in the U.S. (and in many other countries as well) (Furchtgott-Roth, et al., 2007), despite the lack of competition, we have not seen direct government regulation of the audience measurement industry. What we have seen instead in the U.S. is a form of self-regulation, instituted through the establishment of an organization known as the Media Rating Council (MRC). The Media Rating Council was created in the 1960s. In keeping with other self-regulatory structures in the media sector, the impetus for the Media Rating Council came from a series of congressional hearings; in this case investigating the accuracy and reliability of television and radio audience measurement systems (Media Rating Council, n.d., a) [7]. Although these investigations did not lead to direct government regulation of the audience measurement industry, they did lead to the establishment of an independent review organization (originally called the Broadcast Rating Council; now known as the Media Rating Council) to assess and certify audience measurement systems (Napoli, 2005). This action reflected the conclusion of the congressional committee’s report, which argued that self-regulation of the audience measurement industry would be more effective and more efficient than government regulation, particularly given the lack of relevant expertise and authority within any existing government agencies [8].

The MRC’s membership represents a cross-section of the media industries and associated stakeholders, including media companies in television, radio, print, and online media, as well as advertising agencies, advertisers, and media buyers (Media Rating Council, n.d., b). Thus, the MRC is comprised exclusively of the various types of clients of the audience measurement industry. Audience measurement firms are ineligible for MRC membership.

The MRC has two primary responsibilities: setting standards and accreditation. In the standard-setting realm, the MRC establishes and maintains minimum standards pertaining to the quality and integrity of the process of audience measurement. Under this heading, the MRC outlines minimum methodological standards related to issues such as sample recruitment, training of personnel, and data processing. The MRC also establishes and maintains standards with regard to disclosure. That is, the MRC has specified those methodological details that must be made available to the customers of an audience measurement service. Included in this requirement is that all measurement services must disclose “all omissions, errors, and biases known to the ratings service which may exert a significant effect on the findings shown in the report” (Media Rating Council, 2011). Measurement firms must disclose substantial amounts (but not all) of the methodological details related to sampling procedures and weighting of data. They must also disclose whether any of the services they offer have not been accredited by the MRC. Essentially, it must be clear to consumers which services have — and have not — received the MRC seal of approval (Media Rating Council, 2011).

The accreditation process is the second key aspect of the MRC’s role that is relevant to the discussion here. The MRC conducts confidential audits of audience measurement systems in order to certify that they are meeting minimum standards of methodological rigor and accuracy. The MRC outsources the audit process to specialized units of large accounting firms such as Ernst & Young. The results of the audits are then evaluated by a committee of representatives of member organizations that use the type of audience data produced by the measurement system being audited. This committee then makes a recommendation to the MRC Board of Directors, which makes the final decision regarding accreditation (Furchtgott-Roth, et al., 2007). All measurement services are audited, at minimum, annually. It is important to note that these audits apply to individual systems, not to measurement firms as a whole. Thus, for instance, the local television audience measurement systems that Nielsen operates in each individual market represent individual audience measurement systems in need of regular auditing.

It is important to emphasize that the MRC conducts these audits in a confidential manner. That is, while the audit decision is made public, the details of the audit — and thus the confidential methodological details of the audience measurement service — remain undisclosed. All individuals involved in conducting and evaluating audits sign non-disclosure agreements. Leaks of audit details have been exceedingly rare.

In sum, the MRC has been described as establishing “standards of ‘ethical behavior’ for rating services.” [9] It is important to emphasize, however, that the MRC operates without any explicit regulatory authority. That is, audience measurement services are free to operate without MRC certification or audits. An audience measurement service that fails an MRC audit is still free to operate in the market. The MRC’s oversight model, then, is one in which the MRC’s “seal of approval” essentially serves as a source of market intelligence for consumers of audience data. Services that abide by the MRC’s standards and pass the MRC’s audits are more likely to gain acceptance in the marketplace. As a representative for an online audience measurement service stated, “we actually win deals off of someone else because we are MRC-accredited” (Baysinger, 2017). Thus, MRC accreditation can serve as a source of competitive advantage.

However, not all stakeholders have been convinced that this voluntary compliance model provides adequate oversight. Consequently, in 2005, at the urging of some television broadcasters, U.S. Senator Conrad Burns of Montana (a former broadcaster) introduced the Fairness, Accuracy, Inclusiveness, and Responsibility in Ratings Act (“FAIR Ratings Act”), which sought to confer greater regulatory authority upon the MRC, making MRC accreditation mandatory for any television audience measurement services on the market. Any methodological or technological changes to existing measurement systems would also be subject to mandatory MRC accreditation (FAIR Act, 2005). Thus, Congress would essentially confer upon the MRC a degree of oversight authority that it otherwise lacked. Under this model, the self-regulatory apparatus remains, but the heightened authority emanates from congressional decree.

Some industry stakeholders supported the proposed legislation — particularly those who were, at the time, unhappy with Nielsen’s recent introduction of its Local People Meter measurement system (see, e.g., Mullen, 2005; Metzger, 2005). Nielsen introduced this system in some markets even though it had not yet received MRC accreditation. The outputs of this new system produced steep ratings declines for some programmers (Napoli, 2005). Many other industry stakeholders were, however, opposed to conferring this greater authority upon the MRC (Crawford, 2005). Even the MRC itself opposed any legislation granting it greater authority, citing antitrust and liability concerns (Ivie, 2005).

This legislation never passed, given the intensity of opposition; and thus, compliance with the MRC’s standards and accreditation process remains entirely voluntary. It is notable that industry stakeholders for the most part rejected the prospect of greater regulatory authority over their contractors. As is often the case, however, the congressional inquiry did lead to some action; in this case, the adoption of a voluntary code of conduct that included stipulations that no measurement system would be launched prior to an audit being conducted; and that when a measurement service introduces a new measurement system, “consideration should be given to discontinuing the existing accredited currency product only when the replacement currency product has successfully achieved accreditation.” [10] This voluntary code of conduct was subsequently approved by antitrust authorities (Barnett, 2008).

Despite the failure of the proposed legislation, it is worth considering the pros and cons of greater regulatory authority for the MRC, as this is an issue that could certainly be relevant to discussions of a similar self-regulatory model for social media. Certainly, such a structure could provide a stronger check on abuses of market power by dominant measurement firms, and could produce greater uniformity in measurement and reporting standards and practices across firms. On the other hand, it has been argued that such a structure might slow the rate at which innovations are brought to market. It has also been argued that placing greater authority in the hands of the self-interested clients of these measurement firms could be problematic, given that different clients often have competing interests, which may lead to rent-seeking in, and manipulation of, the accreditation voting process (see Furchtgott-Roth, et al., 2007). On the voting process front, one of the issues raised in the congressional hearing on the FAIR Act was that some large media conglomerates have multiple voting representatives on the Media Rating Council, given the different and distinct entities that operate under a single corporate umbrella (broadcast networks, cable networks, local station groups, etc.). This situation can give these conglomerates the ability to exert disproportionate influence over MRC decisions; a degree of influence that could become even more pronounced should MRC accreditation become mandatory (see Crawford, 2005). Thus, some stakeholders proposed a complete (though unspecified) overhaul of the MRC’s voting procedures and membership, particularly if MRC accreditation were to become mandatory (Crawford, 2005).



Exploring applicability to social media

In light of the similarities between audience measurement systems and social media algorithms, the key question is whether any aspects of the self-regulatory apparatus for audience measurement are transferable to the social media context. It is important to emphasize at the outset that this analysis is narrowly focused on social media platforms’ news feed and trending algorithms, given the prominence of the problems associated with the operation of these algorithms and their particularly tight associations with audience measurement systems. This is a somewhat different (though certainly connected) focus from where we’ve seen self-regulatory progress thus far — specifically, in regards to content moderation decision-making (Oversight Board Charter, 2019). Other analyses of legislative or self-regulatory approaches to algorithms have considered broader, sector-spanning oversight models, given algorithms’ increasing prominence in fields such as health care, finance, education, and criminal justice (e.g., an “FDA for algorithms”) (Tutt, 2017; O’Neil, 2016; Pasquale, 2015). While there may be potential in such approaches, it is also important that the relevant subject matter expertise accompany the algorithmic design and implementation expertise within any regulatory bodies that are created. For these reasons, the focus here is narrow rather than broad.

With these considerations in mind, the goal here is to explore the prospect of a self-regulatory body similar to the MRC that oversees certain aspects of the operation of social media platforms, and to offer some preliminary thoughts on which aspects of the MRC model do/do not have the potential to transfer to the social media context.


This analysis begins with the question of need. That is, does some sort of industry-developed and supported oversight authority address a compelling need, particularly given the range of actions that social media platforms are now taking to more aggressively police their platforms, evaluate their algorithms, and revise their data gathering and sharing practices?

From a purely pragmatic standpoint, it is important to note that the establishment and maintenance of the MRC appears to have been a successful strategy for holding direct government regulation at bay. Given the growing number of calls for some form of regulation of the social media industry in the U.S., and the fact that, in other countries, explicit regulatory actions have already been taken (Napoli, 2019), it might be in the best interests of the large social media platforms and/or their key stakeholders to move quickly to establish some sort of self-regulatory body. Given the current political climate in the U.S., the prospect of any kind of government intervention into this space seems particularly unsettling.

From a more normative standpoint, it is worth recalling one of the key issues that media firms and advertisers have had with dominant audience measurement firms such as Nielsen — unilateral decision-making. Given the size, reach, and potential impact of social media platforms such as Facebook, YouTube, and Twitter on such a broad range of stakeholders, it seems increasingly problematic that these platforms can suddenly and unilaterally alter their curation algorithms or the privacy policies that affect the user data that feed into these algorithms.

Of course, any such algorithmic or data-gathering changes are typically the result of rigorous internal analysis, but given the trust issues surrounding social media platforms, and their demonstrated vulnerability to misuse and manipulation, the question here is whether from an ethical and public interest standpoint, some more multilateral process of evaluation, input, and decision-making would be desirable. Perhaps, for example, this oversight entity could conduct parallel analyses of the likely impact of a specific algorithmic adjustment on users’ news feeds that could confirm the results of the internal analysis, and perhaps even apply a broader range of evaluative criteria than might characterize the more strategically focused internal analyses. Thus, this hypothetical entity could certify that this algorithmic adjustment meets minimum standards of accuracy, fairness, objectivity, resistance to manipulation, etc., and that it produces the desired outcome.

A further case for the need for such oversight can be made based on contentions that, even with all of the actions that these social media firms have been taking, they still are not doing nearly enough. According to many critics, these platforms still do not demonstrate the necessary commitment to social responsibility; and are often inconsistent or socially or politically misguided in developing and applying their standards and practices (see, e.g., Vaidhyanathan, 2018; Angwin and Grassegger, 2017; D. Ingram, 2018; Shaban, 2018). Such critiques seem to point to the need for a more multilaterally generated code of conduct, similar to that produced by the MRC for audience measurement firms, in order that a broader range of priorities and perspectives guide decision-making. In this way, more widely agreed-upon standards of ethical behavior and social responsibility could guide the behaviors of all platforms (Zelenkauskaite, 2017). Given the magnitude of what is at stake, politically and culturally, in relation to the operation of these platforms, this step towards a more multilateral governance framework seems well justified.

It is not surprising, then, that we are seeing the beginnings of efforts to establish and potentially institutionalize public interest-oriented statements of principles in the digital platform realm. For instance, in late 2016, a collection of companies including Facebook, Google, and IBM formed an industry consortium called the Partnership on AI, which has begun to formulate broad tenets around fairness, trustworthiness, accountability, and transparency (Partnership on AI, n.d.) [11].

In sum, it would seem that many of the same issues around trust and commitment to social responsibility that led to the creation of the Media Rating Council to oversee the audience measurement industry characterize contemporary perspectives on the social media industry. From this standpoint, the case can be made that the need for some sort of independent oversight and associated code of conduct and accreditation system is present.


The next question, then, involves structure. Specifically, to what extent could the structure of the Media Rating Council guide efforts to develop a similar self-regulatory apparatus for social media? One obvious difference between audience measurement and social media involves the universe of relevant stakeholders that might be members of such a council. In audience measurement, this group is inherently limited to the media outlets and advertisers that rely on audience data; though one could make a compelling case that the MRC would benefit from the inclusion of non-profit organizations focused on media diversity issues and/or the needs and interests of minority groups that could be affected by shifts in audience measurement systems. In any case, from amongst the industry stakeholders, any organization willing to pay the membership fee can obtain representation on the Media Rating Council.

In the social media context, the scope and range of stakeholders (at both the individual and organizational levels) that rely upon these platforms is exponentially larger, which makes a straight transfer of the MRC model untenable. In addition, the relationship between the stakeholders on the MRC and the audience measurement firms it oversees is one of client and contractor. This relationship serves as the basis for what regulatory authority the MRC does possess. While this is the nature of the relationship between some stakeholders and social media platforms (e.g., advertisers), it certainly does not adequately characterize the nature of the relationship between social media platforms and other categories of stakeholders (individual users, media companies, brands, political organizations, etc.).

This more complex array of stakeholder relationships is, in many ways, the essence of contemporary digital platforms. And certainly, any self-regulatory body overseeing social media would need to reflect the concerns and interests not only of advertisers and media firms. Essentially, in moving from the market information regime context of audience measurement systems to the user information regime context of social media platforms, the transferability of the MRC structure breaks down, and the question of how to construct and populate an oversight body becomes much more complicated.

While this is not an issue that can be fully resolved here, it is worth briefly considering what types of individuals/organizations might serve on a hypothetical Social Media Council; and how their participation might be determined. One possibility might be to bring together representatives from groups that themselves represent large numbers of relevant stakeholders. Along these lines, the hypothetical Social Media Council might be comprised of representatives from a wide range of industry and professional associations, public interest/civil society organizations, and perhaps even relevant academic associations.

Or, perhaps it would make sense to grant oversight authority to an existing, well-regarded organization with the relevant expertise, such as Computer Professionals for Social Responsibility, or to create a new oversight body, like the Data Rights Board proposed by the Economist (2018). There are no doubt various other permutations that might be worth considering here. Inevitably, of course, the establishment and operation of any such oversight structure would be politically fraught, and highly vulnerable to critiques of exclusionary practices or under-representation of certain stakeholder groups — much more so than has been the case in audience measurement, given the exponentially broader scope and impact of social media platforms.

In terms of the internal organizational structure, it would seem feasible that any types of audits or evaluations conducted by this hypothetical body could follow the template of the MRC, in which this work is conducted by qualified, independent third parties (specialized units of large accounting firms in the case of the MRC). The results of such evaluations could then be presented to the voting membership, and kept confidential under the same types of non-disclosure agreements used by the MRC. As with audience measurement systems, we need to recognize the practical limits of mandated or voluntary transparency in relation to the operation of social media algorithms (Ananny and Crawford, 2018).

Firms with the relevant expertise for conducting these types of analyses already exist. For instance, O’Neil Risk Consulting and Algorithmic Auditing (ORCAA) has received a fair bit of publicity for offering tech firms audits that can detect issues of bias, inaccuracy, and discrimination in their algorithms. Algorithms that pass the audit receive an ORCAA “seal of approval” (Schwab, 2018). ORCAA is a private-sector enterprise offering its services to the tech sector, with little take-up at this point (Schwab, 2018). But such a process could become integrated into the activities and oversight authority of a self-regulatory body.
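As a purely hypothetical illustration of one narrow check such an algorithmic audit might include, consider a disparate impact test comparing favorable algorithmic outcomes across user groups. The sample data, group labels, and the 0.8 threshold (the "four-fifths rule" heuristic from U.S. employment law) are assumptions for illustration; ORCAA's actual audit methodology is proprietary and far more extensive.

```python
# Illustrative sketch only: a toy disparate-impact check of the kind an
# algorithmic audit might include. Data and group labels are hypothetical.

def selection_rates(outcomes):
    """Positive-outcome rate per group.

    `outcomes` maps group label -> list of 0/1 decisions
    (1 = favorable outcome, e.g., content promoted by the algorithm).
    """
    return {group: sum(v) / len(v) for group, v in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    The "four-fifths rule" heuristic flags ratios below 0.8 as
    potential evidence of disparate impact.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: algorithmic decisions split by user group.
sample = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 favorable
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],  # 3/8 favorable
}

ratio = disparate_impact_ratio(sample)
print(f"disparate impact ratio: {ratio:.3f}")  # 0.375 / 0.75 = 0.500
print("flagged" if ratio < 0.8 else "passed")  # prints "flagged"
```

A real audit would, of course, examine many more dimensions (accuracy, manipulation resistance, data handling) and would require confidential access to the system's internals; this sketch merely shows that some audit criteria can be made concrete and testable.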


And finally, there is the question of authority. That is, what degree of regulatory authority might this hypothetical Social Media Council have over the behaviors of social media firms; and what specific areas of activity might this authority extend to?

Let’s start with the question of possible areas of oversight. Perhaps, this hypothetical entity could establish standards regarding algorithmic design and the gathering and sharing of the user data that feeds into these algorithmic systems. It could also establish disclosure standards for the algorithmic details that need to be made publicly available (similar to the MRC’s disclosure requirements). One could similarly imagine this entity articulating standards for public interest components of news curation algorithms and content moderation guidelines, in relation to issues such as fake news and hate speech. Essentially, it seems reasonable for a social media-focused self-regulatory body to develop and facilitate the adoption of a code of conduct not unlike the code of conduct developed by the MRC.

And, in connection with these standards, one could similarly imagine the establishment of MRC-like auditing teams with the relevant expertise to evaluate news curation algorithms to determine whether they meet these minimum standards, and similarly capable of maintaining the necessary confidentiality of the audit details. And, just as significant methodological changes in audience measurement systems require re-accreditation from the MRC, one could imagine this self-regulatory body auditing and accrediting significant changes in news feed or trending list algorithms.

In what would appear to be an initial step in this direction, in September 2018, social media companies such as Facebook, Google, and Twitter, along with advertisers, developed and submitted to the European Commission a voluntary Code of Practice on Disinformation. According to the European Commission, this Code of Practice represents the “first time worldwide that industry agrees, on a voluntary basis, to self-regulatory standards to fight disinformation” (European Commission, 2018b). The code includes pledges by the signatories to significantly improve the scrutiny of ad placements, increase transparency of political advertisements, develop more rigorous policies related to the misuse of bots, develop tools to prioritize authentic and authoritative information, and help users identify disinformation and find diverse perspectives. Though an encouraging step, the code has been criticized for lacking meaningful commitments, measurable objectives, and compliance or enforcement tools (Stolton, 2018).

The final, and perhaps most challenging, question related to the transferability of the MRC model to social media is whether the MRC’s voluntary compliance model should apply in the context of social media; or whether the mandatory compliance model proposed by Congress in the FAIR Ratings Act of 2005 would be preferable/feasible. One shortcoming of the MRC, many would argue, is that it has no enforceable authority. That is, measurement firms are free to bring new services to the market without MRC accreditation if they so choose. The market’s embrace of MRC accreditation as an important indicator of data quality is intended to discourage such actions; and, for the most part, it appears to do so.

Within the context of social media, could we similarly assume that the presence or absence of some sort of independent “seal of approval” would sufficiently affect the behaviors of platform users, content providers, and advertisers to compel participation in the accreditation process and adherence to the code of conduct by these platforms? If not, this process would need to overlap into the policy-making realm to a greater extent than has been the case thus far in audience measurement.

Given the exponentially greater size, resources, range of stakeholders, and, to put it bluntly, power vis-à-vis their stakeholders, that many social media platforms have in comparison to audience measurement firms, it seems far less likely that an oversight authority premised on voluntary compliance could meaningfully impact platforms’ behaviors. For this reason, perhaps the best path forward is some variation on the FAIR Ratings Act that establishes an independent self-regulatory authority that can compel mandatory compliance from social media platforms in terms of at least some of the potential areas of authority laid out above.


Needless to say, there are a range of concerns that arise around the prospect of any kind of formal self-regulatory model for social media. One concern (frequently expressed within the context of audience measurement as well) is that any kind of mandatory certification model could discourage competition and innovation. The flip side of this argument is that the social media industry is already lacking in competition, and that the process by which dominant social media platforms like Facebook either adopt the innovations of emergent firms, or purchase those firms outright to absorb their innovations and enhance their own market share, does not represent a robust innovation environment in need of preservation (see, e.g., Foer, 2017; Griffith, 2017; Manjoo, 2017). As media researcher Gale Metzger said when testifying before Congress in favor of the FAIR Ratings Act, “It would be difficult to have less competition or less innovation than we have now.” Metzger was referencing Nielsen’s dominant market position, and the perceived lack of innovation arising from that dominance. Others argue that product certification from a self-regulatory body can actually have a range of competition-enhancing effects, including reducing barriers to entry by making it possible for new marketplace participants to “demonstrate the value of their products quickly, objectively, and convincingly” (Howe and Badger, 1982).

One could also argue that the various actions that platforms such as Facebook and Twitter have taken to counter problems such as fake news, hate speech, and political manipulation (actions taken largely in response to external pressures), and the various systems they have developed to support these actions, represent substantial innovations in their own right. Thus, a self-regulatory apparatus that compels further responsiveness to these and other issues could spur further innovation. From this standpoint, the innovation argument may not be as lopsided against a strong self-regulatory authority as is often assumed.

Many of these arguments touch on the broader antitrust implications of any type of industry self-regulation. As was noted above, the MRC cited antitrust concerns as one of its reasons for opposing the FAIR Ratings Act. Given that a specific structure hasn’t been proposed here, it’s difficult to flesh out the antitrust issue in further detail. However, it is important to recognize that the key antitrust challenge for non-profit certification organizations along the lines being considered here is the potential conflicts of interest among certification board members (a concern that has arisen within the context of audience measurement) leading to voting behaviors that impede the development of new competitors [12]. Given the range and diversity of stakeholder groups that would likely be represented on any hypothetical Social Media Council, the likelihood of any such council acting in a coordinated way to stifle competition seems remote.




This article is intended primarily as a conversation-starter for the question of how a self-regulatory apparatus for social media platforms might be constructed. The Media Rating Council and its oversight of the audience measurement industry have been used as the starting point for this conversation, given the substantial similarities that have been demonstrated between audience measurement and social media. As this article has illustrated, there are some aspects of the self-regulatory model for audience measurement that seem directly transferable to the social media context (third party audits/certification; the development of a code of conduct), but others that do not (a client-focused composition of the oversight council). Future research should provide a broader, more wide-ranging assessment of self-regulatory models across other industry sectors in order to identify additional elements that may prove useful in constructing a self-regulatory framework for social media.


About the authors

Philip M. Napoli is the James R. Shepley Professor of Public Policy in the Sanford School of Public Policy at Duke University, where he is also a Faculty Affiliate with the DeWitt Wallace Center for Media & Democracy. From 2017 through 2019 he was an Andrew Carnegie Fellow.
Direct comments to: philip [dot] napoli [at] duke [dot] edu

Anne Napoli is a media and telecommunications attorney who has held senior positions at the U.S. Federal Communications Commission and Verizon.
E-mail: anapoli12 [at] gmail [dot] com



This research was conducted with the support of an Andrew Carnegie Fellowship from the Carnegie Corporation of New York. The opinions contained herein reflect only those of the author and not the Carnegie Corporation or its representatives.



1. European Commission, 2018a, p. 6.

2. For a discussion of monopoly in audience measurement, see Furchtgott-Roth, et al., 2007. See also Metzger, 2005.

3. Social TV analytics refers to the assessment of television program performance in terms of the volume and valence of social media activity that they generate.

4. For a more detailed discussion, see Furchtgott-Roth, et al., 2007.

5. See, e.g., Burns, 2005. As Senator Conrad Burns noted in his opening statement in a 2005 congressional hearing on television audience measurement, “[R]ating systems have extraordinary cultural, social, and economic implications.”

6. For similar arguments in relation to search engine algorithms, see Bracha, 2014.

7. For more historical detail, see Napoli, 2005. See also U.S. Congress. House. Committee on Interstate and Foreign Commerce. Special Subcommittee on Investigations, 1966.

8. According to the report, “It is highly doubtful that Government regulation of the operation of rating services, at this time, at least, is likely to be more effective than a well-administered program of industry self-regulation. Furthermore, there is not in existence at present any Federal agency which is discharging functions closely related to those performed under the program of self-regulation. The enactment of legislation providing for such regulation would not appear to be in the public interest at this time.” U.S. Congress. House. Committee on Interstate and Foreign Commerce. Special Subcommittee on Investigations, 1966, p. 19.

9. Goldberg, 1989, p. 27.

10. Yarowsky, 2006, p. 4.

11. For more details on recent efforts to bring greater social responsibility to algorithmic decision-making, see Rieke, et al., 2018.

12. See Howe and Badger, 1982 for a detailed discussion of the competition-enhancing and undermining potential of non-profit certification organizations.



Jill Aitoro, 2018. “Feds, allow social media to self-regulate,” Federal Times (27 February), at https://www.federaltimes.com/opinions/2018/02/27/feds-allow-social-media-to-self-regulate/, accessed 10 May 2019.

N. Anand and Richard A. Peterson, 2000. “When market information constitutes fields: Sensemaking of markets in the commercial music industry,” Organization Science, volume 11, number 3, pp. 270–284.
doi: https://doi.org/10.1287/orsc., accessed 15 November 2019.

Mike Ananny and Kate Crawford, 2018. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability,” New Media & Society, volume 20, number 3, pp. 973–989.
doi: https://doi.org/10.1177/1461444816676645, accessed 15 November 2019.

Julie Angwin and Hannes Grassegger, 2017. “Facebook’s secret censorship rules protect white men from hate speech but not black children,” ProPublica (28 June), at https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-algorithms, accessed 13 May 2019.

Jane Bambauer, 2014. “Is data speech?” Stanford Law Review, volume 66, number 1, pp. 57–120, and at http://www.stanfordlawreview.org/wp-content/uploads/sites/3/2014/01/66_Stan._L_Rev_57_Bambauer.pdf, accessed 13 May 2019.

Thomas O. Barnett, 2008. “Letter to Jonathan R. Yarowsky, Patton Boggs, L.L.P.,” U.S. Department of Justice (11 April), at https://www.justice.gov/atr/response-media-rating-councils-request-business-review-letter/, accessed 14 May 2019.

Tim Baysinger, 2017. “How the Media Rating Council became digital media’s seal of approval,” Digiday (8 June), at https://digiday.com/marketing/media-ratings-council-became-digital-medias-seal-approval/, accessed 13 May 2019.

Stuart M. Benjamin, 2013. “Algorithms and speech,” University of Pennsylvania Law Review, volume 161, number 6, pp. 1,445–1,494, and at https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=1020&context=penn_law_review, accessed 13 May 2019.

Jérôme Bourdon and Cécile Méadel, 2015. “Ratings as politics: Television audience measurement and the state,” International Journal of Communication, volume 9, pp. 2,243–2,262, and at https://ijoc.org/index.php/ijoc/article/view/3342/1427, accessed 10 May 2019.

Oren Bracha, 2014. “The folklore of informationalism: The case of search engine speech,” Fordham Law Review, volume 82, number 4, pp. 1,629–1,687, and at https://ir.lawnet.fordham.edu/flr/vol82/iss4/2/, accessed 10 May 2019.

Samantha Bradshaw and Philip N. Howard, 2018. “Challenging truth and trust: A global inventory of organized social media manipulation,” Oxford Computational Propaganda Project, Working Paper 2018.1, at https://comprop.oii.ox.ac.uk/research/cybertroops2018/, accessed 13 May 2019.

Ben Brody and Daniel Stoller, 2019. “Facebook acquisitions probed by FTC in broad antitrust inquiry,” Bloomberg (1 August), at https://www.bloomberg.com/news/articles/2019-08-01/facebook-acquisitions-probed-by-ftc-in-broad-antitrust-inquiry, accessed 5 November 2019.

Conrad Burns, 2005. “Opening statement on S. 1372, The Fair Ratings Act,” hearing before the Committee on Commerce, Science, and Transportation, U.S. Senate, 109th Congress, 1st Session (27 July), at https://www.govinfo.gov/content/pkg/CHRG-109shrg65216/html/CHRG-109shrg65216.htm, accessed 15 November 2019.

Angela J. Campbell, 1999. “Self-regulation of the media,” Federal Communications Law Journal, volume 51, number 3, pp. 711–772, and at https://www.repository.law.indiana.edu/fclj/vol51/iss3/11/, accessed 10 May 2019.

Catherine Clifford, 2018. “Elon Musk says regulate social media: ‘We can’t have willy-nilly proliferation of fake news, that’s crazy’,” CNBC (11 April), at https://www.cnbc.com/2018/04/11/elon-musk-wants-social-media-to-be-regulated-for-fake-news.html, accessed 10 May 2019.

Nicholas Confessore, Gabriel J.X. Dance, Richard Harris, and Mark Hansen, 2018. “The follower factory,” New York Times (27 January), at https://www.nytimes.com/interactive/2018/01/27/technology/social-media-bots.html, accessed 6 June 2019.

Robert Corn-Revere, 1988. “Regulation by raised eyebrow,” Student Lawyer, volume 16, number 6, pp. 26–29.

Kathy Crawford, 2005. “Statement on S. 1372, The Fair Ratings Act,” hearing before the Committee on Commerce, Science, and Transportation, U.S. Senate, 109th Congress, 1st Session (27 July), at https://www.govinfo.gov/content/pkg/CHRG-109shrg65216/html/CHRG-109shrg65216.htm, accessed 15 November 2019.

Nicholas Diakopoulos, 2013. “Algorithmic accountability reporting: On the investigation of black boxes,” Tow Center for Digital Journalism, Columbia University (10 July), at https://academiccommons.columbia.edu/doi/10.7916/D8ZK5TW2, accessed 13 May 2019.

Economist, 2018. “Facebook faces a reputational meltdown” (22 March), at https://www.economist.com/leaders/2018/03/22/facebook-faces-a-reputational-meltdown, accessed 13 May 2019.

Mustafa M. El-Bermawy, 2016. “Your filter bubble is destroying democracy,” Wired (18 November), at https://www.wired.com/2016/11/filter-bubble-destroying-democracy/, accessed 15 November 2019.

European Commission, 2018a. “Final report of the Independent High Level Group on Fake News and Disinformation” (12 March), at https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation, accessed 10 May 2019.

European Commission, 2018b. “Code of practice on disinformation” (26 September), at https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation, accessed 10 May 2019.

FAIR Act, 2005. “Fairness, Accuracy, Inclusiveness, and Responsiveness in Ratings Act of 2005,” S. 1372, 109th U.S. Congress, 1st Session, at https://www.congress.gov/109/bills/s1372/BILLS-109s1372is.pdf, accessed 14 May 2016.

Harold Feld, 2019. “The case for the Digital Platform Act: Breakups, the starfish problem, & tech regulation,” Roosevelt Institute and Public Knowledge, at https://www.digitalplatformact.com, accessed 15 November 2019.

Gail Ferguson, 2014. “How Facebook and Twitter control what you see about Ferguson,” Washington Post (19 August), at https://www.washingtonpost.com/news/morning-mix/wp/2014/08/19/how-facebook-and-twitter-control-what-you-see-about-ferguson/?utm_term=.1ca878f6ed44, accessed 15 November 2019.

Franklin Foer, 2017. World without mind: The existential threat of big tech. New York: Penguin Press.

Harold Furchtgott-Roth, Robert W. Hahn, and Anne Layne-Farrar, 2007. “The law and economics of regulating ratings firms,” Journal of Competition Law & Economics, volume 3, number 1, pp. 49–96.
doi: https://doi.org/10.1093/joclec/nhl015, accessed 15 November 2019.

Tarleton Gillespie, 2011. “Can an algorithm be wrong? Twitter Trends, the specter of censorship, and our faith in the algorithms around us,” Social Media Collective (19 October), at https://socialmediacollective.org/2011/10/19/can-an-algorithm-be-wrong/, accessed 13 May 2019.

David Goetzl, 2013. “Confirmed: Nielsen is a monopoly — but court OK with it,” MediaPost (6 March), at https://www.mediapost.com/publications/article/195168/confirmed-nielsen-is-a-monopoly-but-court-ok-w.html, accessed 13 May 2019.

Melvin A. Goldberg, 1989. “Broadcast ratings and ethics,” Review of Business, volume 11, number 1, pp. 19–20, 27.

Asaf Greiner, 2017. “Invasion of the ad fraud super bots,” Forbes (30 November), at https://www.forbes.com/sites/forbestechcouncil/2017/11/30/invasion-of-the-ad-fraud-super-bots/#24d7d4a07996, accessed 13 May 2019.

Erin Griffith, 2017. “Will Facebook kill all future Facebooks?” Wired (25 October), at https://www.wired.com/story/facebooks-aggressive-moves-on-startups-threaten-innovation/, accessed 13 May 2019.

Brent Harris, 2019. “Establishing structure and governance for an independent oversight board,” Facebook Newsroom (17 September), at https://newsroom.fb.com/news/2019/09/oversight-board-structure/, accessed 5 November 2019.

Alex Hern, 2018. “YouTube to crack down on fake news, backing ‘authoritative’ sources,” Guardian (9 July), at https://www.theguardian.com/technology/2018/jul/09/youtube-fake-news-changes, accessed 13 May 2019.

Alex Hern, 2017. “Facebook and Twitter are being used to manipulate public opinion — Report,” Guardian (19 June), at https://www.theguardian.com/technology/2017/jun/19/social-media-proganda-manipulating-public-opinion-bots-accounts-facebook-twitter, accessed 14 May 2019.

Howard Homonoff, 2015. “Nielsen, ComScore, and Rentrak: Keeping score in media measurement,” Forbes (5 October), at https://www.forbes.com/sites/howardhomonoff/2015/10/05/nielsen-comscore-and-rentrak-keeping-score-in-media-measurement/#325db6eb21d9, accessed 13 May 2019.

Jonathan T. Howe and Leland J. Badger, 1982. “The antitrust challenge to non-profit certification organizations: Conflicts of interest and a practical rule of reason approach to certification programs as industry-wide builders of competition and efficiency,” Washington University Law Quarterly, volume 60, number 2, pp. 357–391, and at https://openscholarship.wustl.edu/law_lawreview/vol60/iss2/5, accessed 14 May 2019.

David Ingram, 2018. “Exclusive: Facebook to put 1.5 billion users out of reach of new EU privacy law,” Reuters (18 April), at https://www.reuters.com/article/us-facebook-privacy-eu-exclusive/exclusive-facebook-to-put-1-5-billion-users-out-of-reach-of-new-eu-privacy-law-idUSKBN1HQ00P, accessed 13 May 2019.

Matthew Ingram, 2018. “New data casts doubt on Facebook’s commitment to quality news,” Columbia Journalism Review (7 May), at https://www.cjr.org/the_new_gatekeepers/facebook-algorithm-quality-news.php, accessed 13 May 2019.

George Ivie, 2005. “Statement on S. 1372, The Fair Ratings Act,” hearing before the Committee on Commerce, Science, and Transportation, U.S. Senate, 109th Congress, 1st Session (27 July), at https://www.govinfo.gov/content/pkg/CHRG-109shrg65216/html/CHRG-109shrg65216.htm, accessed 15 November 2019.

Jon Lafayette, 2018. “Taking the measure of audience measurement,” Broadcasting & Cable (16 March), at http://www.broadcastingcable.com/news/currency/taking-measure-audience-measurement/169879, accessed 13 May 2019.

Alex Magnin, 2016. “Network effects and the new physics of digital media,” Observer (13 May), at http://observer.com/2016/05/network-effects-and-the-new-physics-of-digital-media/, accessed 13 May 2019.

Farhad Manjoo, 2017. “Why Facebook keeps beating every rival: It’s the network, of course,” New York Times (19 April), at https://www.nytimes.com/2017/04/19/technology/facebook-snapchat-instagram-innovation.html, accessed 13 May 2019.

Roger McNamee, 2018. “Why not regulate social media like tobacco or alcohol?” Guardian (29 January), at https://www.theguardian.com/media/2018/jan/29/social-media-tobacco-facebook-google, accessed 10 May 2019.

Media Rating Council, n.d., a. “History and mission of the MRC,” at http://mediaratingcouncil.org/History.htm, accessed 14 May 2019.

Media Rating Council, n.d., b. “2019 membership,” at http://mediaratingcouncil.org/Member%20Companies.htm, accessed 14 May 2019.

Media Rating Council, 2011. “Minimum standards for media rating research” (December), at http://mediaratingcouncil.org/MRC%20Minimum%20Standards%20-%20December%202011.pdf, p. 5.

Gale Metzger, 2005. “Statement on S. 1372, The Fair Ratings Act,” hearing before the Committee on Commerce, Science, and Transportation, U.S. Senate, 109th Congress, 1st Session (27 July), at https://www.govinfo.gov/content/pkg/CHRG-109shrg65216/html/CHRG-109shrg65216.htm, accessed 15 November 2019.

Alex Moazed and Nicholas L. Johnson, 2016. Modern monopolies: What it takes to dominate the 21st century economy. New York: St. Martin’s Press.

Adam Mosseri, 2018. “Bringing people closer together,” Facebook (11 January), at https://newsroom.fb.com/news/2018/01/news-feed-fyi-bringing-people-closer-together/, accessed 13 May 2019.

Patrick J. Mullen, 2005. “Statement on S. 1372, The Fair Ratings Act,” hearing before the Committee on Commerce, Science, and Transportation, U.S. Senate, 109th Congress, 1st Session (27 July), at https://www.govinfo.gov/content/pkg/CHRG-109shrg65216/html/CHRG-109shrg65216.htm, accessed 15 November 2019.

Philip M. Napoli, 2019. Social media and the public interest: Media regulation in the disinformation age. New York: Columbia University Press.

Philip M. Napoli, 2014. “The institutionally effective audience in flux: Social media and the reassessment of the audience commodity,” In: Lee McGuigan and Vincent Manzerolle (editors). The audience commodity in a digital age: Revisiting a critical theory of commercial media. New York: Peter Lang, pp. 115–133.

Philip M. Napoli, 2011. Audience evolution: New technologies and the transformation of media audiences. New York: Columbia University Press.

Philip M. Napoli, 2009. “Audience measurement, the diversity principle, and the First Amendment right to construct the audience,” St. John’s Journal of Legal Commentary, volume 24, number 2, pp. 359–385.

Philip M. Napoli, 2005. “Audience measurement and media policy: Audience economics, the diversity principle, and the local people meter,” Communication Law and Policy, volume 10, number 4, pp. 349–382.
doi: https://doi.org/10.1207/s15326926clp1004_1, accessed 15 November 2019.

Jack Nicas, 2017. “YouTube tweaks search results as Las Vegas conspiracy theories rise to the top,” Wall Street Journal (5 October), at https://www.wsj.com/articles/youtube-tweaks-its-search-results-after-rise-of-las-vegas-conspiracy-theories-1507219180, accessed 13 May 2019.

Cathy O’Neil, 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown.

Will Oremus, 2018. “The great Facebook crash,” Slate (27 June), at https://slate.com/technology/2018/06/facebooks-retreat-from-the-news-has-painful-for-publishers-including-slate.html, accessed 13 May 2019.

Donie O’Sullivan, 2019. “Senator calls on Facebook and Google to ban political ad targeting,” CNN (14 August), at https://www.cnn.com/2019/08/14/politics/facebook-google-ads-ron-wyden/index.html, accessed 6 November 2019.

Oversight Board Charter, 2019. “Facebook Newsroom” (September), at https://fbnewsroomus.files.wordpress.com/2019/09/oversight_board_charter.pdf, accessed 6 November 2019.

Partnership on AI, n.d. “About us,” at https://www.partnershiponai.org/about/, accessed 13 May 2019.

Frank Pasquale, 2015. The black box society: The secret algorithms that control money and information. Cambridge, Mass.: Harvard University Press.

Nushin Rashidian, Pete Brown, and Elizabeth Hansen, 2018. “Friend and foe: The platform press at the heart of journalism,” Tow Center for Digital Journalism, Columbia University (14 June), at https://www.cjr.org/tow_center_reports/the-platform-press-at-the-heart-of-journalism.php/, accessed 13 May 2019.

Aaron Rieke, Miranda Bogen, and David G. Robinson, 2018. “Public scrutiny of automated decisions: Early lessons and emerging methods,” Omidyar Network (27 February), at https://www.omidyar.com/insights/public-scrutiny-automated-decisions-early-lessons-and-emerging-methods, accessed 13 May 2019.

Chris Mills Rodrigo, 2019. “Senate bill takes aim at ‘secret’ online algorithms,” The Hill (31 October), at https://thehill.com/policy/technology/468385-bipartisan-senators-release-online-platform-algorithm-transparency-bill, accessed 5 November 2019.

Marisol Sandoval, 2014. From corporate to social media: Critical perspectives on corporate social responsibility in media and communications industries. New York: Routledge.
doi: https://doi.org/10.4324/9781315858210, accessed 15 November 2019.

Katharine Schwab, 2018. “This logo is like an ‘organic’ sticker for algorithms,” Fast Company (18 May), at https://www.fastcompany.com/90172734/this-logo-is-like-an-organic-sticker-for-algorithms-that-arent-evil, accessed 13 May 2019.

Hamza Shaban, 2018. “YouTube’s new attempt to limit propaganda draws fire from PBS,” Washington Post (3 February), at https://www.washingtonpost.com/news/the-switch/wp/2018/02/03/youtubes-new-attempt-to-limit-propaganda-draws-fire-from-pbs/?utm_term=.b8774ff1f990, accessed 13 May 2019.

Annys Shin, 2005. “Former Newsday workers arrested on fraud charges,” Washington Post (16 June), at https://www.washingtonpost.com/archive/business/2005/06/16/former-newsday-workers-arrested-on-fraud-charges/6eee0324-3bab-4a92-817d-3d757abfba13/?utm_term=.88439a12f70a, accessed 13 May 2019.

Samuel Stolton, 2018. “Disinformation crackdown: Tech giants commit to EU code of practice,” EURACTIV.com (26 September), at https://www.euractiv.com/section/digital/news/disinformation-crackdown-tech-giants-commit-to-eu-code-of-practice/, accessed 6 June 2019.

Dan Jerker B. Svantesson and William van Caenegem, 2017. “Is it time for an offense of dishonest algorithmic manipulation for electoral gain?” Alternative Law Journal, volume 42, number 3, pp. 184–189.
doi: https://doi.org/10.1177/1037969X17730192, accessed 15 November 2019.

Harsh Taneja, 2013. “Audience measurement and media fragmentation: Revisiting the monopoly question,” Journal of Media Economics, volume 26, number 4, pp. 203–219.
doi: https://doi.org/10.1080/08997764.2013.842919, accessed 15 November 2019.

Jonathan Taplin, 2017. Move fast and break things: How Facebook, Google, and Amazon cornered culture and undermined democracy. New York: Little, Brown.

Zeynep Tufekci, 2018. “YouTube, the great radicalizer,” New York Times (10 March), at https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html, accessed 13 May 2019.

Andrew Tutt, 2017. “An FDA for algorithms,” Administrative Law Review, volume 69, number 1, pp. 83–123, and at http://www.administrativelawreview.org/wp-content/uploads/2019/09/69-1-Andrew-Tutt.pdf, accessed 13 May 2019.

U.K. House of Commons, Digital, Culture, Media and Sport Committee, 2018. “Disinformation and ‘fake news’: Interim report” (29 July), at https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/363/36302.htm, accessed 10 May 2019.

U.S. Congress. House. Committee on Interstate and Foreign Commerce. Special Subcommittee on Investigations, 1966. Broadcast ratings: Report of the Committee on Interstate and Foreign Commerce, pursuant to section 136 of the Legislative Reorganization Act of 1946, Public Law 601, 79th Congress, and House Resolution 35, 89th Congress, Special Subcommittee on Investigations. Washington, D.C.: U.S. Government Printing Office.

U.S. Congress. House. Judiciary Committee, 2018a. “Hearing on filtering practices of social media platforms” (26 April), at https://www.youtube.com/watch?v=751DJU6IjeA, accessed 10 May 2019.

U.S. Congress. House. Judiciary Committee, 2018b. “Hearing on Facebook, Google, and Twitter: Examining the content filtering practices of social media giants” (17 July), at https://www.c-span.org/video/?448566-1/house-judiciary-committee-examines-social-media-filtering-practices, accessed 10 May 2019.

U.S. Congress. Senate. Committee on Commerce, Science, & Transportation, 2019. “Hearing on mass violence, extremism, and digital responsibility” (18 September), at https://www.commerce.senate.gov/2019/9/mass-violence-extremism-and-digital-responsibility, accessed 5 November 2019.

U.S. Congress. Senate. Committee on the Judiciary, 2018. “Hearing on Cambridge Analytica and the future of data privacy” (16 May), at https://www.judiciary.senate.gov/meetings/cambridge-analytica-and-the-future-of-data-privacy, accessed 10 May 2019.

U.S. Congress. Senate. Select Committee on Intelligence, 2017. “Hearing on social media influence in the 2016 elections” (1 November), at https://www.intelligence.senate.gov/hearings/open-hearing-social-media-influence-2016-us-elections, accessed 10 May 2019.

Sandra Upson, 2018. “What hearings? Advertisers still love Facebook,” Wired (13 April), at https://www.wired.com/story/what-hearings-advertisers-still-love-facebook/, accessed 13 May 2019.

Siva Vaidhyanathan, 2018. Antisocial media: How Facebook disconnects us and undermines democracy. New York: Oxford University Press.

Kurt Wagner and Rani Molla, 2018. “Facebook is not getting any bigger in the United States,” Vox (1 March), at https://www.vox.com/2018/3/1/17063208/facebook-us-growth-pew-research-users, accessed 13 May 2019.

Mark R. Warner, 2018. “Potential policy proposals for regulation of social media and technology firms,” White paper (draft, 20 August), at https://www.ftc.gov/system/files/documents/public_comments/2018/08/ftc-2018-0048-d-0104-155263.pdf, accessed 6 June 2019.

James G. Webster, 2010. “User information regimes: How social media shape patterns of consumption,” Northwestern University Law Review, volume 104, number 2, pp. 593–612, and at https://webster.soc.northwestern.edu/pubs/Webster%20(2010)%20User%20Information%20Regimes.pdf, accessed 15 November 2019.

James G. Webster, Patricia F. Phalen, and Lawrence W. Lichty, 2014. Ratings analysis: Audience measurement and analytics. Fourth edition. New York: Routledge.

Jason Weixelbaum, 2018. “Why it’s time to regulate social media companies like Facebook,” Washington Post (29 March), at https://www.washingtonpost.com/news/made-by-history/wp/2018/03/29/why-its-time-to-regulate-social-media-companies-like-facebook/?utm_term=.48f6b93b4ba7, accessed 10 May 2019.

Susan D. Whiting, 2005. “Statement on S. 1372, The Fair Ratings Act,” hearing before the Committee on Commerce, Science, and Transportation, U.S. Senate, 109th Congress, 1st Session (27 July), at https://www.govinfo.gov/content/pkg/CHRG-109shrg65216/html/CHRG-109shrg65216.htm, accessed 15 November 2019.

Nat Worden, 2011. “Nielsen’s post-IPO challenge: Preserving ratings monopoly,” Wall Street Journal (25 January), at https://www.wsj.com/articles/SB10001424052748704698004576104103397970050, accessed 15 November 2019.

Tim Wu, 2013. “Machine speech,” University of Pennsylvania Law Review, volume 161, number 6, pp. 1,495–1,533, and at https://scholarship.law.upenn.edu/penn_law_review/vol161/iss6/2/, accessed 15 November 2019.

Tim Wu, 2010. “In the grip of the new monopolists,” Wall Street Journal (13 November), at https://www.wsj.com/articles/SB10001424052748704635704575604993311538482, accessed 13 May 2019.

Jonathan R. Yarowsky, 2006. “Letter to the honorable Thomas A. Barnett, Assistant Attorney General, Antitrust Division, Department of Justice” (2 November), at https://www.justice.gov/sites/default/files/atr/legacy/2014/01/08/302132.pdf, accessed 13 May 2019.

Asta Zelenkauskaite, 2017. “Remediation, convergence, and big data: Conceptual limits of cross-platform social media,” Convergence, volume 23, number 5, pp. 512–527.
doi: https://doi.org/10.1177/1354856516631519, accessed 15 November 2019.


Editorial history

Received 7 June 2019; revised 6 November 2019; accepted 7 November 2019.

Copyright © 2019, Philip M. Napoli and Anne Napoli. All Rights Reserved.

What social media platforms can learn from audience measurement: Lessons in the self-regulation of “black boxes”
by Philip M. Napoli and Anne Napoli.
First Monday, Volume 24, Number 12 - 2 December 2019
doi: http://dx.doi.org/10.5210/fm.v24i12.10124

A Great Cities Initiative of the University of Illinois at Chicago University Library.

© First Monday, 1995-2020. ISSN 1396-0466.