Censorship is f̶u̶t̶i̶l̶e̶ possible but difficult: A study in algorithmic ethnography
by Paul A. Watters



Abstract
Discourse around censorship tends to be sensationalised in many quarters. Nabi (2014), for example, recently sought to “prove ... the futility” of government censorship programmes by appealing to the Streisand Effect (Greenberg, 2007). While most countries have imperfect censorship regimes, the sovereign right of nations to make their own laws must be recognised, including (but not limited to) laws protecting children and the victims of child exploitation, gambling addicts, and Internet banking users whose systems may be infected by malicious software, resulting in financial losses. The broader question is: under what circumstances is censorship justified, and how can it best be achieved? In this paper, we present the results of a study that illustrates the overwhelming harms to users that emerge from an unregulated Internet regime: 89 percent of the ads delivered to Canadian users across 5,000 URLs on rogue sites hosting the most complained-about movies and TV shows were classified as “high risk”. We conclude that more granular policies on what should be censored, and better tools to enforce those policies, are needed, rather than accepting that censorship is impossible.

Contents

Introduction
Methods
Results
Discussion and conclusion

 


 

Introduction

Censorship is a legitimate government function that stems from de jure, or legal, sovereignty. In short, a recognised government may make such laws as it sees fit, including laws to protect its citizens and provide security (Briggs, 1939). What makes a government recognised as legitimate, as opposed to merely exercising de facto sovereignty, is a matter of some debate; but the fundamental purpose of de jure states is to protect their citizens. A state that fails to protect its citizens is a failed state (Rotberg, 2002).

A number of challenges to security and sovereignty emerged during the twentieth century: the establishment of the United Nations, for example, reflected a growing consensus that all persons in the world have the right to have their basic security needs met, whether by the state or not (Axworthy, 2001). Canada was one of the first countries to champion this notion of “human security” (Axworthy, 1997). Perhaps most significantly, the Internet has posed a direct challenge to national sovereignty, since enforcement of national laws becomes problematic given the volume and scale of cross-border interactions (Perritt, 1998). The key legal challenge for the Internet is that its governance structure is mostly self-regulating and decentralised; private corporations like ICANN, the body tasked with assigning and managing Internet domain names, often exercise more power over Internet functions than sovereign governments, which inevitably leads to conflicts involving law enforcement (Watters, et al., 2013).

Debates around the exercise of sovereign powers often become conflated with political views about who controls the Internet, and whether censorship is desirable (Deibert, 2009). The latest example of this view comes from Nabi (2014), who argues that since censorship can have the effect of popularising forbidden content, it is futile and should be abandoned. In logical terms, the inference appears to be that because a task is technically difficult and/or draws attention to the activity being suppressed, it should be abandoned. Nabi cites the typical scenario of “authoritarian” regimes blocking social media as an example of censorship by states. Two states are mentioned — Turkey and Pakistan — which are both democracies, hence perhaps the careful wording around “authoritarian” rather than (say) “non-democratic”. Neither state meets the key criterion for one definition of an authoritarian state, namely the “indefinite political tenure” of its government (Vestal, 1999). Yet the blocking of entire user-generated content sites such as YouTube in Pakistan (Khattak, et al., 2014) is not consistent with democratic principles. A broader definition of authoritarianism advanced in the literature is strict adherence to authority and obedience, and, in the case of apparent or illiberal democracies (Altinay, 2014), the use of stealth authoritarian tactics (Varol, forthcoming).

Most sovereign nations have censorship regimes in place, with a view to protecting their citizens from harm (Garland, 1996). In many cases, there is broad international agreement on these harms, and a common desire to prevent them and to prosecute those responsible for breaking the law. A relevant example is the global fight against Child Exploitation Material (CEM; Taylor and Quayle, 2003). The production of CEM involves children being used as “actors” in highly explicit pornographic films and photographs, which are then distributed and sold to a large and growing consumer base globally (Rimm, 1994). Apart from the primary victimisation which occurs during the filmed sexual assaults, secondary victimisation (Mitchell and Wells, 2007) occurs whenever the films or images are viewed and/or resold. In the most famous case involving secondary victimisation, “Amy” — the child exploited during the creation of the “Misty Series” — has sought restitution in 350 cases that involved her photos being used [1]. Every time a new case is heard, she receives a notification.

In many countries, laws protecting children from being abused in this way are covered by the censorship acts. In New Zealand, for example, the relevant law is the Films, Videos, and Publications Classification Amendment Act 2005 [2], which amended the Films, Videos, and Publications Classification Act 1993 [3]. The Act requires that all publications (including films and pictures distributed over the Internet) be categorised as follows:

  • Unrestricted (pursuant to Sec 23 (a))
  • Objectionable (pursuant to Sec 23 (b))
  • Objectionable, but available only to those over the age of 18 (pursuant to Sec 23 (c)(1)), to other classes of persons (pursuant to Sec 23 (c)(2)), or for one or more specified purposes (pursuant to Sec 23 (c)(3))

All objectionable content under the secondary category is illegal, and includes materials that depict “sexual conduct with or by children, or young persons, or both” or that “exploits the nudity of children, young persons, or both” [4].

If we take Nabi’s (2014) argument that censorship on the Internet is (a) difficult to enforce and (b) draws attention to that which a government is trying to censor, how should we deal with material that presents clear harms to users (not to mention victims)? My argument is that problem (b) can be effectively dealt with by solving problem (a): more people may become interested in CEM because they become aware of it through censorship, but if effective controls are in place, they will still not be able to gain access (Prichard, et al., 2011).

Nabi’s (2014) argument is that censorship does not work because it has side effects, and that because of these side effects, censorship enforcers should reconsider their position. Taken together with the observation that censorship inadvertently affects harmless content, as exposed by a recent BBC documentary [5], a naive and flawed deduction would be that one should not bother with censorship in the first place and should leave the online audience at the mercy of harmful advertisements; I reject this conclusion. It is possible for governments and other bodies — ultimately reflecting community standards and expectations — to put in place effective controls to ensure that content which deserves to be censored is, while harmless material remains freely available to all. Indeed, the current arrangement, in which ICANN — a private corporation — retains complete control over Internet naming (Watters, et al., 2013), should be made more broadly based and inclusive.

The two broad categories of tools available to governments are policy and technical controls. To make censorship effective, both are needed, along with a strong international commitment to the common human rights and expectations of security outlined by Axworthy (1997). Simply because technical controls have not yet been conceived does not make them impossible, especially when there is a policy imperative of this magnitude. At the same time, citizens should not blindly trust the government to “do the right thing”, since many previous experiments in censorship have at times been ineffective, have been misused, and have lacked procedural safeguards (Cook, 1977). Corporations subject to arbitrary censorship and state control may also withdraw from markets (Helft and Barboza, 2010). At the extreme end of the scale, countries like China have simply blocked entire sites en masse (Bamman, et al., 2012), forcing legitimate political discourse further underground.

In this paper, we present a case study to explore a simple method to quantify the harms that users might experience online — in a security context, risk is always a function of likelihood (prevalence) and impact. Measuring harm has become a key exercise in crime prevention, especially in being able to determine the Return on Investment (RoI) for policing (McFadden, 2006). While being a victim of CEM is at the extreme end of the impact scale, we are also concerned with high likelihood but lower impact events, since they will pose significant risks to a broader set of the population. We name this technique a type of algorithmic ethnography, in recognition of the role that algorithms play in determining which material is displayed to users online (Anderson, 2013).

In this case study, we aim to determine the prevalence of “high risk” advertising on piracy Web sites, which presents harms to users: these harms include exposure to pornography, gambling, scams and malware (Alazab, et al., 2010). As with CEM, laws are in place in many countries to censor the distribution of material on piracy Web sites. In New Zealand, for example, there is a “three strikes” policy defined in the Copyright (Infringing File Sharing) Amendment Act 2011 [6], under which a user may be required to appear before a tribunal which can impose a penalty for copyright infringement. While some critics disagree with this level of control over Internet content — even content obtained illicitly — similar arguments tend to be made that policy and technical controls are ultimately ineffective. We aim to show that one of the consequences of governments being unable (or unwilling) to effectively police the Internet is a serious risk of harm to users.
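Combining the risk framing above (risk as the product of likelihood and impact) with the harm categories just listed, a minimal sketch in Python might look like the following. The prevalence figures and impact weights are invented for illustration only and are not the study’s results.

```python
# Minimal sketch of the risk framing used in this paper: risk as the
# product of likelihood (prevalence) and impact. All numbers below are
# hypothetical and are used for illustration only.

AD_CATEGORIES = {
    # category: (prevalence among observed ads, assumed impact weight 0-1)
    "malware":  (0.40, 0.9),
    "sex":      (0.30, 0.5),
    "scams":    (0.20, 0.8),
    "gambling": (0.05, 0.6),
}

def risk_score(prevalence: float, impact: float) -> float:
    """Risk = likelihood x impact."""
    return prevalence * impact

if __name__ == "__main__":
    for category, (prevalence, impact) in AD_CATEGORIES.items():
        print(f"{category:10s} risk = {risk_score(prevalence, impact):.3f}")
```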

 

++++++++++

Methods

A method was devised to measure the prevalence of “high risk” advertising on piracy Web sites, and this method was applied to a sample of the piracy Web sites “most complained about” by Hollywood movie and TV rights holders. As mentioned above, this approach represents a type of algorithmic ethnography — the algorithms devised by advertising networks are responsible for placing ads in the most relevant “place”, such as a banner, and operate revenue models based on pay-per-click, pay-per-view, pay-per-purchase and so on. It is therefore perhaps unsurprising that ad networks tend to match advertisers who wish to promote high risk goods and services with piracy Web sites. In this study, we aim to quantify the prevalence, and therefore the risk. One can easily argue that people should not be visiting piracy Web sites in the first place, and that by visiting them they accept the risk involved (i.e., exposure to harmful ads). Yet the reality is that piracy Web sites are routinely among the most visited sites on the Internet [7], so if we are interested in measuring risk, we need to consider likelihood as much as common sense. Given the multinational organisation of such sites — hosted in one country, DNS registered in another, cash proceeds flowing to a third — simple solutions to reduce the risk, such as banning sites, are generally impractical in addressing transnational cybercrime (Watters, et al., 2013). If all users were law abiding and not drawn to the “dark side” of the Internet by the lure of free content, no one would be exposed to this material; a similar search on Netflix, for instance, would not return pornographic material or ads.

A number of studies have investigated the risks associated with online advertising. Taplin (2013) conducted a study of the “top 500” pirate sites and identified reputational risks for mainstream advertisers whose advertisements were placed on these sites. It was unclear from that study (and others) whether the advertisers were aware of how their brands were being devalued through association with piracy. Watters (2014) conducted a subsequent study which found that only one percent of the ads fell into this reputational risk category for Australian users, but that 99 percent of visible ads were high risk, since they presented direct harms to users. These harms included, for example, searches on The Pirate Bay for Disney children’s movies returning pages covered in advertising banners for pornography. Other high risk categories were identified from examining 5,000 URLs deemed to be infringing by Google (and notified to Chilling Effects, following Google’s DMCA process).

In this study, a sample of the “top 500 most complained about” URLs confirmed by Google as containing or linking to copyright infringing material was analysed for Canadian users. Why Canada? Apart from its US$3.1 billion annual spend on online advertising, it is a country which does not have a strong tradition of visible government censorship (Ryder, 1999), although in some areas, such as gay and lesbian themed material, censorship has been imposed by government (Cossman, 2014). More often, there has been a strong push for self-censorship, in areas like the dissemination of CEM [8]. However, it is also a nation undergoing regulatory changes aimed at stemming hate speech, for example through the Canadian Human Rights Act (CHRA). In some cases, these restrictions have been found to be unconstitutional, since they may conflict with section 2(b) of the Constitution Act, 1982, which guarantees freedom of expression [9]. For a fuller discussion of the issues, see Moon (2000). The question that this paper seeks to answer is: in an era of regulatory change, but with the Internet remaining fundamentally unregulated, are Canadian users being placed at risk?

As mentioned earlier, an algorithmic ethnographic approach was adopted for this study, whereby the behaviour of the algorithms responsible for placing ads on piracy Web sites (through advertising networks) was observed “in the wild”. The observational process worked as follows:

  1. The Google Transparency report was downloaded for the most recent month; this contains a list of all URLs removed from the Google index because they have been verified as being involved in piracy.
  2. Each complaint in the report comprises a set of URLs which typically refer to the same title. These complaints were loaded into a database, and complaints relating only to Hollywood movies and TV were retained.
  3. The retained records were then sorted by the number of URLs complained about in descending order.
  4. The top 500 records were then selected for processing, and the first ten URLs in each report were used to build a database of 5,000 URLs (the sample).
  5. For each URL in the sample, the HTML source of the page was downloaded, and a snapshot of the page taken.
  6. Advertising items within each page were identified by cross-checking each line of HTML against the Easy List [10], which is used by Adblock Plus to block advertisements in browsers. Note that advertising items include visible banner ads as well as JavaScript and other page elements associated with advertising.
  7. Each visible ad on every page in the sample was manually categorised as either mainstream or high risk. The advertisers were also identified.

In programmatic marketing/advertising, ad customisation is also performed on the basis of micro-segmentation, wherein two users visiting the same Web site at the same time may be served different ads based on their metadata, preferences, browsing history (logged through cookies), and so on. The data collected for this study come from a single crawl, in which all cookies are removed before the next page is downloaded. Subsequent crawls may result in a different set of ads being displayed, but given the large sample examined (5,000 URLs), large deviations from the observed distribution would be unusual.
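A minimal sketch of steps 5 and 6 above, in Python, is shown below. It assumes a local copy of the Easy List filter file (easylist.txt) and a plain-text list of the sampled URLs (urls.txt); the substring matching only approximates Adblock Plus filter semantics, and the helper names are illustrative rather than the study’s actual tooling. Because no cookie jar is used, no cookies persist between requests, mirroring the cookie clearing described above.

```python
"""Minimal sketch of the observational pipeline (steps 5-6 above).

Assumptions: urls.txt holds the sampled URLs (one per line) and
easylist.txt is a local copy of the Easy List filter file. The
substring matching below only approximates Adblock Plus semantics.
"""
import urllib.request

def load_filters(path: str) -> list:
    """Extract plain domain/path fragments from Easy List, ignoring
    comments, element-hiding rules and option suffixes."""
    fragments = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("!") or "##" in line:
                continue                       # comment or cosmetic rule
            rule = line.split("$", 1)[0]       # drop filter options
            rule = rule.lstrip("|@").rstrip("^|")
            if len(rule) > 4:                  # skip overly generic rules
                fragments.append(rule.lower())
    return fragments

def fetch(url: str) -> str:
    """Download the HTML source of a page; no cookies are kept between
    requests, mirroring the cookie clearing described in the Methods."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")

def count_ad_items(html: str, fragments: list) -> int:
    """Count lines of HTML that match any Easy List fragment."""
    hits = 0
    for line in html.splitlines():
        low = line.lower()
        if any(frag in low for frag in fragments):
            hits += 1
    return hits

if __name__ == "__main__":
    filters = load_filters("easylist.txt")
    with open("urls.txt", encoding="utf-8") as fh:
        for url in (u.strip() for u in fh if u.strip()):
            try:
                print(url, count_ad_items(fetch(url), filters))
            except Exception as exc:           # unreachable or blocked pages
                print(url, "error:", exc)
```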

 

++++++++++

Results

From the 5,000 pages analysed in the sample, a total of 12,190 advertising items and 3,025 visible ads were identified. After categorisation, it was found that 11 percent of the visible ads were mainstream and 89 percent were high risk. Table 1 shows the breakdown by category for the high risk ads: banner ads containing links to malware were the most prevalent (43.6 percent), followed by ads for the sex industry (30.0 percent) and scams (18.2 percent).

 

Table 1: Frequency by ad category — High risk ads.
             Sex      Malware   Download   Gambling   Scams
N            805      1,172     106        113        489
Percentage   30.0%    43.6%     3.9%       4.2%       18.2%

 

Table 2 contains an overall frequency analysis of advertising items. An example of a malware download comes via the highest frequency domain, cucirca.eu: a link from a rogue site to download a TV episode or movie is provided, e.g., http://www.cucirca.eu/2009/06/12/watch-24-online/ — once the user visits this site and clicks on the “play” button, a new page is loaded which contains the link http://dke.videoconverterdownloadfree.com/download/flash-hd/?ts=205&subid=20oQO13apaWR1wMv3EGDU81vrbo1000.&line_item=272727&dp=Mvw18_ckncFRp_eH5gUOutKjU8-z62Jccswiwbe-ZImJSId0h6t615IEJKrCvlKAmQfdhcHdtq6nyE7KqW9J0JUxsNbvytie2PVJFLltKX_Q9dqp36D_LyDMIarSgtDD_1F0tAsDvmw1v2yByfV_jG16RM9zwk6JJGbkwD6rJiu2M_lGy4veBRNKXihmdbac9wi56IIO2vDrhBuwoo7COq2cfsNg-gyMc205a3ig4GeVkrHm5JMNw42axUhO_LV4z11uoZR1y9mRbMuAZf6Jjxml0HUQtuDrfVLx10X1_GOI5Z24luKqlkC-AB12X67w2PN2psxrSEU7xKN89FCr-esXaev321cjhitjZEWD2_x_LdBcGANcrpYEs0hxYjolMQAF&dp2=P11887105_CR12960055_CA11966097. Upon visiting this page, a download is initiated to the user’s computer containing the file setup.exe, which is 1.3 MB in size, as the page indicates that a “video downloader” is required to view the movie or TV episode.

 

Table 2: Frequency analysis by domain — Top 10. [11]
Domain             Frequency   Percentage of items
cucirca.eu         2,492       20.4
propellerads.com   1,653       13.6
adexprt.com        964         7.9
fhserve.com        723         5.9
adcash.com         430         3.5
filestube.com      259         2.1
isohunt.com        243         2.0
admxr.com          236         1.9
pobieramy24.pl     227         1.9
baypops.com        149         1.2

 

When this “video downloader” file is run through the online scanner virscan.org — which analyses suspicious files using 36 different anti-malware products — it is flagged as malware by four of them:

  • TROJ_GEN.F47V0902 (TrendMicro-HouseCall)
  • Bundlore (fs) (VIPRE)
  • PUP.Optional.Bundlelore.A (Malwarebytes)
  • Adware.Downware.925 (DrWeb)

A review of the other known filenames associated with similar malware on other rogue sites indicates a typical strategy of associating a desirable filename with the malicious code, i.e., using a filename that users seeking to download infringing content will click on, including Mortal Kombat — Komplete Edition Crack (2013) Download.exe and Transformers 3 — Dark of the Moon (2011) [1080p].exe.
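As a rough illustration of how such downloads might be flagged automatically, the sketch below checks both the response headers of a link (does it immediately push an executable?) and the filename (does it mimic a movie or “crack” release?). The header heuristics, the lure-term pattern and the example filenames are assumptions made for illustration; they are not the procedure used in this study.

```python
"""Sketch: flag links that push executables with 'lure' filenames.

The heuristics below are illustrative assumptions, not the procedure
used in this study.
"""
import re
import urllib.request

EXECUTABLE_TYPES = {
    "application/x-msdownload",
    "application/x-msdos-program",
    "application/octet-stream",
}

# Executables named like movie releases or cracks are treated as lures.
LURE_TERMS = re.compile(r"(crack|keygen|\((?:19|20)\d{2}\)|1080p|720p|dvdrip)",
                        re.IGNORECASE)
EXECUTABLE_NAME = re.compile(r"\.(exe|scr|msi)\b", re.IGNORECASE)

def is_lure_filename(name: str) -> bool:
    """True if an executable filename mimics desirable pirated content."""
    return bool(EXECUTABLE_NAME.search(name)) and bool(LURE_TERMS.search(name))

def pushes_executable(url: str) -> bool:
    """True if visiting the URL immediately offers an executable download."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        ctype = resp.headers.get("Content-Type", "").split(";")[0].strip()
        disp = resp.headers.get("Content-Disposition", "")
        return ctype in EXECUTABLE_TYPES or is_lure_filename(disp)

if __name__ == "__main__":
    for name in [
        "Mortal Kombat - Komplete Edition Crack (2013) Download.exe",
        "Transformers 3 - Dark of the Moon (2011) [1080p].exe",
        "holiday_photos.zip",
    ]:
        print(name, "->", is_lure_filename(name))
    # pushes_executable() would be run, inside a sandbox, against the link
    # chains observed on the sampled pages; no live example is given here.
```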

Table 3 provides a breakdown of distinct ads by the Top 10 domains for each category. Some domains appear to host ads across a wide range of categories, but in most instances, they seem dedicated to one particular type of advertising.

 

 
Table 3: High-risk ad type frequencies by network.

 

 

++++++++++

Discussion and conclusion

At first glance, it is tempting to conclude that, in the absence of the regulation that governs most other aspects of our lives, the Internet is an inherently dangerous place. The fact that 89 percent of advertising on the piracy Web sites sampled was high risk indicates that mechanisms to reduce the risk need to be contemplated. Yet even during the very early years of the commercial Internet, there was great controversy over whether such regulation was legally possible (Kegley, 1996). It is certainly possible that industry self-regulation may assist; bodies such as the Network Advertising Initiative (NAI) have developed codes of practice aimed at protecting consumer data privacy [12], yet the very nature of behavioural advertising often requires the storage and utilisation of large amounts of private data. Other initiatives [13] have more directly focused attention on ensuring that advertising networks do not promote illegal or inappropriate content, and that brands can be assured that their ads will not be placed, for example, on piracy Web sites.

Yet, despite these initiatives, and in the absence of any laws governing the Internet in jurisdictions like Canada, what actions can sovereign governments take to protect their populations? Returning to the arguments outlined by Nabi (2014), censorship may be technically difficult, but it is not impossible. Strategies such as URL blocking or site blocking at the national level can be an effective first defence against illegal or inappropriate content. Piracy site owners, for example, could respond by shifting their hosting or domains to another registrar or country. This underlines the need to ensure co-ordinated action between governments, in much the same way that the Financial Action Task Force (FATF) facilitates the development of international strategies to combat money laundering, which is often used to fund terrorism. There should be no refuge on the Internet for criminals.
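As a sketch of what site blocking can look like at a resolver, the example below refuses to resolve domains on a blocklist. The blocklist contents and the resolver wrapper are hypothetical, and real deployments (and their circumvention, as noted above) are considerably more involved.

```python
import socket

# Hypothetical blocklist; real blocking orders are issued per jurisdiction.
BLOCKED_DOMAINS = {"example-piracy-site.invalid", "another-rogue-site.invalid"}

class DomainBlockedError(Exception):
    """Raised when a blocked domain is requested."""

def resolve(hostname: str) -> str:
    """Resolve a hostname, refusing domains on the blocklist
    (a stand-in for DNS-level blocking at an ISP resolver)."""
    name = hostname.lower().rstrip(".")
    if name in BLOCKED_DOMAINS or any(
            name.endswith("." + d) for d in BLOCKED_DOMAINS):
        raise DomainBlockedError(f"{hostname} is blocked by policy")
    return socket.gethostbyname(hostname)

if __name__ == "__main__":
    try:
        print(resolve("example-piracy-site.invalid"))
    except DomainBlockedError as exc:
        print(exc)
```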

Savvy users could always move to the next level of technical sophistication by paying for a Virtual Private Network (VPN) subscription, or by using an anonymisation service such as TOR. Yet there are relatively few commercial VPN providers globally, and the pool of IP addresses they utilise could easily be blocked, or a form of key escrow could be implemented if users wish to make use of cryptographic protocols. As with any technology, systems like TOR are open to compromise, which has happened recently [14] — the current set of TOR exit nodes is relatively small, and it is certainly possible for governments to actively monitor traffic entering and exiting the network [15]. Thus, far from censorship being technically infeasible, sovereign governments have a range of technical and policy responses which could be used to enforce a censorship regime.
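As an illustration of the kind of monitoring described here, the sketch below checks whether a given IP address appears in a list of TOR exit node addresses. It assumes such a list has already been obtained (the Tor Project publishes exit-node lists) and saved locally as exit_nodes.txt, a hypothetical filename.

```python
"""Sketch: check whether client IP addresses are known TOR exit nodes.

Assumes a locally saved list of exit-node IP addresses (exit_nodes.txt,
a hypothetical filename); the Tor Project publishes such lists.
"""
import ipaddress

def load_exit_nodes(path: str) -> set:
    """Load one IP address per line, skipping blanks and comments."""
    nodes = set()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#"):
                nodes.add(ipaddress.ip_address(line))
    return nodes

def is_exit_node(ip: str, exit_nodes: set) -> bool:
    """True if the given IP is a known TOR exit node."""
    return ipaddress.ip_address(ip) in exit_nodes

if __name__ == "__main__":
    exits = load_exit_nodes("exit_nodes.txt")
    for client_ip in ("192.0.2.10", "198.51.100.7"):   # documentation addresses
        print(client_ip, "exit node?", is_exit_node(client_ip, exits))
```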

Historically, very few censorship regimes have taken the views of their citizens into consideration through civilian oversight. This is also a shortcoming of Nabi (2014), who opines that “Knowledge of this observation should prompt state-level censors to rethink their position and explore alternative mechanisms to deal with such issues — for example, by directly engaging with their citizens and considering their points of view” without discussing how this would happen in the real world. Australia may be unusual in that all decisions made by the Classification Board (comprised of public servants) can be reviewed by an independent Classification Review Board (comprised of ordinary members of the public). Any decision made by the Classification Review Board replaces the original Classification Board decision [16]. Broadening the representation of community members would improve classification decisions, although decision-making could also be held hostage by special interest, lobbyist or pressure groups. However, a broad consensus, arrived at through democratic means, may prevail in most cases; alternatively, more fine-grained access control decisions could be arrived at through the Delphi technique or similar methods (Kelarev, et al., 2011).

There is evidence of both active and passive resistance to overcome the reduction in information that results from the gross banning of such sites in Turkey and elsewhere (Yalkin, et al., 2014). Governments need to do a better job of justifying censorship: stifling free speech and thought to “safeguard” the religious sensibilities of some elements of the populace is not a sufficient justification, nor is the imposition of broad-based restrictions on the dissemination of information (particularly user-generated content, which may be critical of government policies).

Nabi (2014) is correct in asserting that all censorship regimes have technical limitations. Binary decision-making systems that rely on simple term blocking do not work well if a term is not exclusive to one category. Thus, while Ho and Watters (2005) demonstrated that a number of terms were only ever found on pornographic Web sites (making them suitable for blocking), other terms were more prone to miscategorisation. For example, the term “tit” might be more likely to be found on a pornographic Web site, but “blue tit” would more likely be associated with an ornithological site. Blocking every page containing the term “tit” would therefore result in overblocking, and would represent a crude and inaccurate approach. Innocuous Web sites, such as those dealing with sex education, LGBTI rights, domestic violence, etc., could end up being blocked.
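The overblocking problem can be seen in a few lines of code: a filter that blocks any page containing “tit” also blocks an ornithology page about the blue tit, while requiring the term to co-occur with other adult-only markers reduces (but does not eliminate) the error. The term lists below are illustrative only, not those derived by Ho and Watters (2005).

```python
# Illustrative term lists only; not the lists derived by Ho and Watters (2005).
BLOCKED_TERMS = {"tit"}
CONTEXT_TERMS = {"xxx", "porn", "hardcore"}   # assumed adult-only markers

def naive_block(text: str) -> bool:
    """Block if any blocked term appears anywhere (prone to overblocking)."""
    words = text.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

def contextual_block(text: str) -> bool:
    """Block only if a blocked term co-occurs with an adult-only marker."""
    words = set(text.lower().split())
    return bool(words & BLOCKED_TERMS) and bool(words & CONTEXT_TERMS)

if __name__ == "__main__":
    bird_page = "The blue tit is a small passerine bird in the tit family"
    adult_page = "xxx hardcore tit videos"
    for page in (bird_page, adult_page):
        print(naive_block(page), contextual_block(page), "<-", page[:30])
```

On the bird page, the naive filter blocks (a false positive) while the contextual filter does not; on the adult page, both block.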

The social impact of overblocking was revealed in a recent BBC documentary [17], which illustrated that keyword-based ISP filters were blocking sex education sites, for example, while still failing to block some hardcore pornographic Web sites. It is important to note that these filters do not involve explicit access control decisions made after careful consideration by a panel of experts; they are automated systems that perform simple lexical matching. There is a need to study the behaviour of such systems, and the categorisations that they make, within their natural environment (hence the term algorithmic ethnography). It was also revealed that some vendors have been selling filtering tools that explicitly blocked LGBTI material [18]; these products have now been modified. The uncontrolled use of such products has the potential to be as damaging as having no censorship regime in place at all.

At the user level, a number of additional strategies could be considered. Most browsers now support ad blocking software, such as Adblock Plus, which uses the Easy List database that was utilised in this study to detect advertising items embedded in HTML. These technologies could be promoted or mandated to protect users from malicious advertising, along the same lines that ISPs currently promote anti-virus products. Such technologies need to be further developed so that they can readily classify malicious versus non-malicious advertising, or, at a more abstract level, conform at least with access policies set by government. An alternative to a machine learning system for automating classification would be a reputation-based ratings system, in which users could “vote up” or “vote down” content. Users could then set a threshold within their ad blocking software to block ads which did not have broad community support or consensus. Reputation scores are already used in the P2P community to identify fake torrents (Watters and Layton, 2011); perhaps the approach could be generalised to form a “safe advertiser” or “safe advertisement” list. Such a system would engage ordinary users in forming a consensus around community standards, rather than relying on corporations or bureaucrats to make arbitrary decisions about content.
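A reputation-based filter of the kind suggested here could be as simple as the sketch below, which aggregates up and down votes per advertiser and blocks anything falling below a user-chosen threshold. The vote data, threshold and minimum-vote rule are hypothetical choices made for illustration.

```python
# Hypothetical vote data: (advertiser, up_votes, down_votes).
VOTES = [
    ("mainstream-retailer.example", 412, 18),
    ("free-video-downloader.example", 25, 940),
    ("unknown-adnetwork.example", 3, 2),
]

def reputation(up: int, down: int) -> float:
    """Fraction of positive votes; 0.5 when there are no votes yet."""
    total = up + down
    return up / total if total else 0.5

def should_block(up: int, down: int, threshold: float = 0.6,
                 min_votes: int = 20) -> bool:
    """Block ads from advertisers without broad community support.
    Advertisers with too few votes are allowed by default here; a more
    cautious policy could block them instead."""
    if up + down < min_votes:
        return False
    return reputation(up, down) < threshold

if __name__ == "__main__":
    for advertiser, up, down in VOTES:
        verdict = "block" if should_block(up, down) else "allow"
        print(f"{advertiser:32s} reputation={reputation(up, down):.2f} -> {verdict}")
```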

In summary, this paper illustrates the pressing need — using a case study involving one technology — for governments to be able to make decisions to prevent their citizens from downloading certain types of content, including malware, gambling and pornography, under a well-defined set of criteria, and only in certain contexts. The fact that current censorship regimes are failing to address this problem is worrying; at the same time, a reliance on automated systems to perform simple keyword blocking can lead to inappropriate censorship. Further study of how these systems operate is needed, and the algorithmic ethnographic approach described in this study provides one way to do that. Ultimately, policy-led initiatives that reflect a broad, democratic consensus around the kinds of content that should be blocked or permitted need to form the basis of any censorship regime.

 

About the author

Paul A. Watters is Professor in Information Technology in the Centre for Information Technology at Massey University, New Zealand.
E-mail: P [dot] A [dot] Watters [at] massey [dot] ac [dot] nz

 

Notes

1. http://abcnews.go.com/Health/internet-porn-misty-series-traumatizes-child-victim-pedophiles/story?id=9773590.

2. http://www.legislation.govt.nz/act/public/2005/0002/latest/DLM333252.html.

3. http://www.legislation.govt.nz/act/public/1993/0094/latest/DLM312895.html.

4. http://www.dia.govt.nz/Censorship-Objectionable-and-Restricted-Material.

5. http://www.bbc.com/news/uk-25430582; http://www.indexoncensorship.org/2014/01/uks-web-filtering-seems-blocking-common-sense/.

6. http://www.legislation.govt.nz/act/public/2011/0011/latest/DLM2764312.html#DLM2764327.

7. The Pirate Bay, Kicka** Torrents and Torrentz have Alexa rankings of 79, 103 and 153 respectively (http://au.ibtimes.com/articles/533033/20140106/pirate-bay-popular-torrent-site-top-10.htm).

8. http://www.cbc.ca/news/technology/mountie-hopes-web-initiative-could-cut-child-abuse-1.592081.

9. http://www.theglobeandmail.com/news/national/hate-speech-law-violates-charter-rights-tribunal-rules/article1273956/.

10. http://easylist.adblockplus.org/en/.

11. Note that some domains like isohunt.com and sumotorrent.com do not display their ads outside their own domain; they are ranked highly because of the high number of DMCA complaints against their site.

12. http://www.adweek.com/news/technology/ad-nets-step-self-regulation-147099.

13. http://www.jicwebs.org/agreed-principles/latest-news/133-jicwebs-approves-industry-principles-aimed-at-growing-safer-online-ad-placement.

14. http://www.theverge.com/2014/7/30/5951479/tor-says-unknown-attackers-compromised-hidden-services.

15. http://hackertarget.com/tor-exit-node-visualization/.

16. http://www.classification.gov.au/About/Pages/Review-Board.aspx.

17. http://www.bbc.com/news/uk-25430582.

18. http://www.indexoncensorship.org/2014/01/uks-web-filtering-seems-blocking-common-sense/.

 

References

M. Alazab, R. Layton, S. Venkataraman, and P. Watters, 2010. “Malware detection based on structural and behavioural features Of API calls,” Proceedings of the 1st International Cyber Resilience Conference (Edith Cowan University, Perth), at http://ro.ecu.edu.au/icr/1, accessed 31 December 2014.

A.H. Altinay, 2014. “Will Erdogan’s victory mark the rise of illiberal democracy?” New Perspectives Quarterly, volume 31, number 4, pp. 36-39.
doi: http://dx.doi.org/10.1111/npqu.11487, accessed 31 December 2014.

C.W. Anderson, 2013. “Towards a sociology of computational and algorithmic journalism,” New Media & Society, volume 15, number 7, pp. 1,005-1,021.
doi: http://dx.doi.org/10.1177/1461444812465137, accessed 31 December 2014.

L. Axworthy, 2001. “Human security and global governance: Putting people first,” Global Governance, volume 7, number 1, pp. 19-23.

L. Axworthy, 1997. “Canada and human security: The need for leadership,” International Journal, volume 52, number 2, pp. 183-196.

D. Bamman, B. O’Connor, and N. Smith, 2012. “Censorship and deletion practices in Chinese social media,” First Monday, volume 17, number 3, at http://firstmonday.org/article/view/3943/3169, accessed 31 December 2014.
doi: http://dx.doi.org/10.5210/fm.v17i3.3943, accessed 31 December 2014.

H.W. Briggs, 1939. “De facto and de jure recognition: The Arantzazu Mendi,” American Journal of International Law, volume 33, number 4, pp. 689-699.

M.B. Cook, 1977. “Censorship of violent motion pictures: A constitutional analysis,” Indiana Law Journal, volume 53, number 2, http://www.repository.law.indiana.edu/ilj/vol53/iss2/10/, accessed 31 December 2014.

B. Cossman, 2014. “Censor, resist, repeat: A history of censorship of gay and lesbian sexual representation in Canada,” Duke Journal of Gender Law & Policy, volume 21, number 1, pp. 45-66, and at http://scholarship.law.duke.edu/djglp/vol21/iss1/2/, accessed 31 December 2014.

R.J. Deibert, 2009. “The geopolitics of Internet control: Censorship, sovereignty, and cyberspace,” In: A. Chadwick and P.N. Howard (editors). Routledge handbook of Internet politics. London: Routledge, pp. 323-336, and at http://www.handbook-of-internet-politics.com/pdfs/chapter_23.pdf, accessed 31 December 2014.

D. Garland, 1996. “The limits of the sovereign state: Strategies of crime control in contemporary society,” British Journal of Criminology, volume 36, number 4, pp. 445-471.
doi: http://dx.doi.org/10.1093/oxfordjournals.bjc.a014105, accessed 31 December 2014.

A. Greenberg, 2007. “The Streisand Effect,” Forbes (11 May), at http://www.forbes.com/2007/05/10/streisand-digg-web-tech-cx_ag_0511streisand.html, accessed 31 December 2014.

M. Helft and D. Barboza, 2010. “Google shuts China site in dispute over censorship,” New York Times (22 March), at http://www.nytimes.com/2010/03/23/technology/23google.html, accessed 31 December 2014.

W.H. Ho and P.A. Watters, 2005. “Identifying and blocking pornographic content,” ICDEW ’05: Proceedings of the 21st International Conference on Data Engineering Workshops, p. 1,181.
doi: http://dx.doi.org/10.1109/ICDE.2005.227, accessed 31 December 2014.

A.R. Kegley, 1996. “Regulation of the Internet: The application of established constitutional law to dangerous electronic communication,” Kentucky Law Journal, volume 85, p. 997.

A.V. Kelarev, S. Brown, P.A. Watters, X.-W. Wu, and R. Dazeley, 2011. “Establishing reasoning communities of security experts for Internet commerce security,” In: J. Yearwood and A. Stranieri (editors). Technologies for supporting reasoning communities and collaborative decision making: Cooperative approaches. Hershey, Pa.: IGI Global, pp. 380-396.
doi: http://dx.doi.org/10.4018/978-1-60960-091-4.ch020, accessed 31 December 2014.

S. Khattak, M. Javed, S.A. Khayam, Z.A. Uzmi, and V. Paxson, 2014. “A look at the consequences of Internet censorship through an ISP lens,” IMC ’14: Proceedings of the 2014 Conference on Internet Measurement Conference, pp. 271-284.
doi: http://dx.doi.org/10.1145/2663716.2663750, accessed 31 December 2014.

M. McFadden, 2006. “The Australian Federal Police Drug Harm Index: A new methodology for quantifying success in combating drug use,” Australian Journal of Public Administration, volume 65, number 4, pp. 68-81.
doi: http://dx.doi.org/10.1111/j.1467-8500.2006.00505a.x, accessed 31 December 2014.

K.J. Mitchell and M. Wells, 2007. “Problematic Internet experiences: Primary or secondary presenting problems in persons seeking mental health care?” Social Science & Medicine, volume 65, number 6, pp. 1,136-1,141.
doi: http://dx.doi.org/10.1016/j.socscimed.2007.05.015, accessed 31 December 2014.

R. Moon, 2000. The constitutional protection of freedom of expression. Toronto: University of Toronto Press.

Z. Nabi, 2014. “R̶e̶s̶i̶s̶t̶a̶n̶c̶e̶ censorship is futile,” First Monday, volume 19, number 11, at http://firstmonday.org/article/view/5525/4155, accessed 31 December 2014.
doi: http://dx.doi.org/10.5210/fm.v19i11.5525, accessed 31 December 2014.

H.H. Perritt, Jr., 1998. “The Internet is changing international law,” Chicago-Kent Law Review, volume 73, number 4, p. 997, and at http://scholarship.kentlaw.iit.edu/cklawreview/vol73/iss4/4/, accessed 31 December 2014.

J. Prichard, P.A. Watters, and C. Spiranovic, 2011. “Internet subcultures and pathways to the use of child pornography,” Computer Law & Security Review, volume 27, number 6, pp. 585-600.
doi: http://dx.doi.org/10.1016/j.clsr.2011.09.009, accessed 31 December 2014.

M. Rimm, 1994. “Marketing pornography on the information superhighway: A survey of 917,410 images, descriptions, short stories, and animations downloaded 8.5 million times by consumers in over 2000 cities in forty countries, provinces, and territories,” Georgetown Law Journal, volume 83, number 5, pp. 1,849–1,934.

R.I. Rotberg, 2002. “The new nature of nation–state failure,” Washington Quarterly, volume 25, number 3, pp. 85-96.
doi: http://dx.doi.org/10.1162/01636600260046253, accessed 31 December 2014.

B. Ryder, 1999. “Undercover censorship: Exploring the history of the regulation of publications in Canada,” In: K. Petersen and A.C. Hutchinson (editors). Interpreting censorship in Canada. Toronto: University of Toronto Press, pp. 129-156.

J. Taplin, 2013. “USC Annenberg Lab ad transparency report” (5 January), at http://www.annenberglab.com/sites/default/files/uploads/USCAnnenbergLab_AdReport_Jan2013.pdf, accessed 31 December 2014.

M. Taylor and E. Quayle, 2003. Child pornography: An internet crime. Hove, East Sussex: Brunner-Routledge.

O.O. Varol, forthcoming. “Stealth authoritarianism,” Iowa Law Review, volume 100.

T.M. Vestal, 1999. Ethiopia: A post-Cold War African state. Westport, Conn.: Praeger.

P.A. Watters, 2014. “A systematic approach to measuring advertising transparency online: An Australian case study,” Proceedings of the Second Australasian Web Conference, at http://crpit.com/confpapers/CRPITV155Watters.pdf, accessed 31 December 2014.

P.A. Watters and R. Layton, 2011. “Fake file detection in P2P networks by consensus and reputation,” 2011 First International Workshop on Complexity and Data Mining (IWCDM), pp. 80-83.
doi: http://dx.doi.org/10.1109/IWCDM.2011.26, accessed 31 December 2014.

P.A. Watters, A. Herps, R. Layton, and S. McCombie, 2013. “ICANN or ICANT: Is WHOIS an enabler of cybercrime?” 2013 Fourth Cybercrime and Trustworthy Computing Workshop (CTC), pp. 44-49.
doi: http://dx.doi.org/10.1109/CTC.2013.13, accessed 31 December 2014.

Ç. Yalkin, F. Kerrigan, and D. vom Lehn, 2014. “Legitimisation of the role of the nation state: Understanding of and reactions to Internet censorship in Turkey,” New Media & Society, volume 16, number 2, pp. 271-289.
doi: http://dx.doi.org/10.1177/1461444813479762, accessed 31 December 2014.

 


Editorial history

Received 7 December 2014; revised 29 December 2014; revised 31 December 2014; accepted 31 December 2014.


Copyright © 2015, First Monday.
Copyright © 2015, Paul A. Watters.

Censorship is f̶u̶t̶i̶l̶e̶ possible but difficult: A study in algorithmic ethnography
by Paul A. Watters.
First Monday, Volume 20, Number 1 - 5 January 2015
http://firstmonday.org/ojs/index.php/fm/article/view/5612/4202
doi: http://dx.doi.org/10.5210/fm.v20i1.5612




