How to control the Internet: Comparative political implications of the Internet's engineering
First Monday

How to control the Internet: Comparative political implications of the Internet's engineering by Steven Lloyd Wilson

The spread of the Internet has had a profound impact on the social sciences, but understanding of how the engineering realities of the Internet’s construction shape its political effects still lags. This article presents a framework and rich examples from multiple countries in order to describe how national differences in the implementation of the Internet cause different strategic calculations by political actors. This paper’s goal is to provide a starting point for social scientists to treat the Internet as a strategic construction of regime and people, rather than a technical black box.


1. Introduction
2. Why it matters
3. The physical structure of the Internet
4. Controlling the Internet
5. Can’t stop the signal



1. Introduction

“The power to destroy a thing is the absolute control over it.” — Frank Herbert

In 1981, disaffected elements of the Spanish military seized the Parliament and Cabinet in an attempt to roll back five years of democratic transition and return the nation to Franco-style military rule. The plotters discounted King Juan Carlos I, who spent the evening making phone calls to various power players in the government. The overlooked King managed to create a provisional government composed of sub-cabinet personnel before arranging to appear on television in the middle of the night in order to condemn the coup and rally support for democracy. The coup collapsed within hours, and one of the arrested plotters noted that their deepest failure was in not cutting the King’s phone line before moving on the regime.

Thirty years later, during the Egyptian uprising of 2011, the regime identified social media tools such as Twitter and Facebook as a primary vector of organization for the protesters. For three days, the regime attempted to shut down access to such sites in a piecemeal manner, but met with only limited success. Just after midnight on 28 January, Mubarak ordered that Egypt’s connection to the Internet be severed entirely. A single Egyptian Internet service provider (ISP), Noor Data Networks, remained connected to the rest of the world for three more days, until 31 January.

While traditional media was shut down or co-opted by the regime as soon as the crisis began, Egyptian Internet access proved resilient enough to survive in one way or another for three to seven days despite sustained efforts by the regime to take it down. On the other hand, the regime was able to shut it down eventually, proving that networks are not impervious to the efforts of a determined regime.

Very different sequences of events unfolded in other countries during the Arab Spring. While each government that sought to do so succeeded at immediately seizing control of traditional media, each country had a different experience with subverting or taking down the public’s access to the Internet. This variance hints at a greater underlying complexity involved with something that seems like a simple order for a regime to give. Why is it that shutting down the Internet is so much more difficult than seizing control of traditional media? If it were simply the fact that the decentralization and self-repairing nature of networks made control more difficult, then we would not see this variance in regime response. We would see regimes taking the same actions as each other, and see the Internet go down within some relatively predictable time frame after the order was given (as opposed to a same day effect with traditional media).

Instead, we see Tunisia kill domain name services to effectively and immediately limit access to the outside world, we see Egypt shut down the Ramses Internet Exchange to kill access after three days of trying other tactics, and we see Syria wait almost a year into its civil war before shutting down BGP routing for the entire country, but only for 48 hours. These regimes made different strategic decisions under similar circumstances because there is a great deal of variance from state to state in what the Internet physically is, and in how it can best be controlled.
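The difference between these tactics can be made concrete with a toy model. The sketch below uses invented hostnames and addresses, and collapses DNS (really a hierarchy of name servers) into a single table; it shows why killing domain name service blocks most users immediately even though the underlying routes remain intact.

```python
# Toy model of a DNS-level shutdown. Hostnames, addresses, and the single
# resolver table are invented for illustration; real DNS resolution walks a
# hierarchy of name servers rather than one dictionary.

dns_table = {
    "twitter.com": "104.244.42.1",
    "facebook.com": "157.240.1.35",
}

reachable_ips = {"104.244.42.1", "157.240.1.35"}  # routing is left untouched

def resolve(name):
    """Return the IP address for a hostname, or None if resolution fails."""
    return dns_table.get(name)

def can_reach(name):
    """A user can reach a site only if its name resolves AND a route exists."""
    ip = resolve(name)
    return ip is not None and ip in reachable_ips

assert can_reach("twitter.com")

# A regime killing domain name service empties the resolver table ...
dns_table.clear()

# ... and names stop working, even though the routes themselves are intact.
assert not can_reach("twitter.com")
# A user who already knows the raw IP address could, in principle, still connect:
assert "104.244.42.1" in reachable_ips
```

This is why a DNS-level shutdown is effective against the general public but porous against technically sophisticated users, whereas a routing-level shutdown (as in Egypt and Syria) removes the routes themselves.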

And so while we might easily pinpoint a single overlooked phone line as having made the difference in 1981, the rise of the Internet has introduced a new set of strategic variables into the calculations over how to control media. Thus, this paper seeks to establish a framework for thinking about national variance in how the Internet can be controlled.



2. Why it matters

Literature on the Internet in the social sciences is still in its infancy and has largely attempted to use the Internet as an independent variable to explain variation in a variety of outcomes. Such outcomes in the developed world include political participation (Bimber, 2001; Gibson, 2001; Shah, et al., 2001), civic engagement (Xenos and Moy, 2007), and political insularity (or homophily) (Gentzkow and Shapiro, 2010; Page, 2007; Sunstein, 2007; Wojcieszak and Mutz, 2009). The developing world has also received a fair share of attention, with a burgeoning literature on the Internet’s role in democratization (particularly the coverage of “liberation technology” in the Journal of Democracy), health education (Edejer, 2000), banking (Jaruwachirathanakul and Fink, 2005; Sukkar and Hasan, 2005), and economic development (Petrazzini and Kibati, 1999; Wilson, 2004). In addition, empirical work has explored the connection between the Internet and the events of the Arab Spring (Alqudsi-ghabra, et al., 2011; Hofheinz, 2005; Murphy, 2006; Sereghy, 2012; Stepanova, 2011; Zhuo, et al., 2011) and the Color Revolutions (Bunce and Wolchik, 2010; Chowdhury, 2008; Dyczok, 2005; Goldstein, 2007; Kyj, 2006).

Early work on the Internet and society during the eighties and nineties tended to focus on the utopian potentials of the technology, growing out of the often anarchist or libertarian ideologies that permeated computer culture of the time. This early work took the inability of the Internet to be centrally controlled as an independent variable driving all manner of social and economic transformation. By the end of the nineties, early optimism gave way to more complex interpretations, with Lessig spearheading the argument that not only was the Internet regulable, but that the decisions about how it would be regulated would determine what the Internet’s effect would be on society in the long run (Lessig, 2002, 2006).

While both the popular press and many academics have continued to focus on the pro-democratic potentials of new communications technologies, other authors have explored more complex interpretations, arguing that the enhanced ability to monitor populations has in fact stabilized authoritarian regimes (Morozov, 2011) and destabilized nascent democracies by making short term collective action easy at the expense of building institutions (Faris and Etling, 2008). For a relatively comprehensive overview of this literature, see Henry Farrell’s (2012) review article “The consequences of the Internet for politics” in the Annual Review of Political Science.

The proposition that the Internet is different from previous media technologies is not wholly without controversy, and has generated its own body of literature. Clay Shirky (2008) has argued in several works that the declining transaction costs associated with new communications technologies are making it easier for individuals to organize in a variety of ways. Manuel Castells has made perhaps the most significant single contribution to date in this field, with his massive three volume treatise The information age, which argues that network technologies have subtly shifted nearly every aspect of society in a transition to what he calls network society (Castells, 2009). Castells has followed that text with additional work exploring why the Internet is so different, examining for example the unique ability of new communications technologies to communicate emotion on an empathetic level that makes them such a powerful tool of social movements and other traditionally “weak” societal actors (Castells, 2012; Castells, 2013). Such seminal works as Negroponte’s Being digital (1996) and McLuhan’s The Gutenberg galaxy (1962, though astoundingly prescient) cover additional theoretical bases on the Internet’s distinction from previous communications technologies.

Much of this work problematically treats the Internet as a black box whose inner workings are rarely understood. This is not always a problem, especially with regard to topics for which the technical details of these technologies simply do not matter. For instance, work premised on theorizing and empirically testing how the reduction of transaction costs associated with new communications technologies affects various political and economic outcomes is not generally affected by how those transaction costs are reduced. The underlying independent variable of interest in these cases is transaction costs, and the expansion of Internet usage is simply the mechanism for lowering costs below a certain point. This applies to most of the theoretical literature on the Internet’s social and economic effects.

However, in many political contexts, the inner workings of Internet technology are critical to understanding how political actors are making decisions. For instance, in the midst of the Egyptian uprising of 2011, many commentators (both academic and journalistic) were at a loss to explain why Mubarak did not simply “pull the plug” on the Internet. That question can only be answered if one understands how the Internet works on a physical level, both in the general case, and in the particular details of its construction in Egypt.

A broad literature on technical governance has emerged within social science that does deal with the inner workings of the Internet, both in domestic and international contexts. This field has a variety of subtopics including censorship (Deibert, et al., 2008), the domain name system (Kuerbis and Mueller, 2007; Mueller, 2004), network neutrality (Wu, 2003; Yoo, 2004), the digital divide (Norris, 2001; Schlozman, et al., 2010), and the institutions governing and regulating new communications technologies (DeNardis, 2010; Mueller, 2010). However, this literature can also be problematic because it is relatively self-contained. That is, technically savvy researchers with an understanding of Internet technologies apply that knowledge to explain outcomes within technically related fields. It is difficult for a researcher without technical knowledge to glean from these works an understanding of the inner workings of the Internet sufficient to apply it to original research in a different context.

There is of course a great deal of technical literature on the structure and design of the Internet in engineering and computer science, which is simply beyond the necessary technical scope of the research of most social scientists. This paper is not a traditional research article per se, but provides the requisite background so that social scientists have a framework for understanding how the technical underpinnings of the Internet may affect their substantive research. It does so by answering three questions: What is the physical structure of the Internet? How can it be controlled? How can we work to prevent that control from being co-opted?



3. The physical structure of the Internet

In broad terms, the Internet is a service that connects computers to one another, some of which provide content that other computers in the network consume. But the provision of that service is a complex combination of many elements of private and state-owned infrastructure, and the question of control depends on understanding how that infrastructure is constructed and who holds power over its different pieces.

Take an example from the American cable market that might be more familiar. One company, often with government subsidy, might pay to lay fiber optic cable into a neighborhood. Sometimes the company that pays to lay that cable retains ownership of it and provides service through it (whether Internet, television, telephony, or a combination thereof). Other times, the company that paid for the installation of that infrastructure may be merely a middleman that then sells or leases the use of that cable to other companies who sell service on it.

Things get even more complicated in some jurisdictions, in which courts or the legislature have ruled that owning the infrastructure should not grant a company a monopoly over the use of that infrastructure. The logic is similar to that of road construction: the inefficiency of building multiple competing roads that are otherwise identical makes road systems a logical item to be provided as a public good. The fiber optic cables providing service into residential areas are similar. Telecommunications companies easily end up becoming miniature monopolies if transit rights aren’t guaranteed, since individual consumers have only one physical line to choose from. So in those situations, one company might own the cables, while several others offer services along those same physical lines. In addition, consumers increasingly can access the Internet through smart phones via the cell phone networks, which at the consumer end bypass physical connections entirely.

As this example demonstrates, control of the network is an idea with many dimensions. There is physical control of the lines, legal control over whether they shut down, and technical control over the traffic transiting the physical lines. Any of these dimensions is a potential route to seizing control.

The Internet as an entity emerges out of three basic categories of components. First is the node level: the individual computers, servers, and routers that connect to the network. Second is the network level: the connections between those nodes. Third is the application level: the software running on particular nodes that provides services to other nodes. Let’s examine each in turn.

3.1. Nodes

First is the node level. Most descriptions of the uniqueness of the Internet focus on the decentralization and self-repairing nature of the network. But this description of decentralization is misleading in its simplicity, because no part of it is exclusive to computers. The basic concept of a self-repairing and decentralized network could just as easily apply to telephone or cable television lines. What makes the Internet an entirely different beast is not simply the self-repairing nature of its network, but the nature of the individual nodes in that network. Even with a self-repairing network, if you eliminate the switchboards, a telephone network will go down. The telephones themselves cannot maintain the network. You can build massive redundancy in the form of endless quantities of switchboards, but if all the switchboards were removed, the network would still collapse.

But in a computer network, the devices equivalent to switchboards such as servers, routers, and switches, are still computers underneath the hood. The individual nodes can step in to take the place of the specialized devices. Familial resemblance aside, the genetic difference between computer networks and telephone networks is that in the former every single device is a potential switchboard. Decentralization of the Internet is only partially a function of network design.

The genius of computers is not that they are ultra-efficient abacuses, or that they can individually be designed to do particular useful things. Their genius is that they are each designed to do anything that can be expressed in logical language. The computer chip that controls your thermostat, the one in your video game console, in your laptop, in your iPhone, and the one controlling the fuel injection in your car’s engine, each is the same basic machine underneath. Each can perform the tasks of each of the others, albeit at phenomenally different speeds. Even if they speak different computer languages, by being computer chips in the first place, they already have a shared grammar of logic. Their only functional difference is what particular software they are running at a particular time, and what inputs and outputs have been soldered onto them.

Humorous computer scientists have built surreal demonstrations of this: porting versions of the Linux operating system or Apache Web server software to everything from cell phones to car engines to light switches and coffee makers. If it has a computer chip, it can run code. The implication is that what makes the Internet distinct from other media is not something that can be designed or bred out of the systems at a fundamental level. This is not to attribute magical democratizing powers to the Internet, but to insist that attempts to treat it like a variation of traditional media are doomed to failure.

3.2. Network

Next is the network level. What evolved into the Internet began life as ARPANet, a communications network designed in the late 1960s to survive outages by being intrinsically able to reroute communications around any number of destroyed connections. Although now apocryphally remembered as being intended to maintain communications in case of nuclear war, the need for such robustness derived from the much more mundane reality of the highly unreliable telecommunications equipment of the time. Such a communications network cannot rely on centralized control, because anything centralized could be knocked out (whether by equipment failure or enemy missiles). The most resilient communications network possible in theoretical terms would be one in which all nodes in the network are equivalent and independently connected to every other node. In such a network, communication could be maintained so long as there was another surviving node to communicate with.

Because building that many redundant lines was impossible even with Cold War levels of defense funding, ARPANet was built with an eye towards rerouting communications around any failure in the network, with minimal reliance on any centralization of any kind.

It is important to understand that the decentralization of the Internet is not simply something that has evolved, but is the entire point of the original design work. From the beginning, each building block of the technologies that form the current Internet has been designed for decentralization. For instance, the familiar TCP/IP protocol (as in “IP address”) that carries essentially all network traffic in the world is the original protocol implemented for the U.S. Department of Defense in the 1970s. This matters because understanding how the Internet is constructed illuminates why the question of who controls the Internet is more complex than for any other form of mass media.
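This decentralized rerouting can be sketched as a toy graph search. The topology and node names below are invented for illustration, and real Internet routing protocols (such as BGP and OSPF) are far more involved, but the principle is the same: when a link fails, traffic simply flows along any surviving path.

```python
# A minimal sketch of self-repairing routing: the network is a graph of
# bidirectional links, and a breadth-first search stands in for the routing
# protocol. Topology and names are invented for illustration.
from collections import deque

links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C"},
}

def find_route(src, dst, down=frozenset()):
    """Breadth-first search for a path from src to dst, skipping failed links."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in links[node]:
            if nxt in seen or frozenset((node, nxt)) in down:
                continue  # already visited, or this link has been destroyed
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no surviving path at all

print(find_route("A", "D"))                                # a two-hop path, via B or C
print(find_route("A", "D", down={frozenset(("B", "D"))}))  # reroutes: ['A', 'C', 'D']
```

Only when every path is severed does communication fail, which is why, as the following sections describe, regimes attack bottlenecks rather than individual links.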

In physical terms there are four basic ways that nodes can connect to the larger network. Hard physical lines are the oldest and simplest way, and even in the modern world the fastest connections are physical lines. These come in a variety of forms: cable, DSL, and telephone lines are the most familiar. But what they all have in common, despite enormous differentials in speed, is the fact that they represent a physical cable running from one network node to another.

Second are WiFi connections, which allow wireless access to the network over relatively short distances. Large numbers of wireless routers can be effectively daisy-chained together in order to provide an umbrella of wireless access over a large physical space, as is relatively common on many college campuses in the developed world. These connections are really only a variation on physical lines, though, as WiFi connections always lead back to a hard physical connection to the larger network.

Third is Internet access provided through cell phone networks, which resembles WiFi in the sense that the node itself is not physically connected to a line. The technology is very different though, and integrates directly with the networks used to transmit calls to and from cell phones. While used in the developed world, particularly with the advent of smart phones, this technology has actually been more popular in the developing world, where physical lines are rare and expensive. Skipping several generations of connection technology, it has been much cheaper in many developing countries to simply build cell phone towers and avoid running physical lines into individual homes and businesses. But these cellular networks always tie back into physical lines upstream as well.

The final type of access is via a satellite uplink. This is the rarest and most expensive form of network connection, and has issues with high latency because of the long distances involved, in addition to being highly vulnerable to interference from weather. Because of its expense, and the fact that for most individuals there are faster and cheaper local alternatives, satellite has remained the domain of specialty users and those in remote areas of the developed world.

3.3. Applications

Last is the application level. The application layer is the one in which the functionality of the Internet is added, the parts that make the raw wires and electronics more than just an overly elaborate telephone or television network. The application layer is, in the simplest sense, the set of things that the network is used to do.

In fact, the old telephone and television networks contain an application layer as well, though we rarely think of them through that lens. A telephone network is for all intents and purposes a single-application network. The switchboards provide a single application: voice connection between two nodes. Voice mail, conference calls, and the like are additional related applications on the same network. Television is similarly arranged, with the single application of that network being the one-way transmission of video, and with related similar applications like pay-per-view or limited access channels.

From a certain point of view, the early public Internet evolved as merely part of the application layer of the existing telephone networks of the 1980s. The connectivity provided by the whirring and beeping of modems was an application overlaid atop the existing telephone networks.

Since that point, the digitization of existing media and communications networks has inverted that early reality. Today in many parts of the world, the telephone and television networks have been subsumed by the Internet’s network. And television and telephony have been reduced to being but a couple of applications running in the same network as everything we currently consider the Internet. This has been a largely invisible process. Transmitting stations have given way to video distributed on the same cables that provide Internet access. Telephony has been largely digitized, acres of switchboards giving way to PBX boxes, which are nothing but computer servers specialized to run that breed of applications. The so-called “universal machines” of computer engineering are now taking the place of the last widely deployed physically specialized devices of the electronic age. Individual networks of machines have become but additional applications running on a single universal network.

In some developing countries, entire generations of communications technology are being skipped, such that at the network level there is no difference between the television, telephone, and Internet networks, because there are no legacy networks that would be easier to leave in place despite their obsolescence.

The endless array of software offered on the Internet is implemented at the application level. Twitter, Facebook, e-mail, and Web pages are all applications built on top of the underlying network and nodes.



4. Controlling the Internet

How then can a state go about seizing control of or shutting down the Internet? As should be clear from the complexity outlined in the last section, there are a number of available options, the feasibility of which depends greatly on the technical literacy of the regime and on the particular network infrastructure of the state. These options can be broken down by which component of the Internet they seek to attack.

4.1. Attacking the nodes

The individual nodes of the Internet, the individual computers and devices of end users, are difficult to attack and control directly, but options do exist.

The first and simplest option for control of the nodes is simply to legally limit access to Internet-capable devices within the country. Containment has been a viable mechanism for limiting public communication of all varieties over the centuries. The classic example of authoritarian regimes making it illegal for private citizens to own printing presses and photocopiers at various intervals applies equally well to the Internet age. And a few states have opted for variations on this alternative, such as North Korea and Turkmenistan. However, those examples also point to why this is not an attractive strategy for most authoritarian regimes, even if it is feasible at face value. The economic implications of not allowing computers into a country in the modern age of globalization are staggering, and apparently do not represent a reasonable tradeoff for most regimes.

If a regime has allowed the Internet into the country, the possibility of directly asserting control over the individual nodes is problematic. A good analogy is dealing with an unwanted ant colony, in which we can think of the individual nodes as individual ants. Attacking those individual ants is usually the most ineffective strategy on the table. Containment of the entire nest, destroying lines, killing the queen: all of these strategies are far more efficient, and map reasonably well onto the universe of strategies for controlling the Internet. It is more efficient to assert control over the connections between nodes, or to target nodes of importance like particular servers, rather than individually squashing the elements of the swarm.

Highly technically savvy regimes may make an attempt to do this anyway, utilizing viruses, spyware, and other tools in order to directly target the nodes. The United Arab Emirates, for instance, was involved in a long dispute with RIM (the manufacturer of Blackberry), demanding direct access to all data sent to and from Blackberries by their citizens. While the dispute fizzled out without RIM giving in to the demands, the UAE passed legislation banning the use of encryption in Blackberry transmissions by small companies and individuals (Halliday, 2011). This has left a great deal of communication in the clear, able to be intercepted by the regime, due to legal restrictions on the individual nodes in the network. These attempts are not limited to authoritarian states, with India, Indonesia, and Lebanon all involved in similar disputes with RIM (Halliday, 2010).

Similar in spirit (though technically very distinct) is the use of customized malware to get an end user to download an altered version of a program without their knowledge. This tactic was used in Syria when a captured rebel’s Skype account was used to send chat requests to others in his contact list that contained a small virus that would install itself and make periodic callbacks to the Syrian Telecommunications Establishment — the Syrian equivalent of a Ministry of Telecommunications (Hypponen, 2012). This exploit reported back physical locations, which allowed the regime to systematically track down and arrest additional dissidents (Chozick, 2012). The fact that this attack was waged via links sent through Skype meant that it was effective despite the fact that much of the Syrian resistance had switched to satellite Internet connections precisely to avoid regime control of the network (Fagerland, 2012).

A cottage industry of monitoring software has sprung up, with authoritarian regimes being the primary customers. Dozens of states are known to have purchased and deployed software designed to be downloaded and installed by users with a similar distribution vector to a worm or a virus. For example, Mozilla (maker of the popular Web browser Firefox) has filed suit against Gamma International for the behavior of its FinFisher software package, which mimics a software update of Firefox in order to trick users into allowing the installation to continue (Gilbert, 2013). FinFisher has been sold to some 36 different states, including Syria and Bahrain. The software is designed to record the users’ Skype, e-mail, instant messaging, and other sensitive uses, and transmit the recordings back to a central server along with the physical location of the computer. In addition, the software contains the ability to provide real-time surveillance on command via any cameras or microphones hooked up to the computer (along with the added utility of automatically disabling the standard status lights that indicate when a webcam is active) (Marquis-Boire, et al., 2013). These uses and capabilities are advertised on the company’s Web site, in addition to the contention that it is invisible to all known anti-virus systems. And of course Gamma International insists that the software was designed as an anti-terror tool intended for use by law enforcement.

Most attempts at controlling the node component of the Internet are accomplished by utilizing the application layer, and so I’ll discuss such attempts in the later section on controlling that layer.

4.2. Controlling the network

Cutting the physical lines is perhaps the simplest way to cut off Internet access at face value, and is surprisingly easy to do in a physical sense. For instance, one of the transoceanic lines that carries staggering quantities of data is only about an inch in diameter. Physically, a single individual with some heavy duty shears or a hacksaw could easily cut even the largest trunk lines. But as the joke goes about the plumber charging one cent for wear and tear on the hammer and $150 for knowing where to hit, the real difficulty of physically cutting off the Internet is the very precise knowledge required as to where exactly those physical lines are located.

Cutting the lines can also have other consequences. For instance, in many cases the same transnational cables carrying data for Internet access are also carrying the international phone connections. It may be helpful to think of these lines as sort of massive DSL connections, in which the same fiber optic lines carry both data and voice. Even when authoritarian regimes want to cut off their population’s access to the Internet, they generally do not want to cut off their own access to international telephone networks. Recall from our discussion of networks that cell phones only provide the illusion of a wireless connection. Downstream, those calls are linked into the wired phone system, and thus are just as vulnerable to being cut off from the outside world since they rely on the same international lines. While true satellite phones do exist, they are relatively rare.

The implications of cutting those lines are more complicated than just a tradeoff between regime access to phones and public access to the Internet. The globalization of trade means that cutting those lines also means shutting down the entire financial sector for most countries, often including all domestic electronic transactions.

In addition, for a small number of states, the physical cutting of lines is even more problematic because they are transit states. Egypt for example, by virtue of the Suez Canal, is physically crossed by a half dozen high-capacity cables running from East Asia, along the Indian Ocean, through the Red Sea, and across the Mediterranean to Western Europe. Several of the lines come ashore at Suez and run overland to Cairo before plunging into the Mediterranean. This makes a physical cutting of cables very tricky for Egypt, because doing so runs the risk of taking down a significant portion of the network and telephone connections between Asia and Western Europe in addition to merely cutting off domestic access to the Internet and phones. While it’s not impossible to imagine such an action being taken, it does add a significant diplomatic concern to the strategic calculation.

Shutting down the network domestically can be a more viable strategic option. While in theory networks have a multiplicity of routes between different nodes in order to make them more fault tolerant, the compromises of real-world engineering have left systematic bottlenecks in the network underlying the Internet.

The Internet is extremely fault tolerant and by nature self-repairing, in the sense that when connections between nodes are broken, different connections can usually be found. But it still relies on some fairly significant hierarchical structures for the sake of efficiency. Take again the example of a home Internet connection. If the cable into your house is cut, your network connection is down. You have a single point of failure. The picture with wireless connections is a bit trickier. If the cable into your house is cut, the little wireless router you have will no longer have a connection to the Internet, but if your laptop can connect to the neighbor’s WiFi (leaving aside issues of privacy, security, etc.), you can still access the Internet. On the other hand, if the larger cable that connects your entire subdivision of houses is physically cut, it doesn’t matter if you can connect to your neighbor’s WiFi, because upstream from both of your connections is a single point of failure. The further upstream you go, the larger the number of network connections affected by a single failure.
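The hierarchy described above can be sketched as a toy tree (the node names and topology here are invented for illustration, not a real network): a node stays online only if every link on its path upstream toward the wider Internet survives.

```python
# Toy model of a hierarchical access network (a sketch, not real routing code).
# parent[node] = the next hop upstream toward the wider Internet ("core").
parent = {
    "home_A": "subdivision",
    "home_B": "subdivision",
    "subdivision": "city_hub",
    "city_hub": "core",
}

def is_online(node, cut_link):
    """A node is online if its path upstream to 'core' avoids the cut link."""
    while node != "core":
        up = parent[node]
        if (node, up) == cut_link:
            return False
        node = up
    return True

# Cutting one house's line affects only that house...
print(is_online("home_A", ("home_A", "subdivision")))    # False
print(is_online("home_B", ("home_A", "subdivision")))    # True
# ...but cutting the subdivision's upstream line takes out every home below it.
print(is_online("home_A", ("subdivision", "city_hub")))  # False
print(is_online("home_B", ("subdivision", "city_hub")))  # False
```

The same logic scales upward: the closer the cut link sits to the core, the larger the subtree of nodes that goes dark at once.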

These technical distinctions became important during the Gezi protests in Turkey in 2013. Widespread government control of the media combined with a technically literate population to provide a perfect storm of social media usage. Turkish citizens flooded Twitter at the height of the protests, with some two million tweets using Gezi-related hashtags posted from within Turkey during the 12 most intense hours of the crisis. This led to Prime Minister Erdoğan labeling social media as “the worst menace to society” and to the shutting down of cell phone networks. However, the Turkish regime was unwilling or unable to shut down the Internet further upstream from the cell phone towers, and so citizens with WiFi routers removed their passwords and security, allowing social media activity to continue unabated (Naidoo, 2013).

Of course, connections that are large enough or important enough (and at a certain point, the two become synonymous) will have multiple physically redundant connections in order to guarantee connectivity in the case of a single failure. However, this adds cost linearly rather than realizing diminishing costs of scaling. Diminishing costs are achieved largely by reuse, which is anathema to physical redundancy. Installing fully redundant connections will cost twice as much as a single line; doubly redundant ones will cost three times as much. The benefit of having a redundant connection into houses cannot justify the doubling of the cost of Internet service on the market.

The same logic applies to why we do not reinforce the robustness of networks by connecting the end nodes to each other (i.e., a systematic and hardwired version of connecting to your neighbor’s WiFi). We technically could run lines between all of the houses in a subdivision, so that if any house’s network connection failed, it would have multiple other routes out to the larger network. This again comes back to cost, though: first, the cost of laying a great deal of additional cable, and second, the added cost of running more complex routers in each house in order to take advantage of the more complex network. But an additional problem is that these physically nearby nodes all tend to be within the same overall part of the network, such that a failure far enough upstream would take them all out anyway, rendering end-to-end connections moot.

These sub-areas of the Internet are called autonomous networks (formally, autonomous systems, each identified by an autonomous system number, or ASN), and are the chunks of the Internet that are self-contained, in the sense that each is controlled by a single entity [1] and all the nodes inside that section have the same limited set of routes out to the larger network. Most small and local ISPs tend to be a single autonomous network. Some large corporations in the developed world that were early adopters of the Internet have autonomous networks all to themselves as well. The connection of autonomous networks to the rest of the network is a bottleneck that is relatively easy to cut off or seize control over, and it represents a much smaller number of relatively centralized locations to strike at than individual nodes. Despite the billions of individual nodes that are part of the Internet at this point, there are only some 44,000 autonomous networks in the world, which is a far more tractable number for purposes of control (Bates, et al., 2013). At the beginning of the Egyptian uprising in 2011, the country had only 52 autonomous networks (Toonk, 2011).

The relative size of these autonomous networks can vary greatly. For instance, when the Egyptian Internet was shut down on 28 January, only 26 of the 52 autonomous networks dropped offline, but those ASNs represented 90 percent (2,576 of 2,903) of the subnetworks in Egypt. Over the following three days, 12 more ASNs were taken offline one by one. In legal terms, there are only 45 ISPs in Egypt controlling those 52 autonomous networks, which does not make for a particularly large number of entities for the regime to seize control of.

Cutting an ASN off from the outside world can be accomplished by manipulating Border Gateway Protocol (BGP) tables. BGP tables are essentially advertised roadmaps: each ASN announces to the larger network which ranges of network addresses it can deliver traffic to, so that when someone tries to reach a particular address, the BGP announcements tell routers exactly how to get there. If an ISP stops broadcasting its routes (or, more typically, announces a meaningless route that directs traffic off a digital cliff), nodes on the outer network have no way of reaching nodes inside that ASN, and nodes inside have no way of reaching the outside world. So using BGP to take the Egyptian Internet offline would have required action by several dozen physically distinct entities.
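The mechanics can be sketched in a few lines (a toy model with invented prefixes and ASN labels, not real BGP, which involves peering sessions and path attributes): the global routing picture is reduced to a dict from announced address block to originating ASN, and a withdrawal simply deletes those entries.

```python
# Toy sketch of BGP withdrawal: the "global table" maps each announced
# address block (prefix) to the ASN that originates it. All prefixes and
# ASN names below are invented for illustration.
announcements = {
    "196.200.0.0/16": "AS_EgyptISP1",
    "41.32.0.0/12":   "AS_EgyptISP2",
    "8.8.8.0/24":     "AS_Foreign",
}

def withdraw(table, asn):
    """Drop every prefix the given ASN announces (a route withdrawal)."""
    return {prefix: origin for prefix, origin in table.items() if origin != asn}

def reachable(table, prefix):
    """Outside routers can only forward traffic toward announced prefixes."""
    return prefix in table

table = withdraw(announcements, "AS_EgyptISP1")
print(reachable(table, "196.200.0.0/16"))  # False: dropped off the Internet
print(reachable(table, "8.8.8.0/24"))      # True: everyone else is unaffected
```

The point of the sketch is the asymmetry: each ISP controls only its own announcements, which is why taking a whole country offline this way requires coordinated action by every ISP at once.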

The Syrian experience is a stark example of just how centralized Internet access can be, and makes a good contrast with Egypt. Syria has about a quarter the population of Egypt (and approximately the same level of Internet use among the population, about 25 percent) but only has a tenth as many ASNs active in the country (5), and a fiftieth as many subnetworks (62). Four of those autonomous networks are foreign controlled and located outside of the country. The hubs of those ASNs are outside of Syria’s borders, with physical cables running into Syria’s territory from Turkey or Italy (via submarine cables). The single ASN originating within Syria is physically controlled by the Syrian Telecommunications Establishment (STE). While Egypt had nearly 3,000 subnetworks, Syria only has 62, and 56 of those are within the STE’s autonomous network (Toonk, 2012).

This made Syria different from Egypt in two ways. First, it was capable of cutting off the majority of the country’s Internet access very easily and essentially in-house through the STE, but the remainder of the access was relatively untouchable short of physically cutting cables at the border. Intriguingly, Syria has not taken advantage of this fact to shut down the country’s Internet access. We know that the regime is capable of doing so because it used BGP to take the country offline for 48 hours in November 2012 and for 12 hours in the first week of May 2013. In both cases, the Syrian regime announced that rebels had intentionally physically cut cables, but that claim does not survive the evidence of how network traffic dropped off in either instance, nor was it militarily feasible for the rebels to have been in a physical position to have done so. So, Syria has demonstrated the technical capacity to take the majority of the country’s Internet offline at will, but has decided, for strategic reasons, not to do so permanently.

The method that Egypt ended up using to shut down Internet access is quite interesting, because despite the apparent simplicity of seizing a few dozen companies, this was apparently beyond the capabilities of the regime in the midst of the crisis. During the Egyptian uprising of 2011, the several-day delay in shutting down access to social media was due to a pièce de résistance of bureaucratic obstructionism by the Egyptian Telecommunications Ministry. As near as can be puzzled out, the Ministry was ordered to block access to Twitter and Facebook (among other sites), which it proceeded to do at the DNS level (a technique of attacking the application layer that I discuss in the next section). However, since according to the letter of the law the Ministry’s mandate was to fix any interruptions of service, it then set up proxies to work around the problem it had itself introduced. The mutual exclusivity of their orders and their charter led Ministry officials to blithely follow the letter of their orders with one hand while undermining the spirit of them with the other.

Once the state security services realized what was happening, they threatened the Ministry and prepared to order each of the ISPs individually to shut down access. When the Ministry did take down the entire Internet, it did so in what has been described as the most responsible way that it could: it took the network down by simply powering down the Ramses Exchange (Woodcock, 2011). By doing so, once it became clear that the network was going to be cut no matter what it did, the Ministry ensured that the Internet could also be turned back on with minimal effort. Rather than allowing the state security apparatus to individually issue orders to (and force compliance from) the 45 ISPs, the central shutdown ensured that if power was restored, network access for the entire country would be restored automatically and nearly instantaneously. It also ensured that ultimate control of whether that happened remained in the hands of the Ministry instead of troops at the various ISPs.

The Ramses Exchange is an Internet Exchange Point, which is a centralized hub through which different autonomous networks connect to each other. Internet service providers all hook into an Internet Exchange, much like an antiquated telephone switchboard, in order to allow network traffic from each ASN to reach the other ones. The danger, of course, is that Internet Exchanges are exceedingly vulnerable to centralized seizure, but the advantage is in cost savings. There are usually only a few per country, at most. Egypt only had two, although the Ramses Exchange carried an order of magnitude more traffic than the other. Russia only has 16 Exchanges, China only three, and Tunisia has a single exchange in its capital. Many smaller countries don’t have exchanges at all, with ISPs linking together upstream outside the country entirely. Syria is a good example of a state with no domestic Internet Exchange Point. Egypt’s Ramses Exchange handles a large portion of all Internet traffic in the larger Middle East in addition to domestic Egyptian traffic. When the Ministry shut down Egyptian access through the Ramses Exchange, it did so by simply cutting power, not by cutting any of the network lines going through the building. So international network traffic passing through was unaffected, but the routers that connected Egyptian ISPs to them were powered off.

Exchanges are not the only way that traffic can travel between ISPs. In more advanced states, large ISPs typically reach agreements to directly hook their ASNs together in order to ensure a multiplicity of network routes. It is the states of moderate development that rely the most on exchanges for linking their networks together, because of the cost effectiveness of such a design.

This is a warning of just how vulnerable a state like Egypt is to being taken off the Internet by shutting down a central node, of how easy it is for a technically capable regime to assert de facto ownership. But it is also a warning to authoritarian regimes of the power wielded by the branch of government responsible for regulating telecommunications. That particular ministry can end up playing a role similar to that of the military in authoritarian showdowns with the population, in the sense that it is an independent actor that the regime must either trust or co-opt in order to maintain control.

Had a similar situation arisen 20 years ago with television networks, the regime could have shut down access with almost trivial effort. Even with no knowledge of who owned the television transmitters or where they were, it would be child’s play to simply scan the skyline for large antennae in order to locate the source of the broadcasts. And even if a regime were so ham-handed that it could not take over the studios to broadcast their own content, it would take but a few well-placed explosives to simply blow up the ability to broadcast, with no deeper strategic concerns.

Finally, while satellite connections are the most difficult for a regime to control, seeing as they bypass any infrastructure controllable by the local government, they are not typically a major concern. First, they are rare in relation to more traditional connections because of their expense. Second, while such connections are impossible to directly co-opt, they are quite simple to shut down entirely with a little planning, due to their vulnerability to jamming. Jamming functions by blasting random noise at a very high amplitude onto a specific range of the electromagnetic spectrum. This noise overwhelms any attempts to broadcast meaningful data (be it radio, television, cellular, or Internet signals) along that particular range. Such jamming technology is relatively cheap (a $US6,500 device will suffice to blanket a radius of five kilometers) (Small Media, 2012). Iran, in particular, has built quite an arsenal of jamming equipment from Chinese sources, drawing the ire of the International Telecommunication Union in the last year for generating such intense jamming that satellite signals were disrupted on three continents (Broadcasting Board of Governors, 2012).

4.3. Controlling the application layer

The application layer of the Internet is why physically cutting lines may also not be sufficient to take down the Internet in some countries. Remember, physically cutting lines at the borders is only sufficient to cut off access to the outside world. This means that users inside a state’s borders are unable to get to servers outside that state’s networks, but they still have physical access to servers inside the country. In smaller states with few Web sites hosted domestically, this may be an appropriate solution, but in larger states a different story emerges.

The United States, for example, contains a vast proportion of the Internet’s servers. If the lines at the borders were physically cut, most American users would not actually notice a difference in most of their services. Network routes would not only remain intact, they would remain largely unchanged. The United States is a special case in this regard, but the same logic applies to other countries as well. For example, the most used social media Web site in the former Soviet Union is Vkontakte, a Russified Facebook clone, with over a hundred million users. The service is hosted on a cluster of some ten thousand servers, all located in four data centers in Moscow and Saint Petersburg (Blinkov, 2012). If Russia were to sever its foreign connections, service to Vkontakte would be largely uninterrupted, though applications hosted outside the country (such as Facebook or Twitter) would be inaccessible.

On the other hand, a large proportion of Russia’s bloggers use LiveJournal, which although founded in the United States in 1999, is now wholly owned by the Russian media company SUP Media, and is physically headquartered in Moscow. But its servers are still located in the United States, in Southern California (Russell and Echchaibi, 2009). Thus LiveJournal’s Russian users can be cut off by physically cutting the lines to the outside world, despite being a Russian corporation.

The physical presence of servers inside Russia makes Vkontakte far more vulnerable to direct physical action by the regime, but also immune to the regime simply cutting physical lines, while LiveJournal is in the opposite situation, vulnerable to cut off and action against the corporation but not to physical seizure of the servers.

This leads to a sort of tension in the decision-making of server placement, if a strategic actor cares about keeping access to certain Internet services available to the population of a state. Keeping the servers outside that state makes the connections to the outside world the vulnerable point, while keeping the servers inside the country makes the servers themselves the vulnerable point of failure.

China is of particular interest in this regard. While the “Great Firewall” of China is legendary for limiting the access of Chinese citizens to the larger Internet, one reason it works is because of China’s ability to construct native alternatives to many applications that citizens otherwise might want to use from the external world. Various domestic alternatives exist to Facebook (Renren), Twitter (Sina Weibo), YouTube (Tudou, Youku), Google (Baidu), blogging (Sina blogs), each with the advantage (from the regime’s point of view) of being physically located within China itself. By blocking access to much of the external Internet, and providing domestic alternatives for applications, the Chinese regime has seized a sweet spot in which they can gain many of the benefits of the Internet without giving up control.

However, this is an extremely expensive strategy requiring both a strong state and one with high levels of technical knowledge, which simply puts it beyond the reach of most regimes. In addition, China had the advantage of building these applications gradually over time, rather than attempting to construct an entire Internet of applications wholesale and dropping it into users’ laps at once. An illustrative example is Iran. In the wake of the Green Revolution’s extensive use of social media, Iran launched a project to create a “national Internet” in order to cut the country completely off from the external Internet in the long run. In the short term, it relatively crudely blocked access to individual Web sites it deemed objectionable (such as YouTube). However, the blocking of Gmail in September 2012 had to be rolled back after only a week because Gmail is officially used by most branches of the Iranian government, sparking complaints on the floor of the legislature that none of the legislators could access their governmental e-mail (Torbati, 2012). Despite boasting a highly educated population and the twenty-fifth largest economy in the world, Iran has failed after four years to even roll out a domestic replacement for e-mail, one of the most fundamental applications provided by the Internet.

Iran’s experience also highlights an additional strategy for controlling the application layer of the Internet: simple blockage of particular Web sites, a step that requires the cooperation (voluntary or otherwise) of ISPs within the country. This is technically accomplished at the Domain Name System (DNS) level. DNS is the system that translates Web addresses into the IP addresses that locate nodes on the network. Whenever a Web browser is directed to a domain, it first asks a DNS server what IP address that name corresponds to, similar to the way that a person can look a name up in a phone book and find the corresponding number to dial on the telephone. Because DNS servers are operated at the ISP level, they are physically and legally vulnerable to control by a regime. Regimes usually handle this by ordering the Ministry of Telecommunications (or equivalent branch of the national government) to block access to a list of URLs. The ministry then issues orders to the ISPs.
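The blocking logic can be sketched as a toy resolver (the domains and addresses below are invented for illustration, not real records): before answering a lookup, the ISP’s resolver consults a government-supplied blocklist.

```python
# Toy sketch of DNS-level blocking at an ISP resolver. Domains and IP
# addresses are illustrative placeholders, not real records.
records = {
    "news.example":   "203.0.113.10",
    "social.example": "203.0.113.20",
}
blocklist = {"social.example"}   # the regime's list of banned domains

def resolve(domain):
    """Return the IP for a domain, or None if the resolver refuses to answer."""
    if domain in blocklist:
        return None              # or an address that leads nowhere
    return records.get(domain)

print(resolve("news.example"))    # '203.0.113.10'
print(resolve("social.example"))  # None: blocked at the resolver
```

Note that the site itself is untouched; only the phone book entry is suppressed, which is why a proxy (whose name is not yet on the list) trivially restores access.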

This is relatively easy to do if a country only has a limited number of ISPs operating within its borders, but becomes phenomenally more difficult to enforce as the number of providers goes up. It also becomes unwieldy if the regime has a large number of Web sites that it wants banned. Another drawback to this approach from the regime’s point of view is that it is relatively easy even for casual computer users to get around such restrictions through the use of proxies, which are simply servers set up outside the blocking zone that pass information between the user and the blocked site. This leads to a race between regime and users, in which the regime must add ever more lists of banned URLs in order to keep the users from getting to an ever growing list of proxies.

DNS-level blocking worked in the case of Tunisia, which during the Arab Spring was able to shut down access to Twitter and other social media sites it deemed threatening almost immediately. On the other hand, it can be ineffectual if the Ministry of Telecommunications cannot be trusted. As discussed earlier, this is exactly what happened in Egypt, leading to a several-day delay in the takedown of the Internet.

A novel way of subtly attacking the application layer is apparent in Russia, with the leveling of massive Distributed Denial of Service (DDoS) attacks against LiveJournal and related sites at intervals over the last two years. DDoS attacks work by having many computers (often ones that have been compromised in advance through computer viruses or similar technology) send requests to a single application at the same time. The sheer weight of Internet traffic makes the particular application unavailable to anyone else. Such attacks are difficult to defend against because the “attacking” nodes are not doing anything that they shouldn’t be allowed to do under normal circumstances, other than the fact that they are doing it repeatedly and in great numbers. It is widely rumored that the Russian regime is responsible for the DDoS attacks on LiveJournal and other sites, although there is no hard evidence. The move would make strategic sense in light of the physical location of LiveJournal’s servers. Direct action by the regime could cause friction with the United States over freedom of speech concerns, while tactics like DDoS allow plausible deniability.
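The arithmetic of saturation is simple enough to sketch (all numbers here are assumed purely for illustration; no networking is involved):

```python
# Back-of-the-envelope model of denial-of-service saturation.
# Assumed, illustrative numbers: a server that can answer 100 requests per
# second, facing 20 legitimate requests and a flood of 10,000 junk requests.
CAPACITY = 100
legit, flood = 20, 10_000
total = legit + flood

# If the server answers arrivals in proportion to their share of traffic,
# the legitimate requests answered per second are only:
legit_served = CAPACITY * legit / total
print(round(legit_served, 2))  # 0.2 -- virtually every real user is crowded out
```

The attackers never need to break anything: they only need to outnumber legitimate traffic by enough that the server’s fixed capacity is consumed by junk.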

A final category of attacks at the application level is the use of the application layer against itself in the form of viruses, worms, and other malware. For instance, in Iran there have been reports of man-in-the-middle attacks, in which software is inserted between the user and server in order to monitor activity. The software watches the back and forth of communication, and while it passes it along, it also makes note of the activity and potentially takes action as a result. The classic example of a man-in-the-middle attack targets banking transactions. The malware is positioned on a server between the banking server and the end users, and when a user issues a transaction (for instance, to move funds), the malware adjusts the transaction before passing it on to the bank in order to redirect the funds elsewhere, and then modifies the response from the bank back to the user to reflect what the user thought would happen rather than what really happened.
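That banking example can be illustrated with a toy model (no real networking; all names and amounts are invented): three functions stand in for the bank, the interceptor, and the exchange between them.

```python
# Toy illustration of the man-in-the-middle pattern described above.
# Names, amounts, and the "ledger" are invented for illustration.
ledger = []   # what the bank actually records

def bank(request):
    """The bank executes whatever transfer it receives and confirms it."""
    ledger.append((request["amount"], request["to"]))
    return f"transferred {request['amount']} to {request['to']}"

def man_in_the_middle(request):
    """Silently reroute the transfer, then forge a reassuring response."""
    original_to = request["to"]
    bank(dict(request, to="attacker"))   # the bank sees the tampered request
    # The user sees a response describing what they *asked* for:
    return f"transferred {request['amount']} to {original_to}"

response = man_in_the_middle({"amount": 100, "to": "alice"})
print(response)  # 'transferred 100 to alice'
print(ledger)    # [(100, 'attacker')] -- what actually happened
```

The censorship variant discussed next follows the same shape: instead of rewriting a funds transfer, the interceptor rewrites or records the messages passing through it.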

Often this is used for automated censorship, in order to screen out particular phrases or words, while not entirely shutting down access to a particular domain. Both Tunisia and Egypt attempted to use this approach in order to capture user login information for external social media sites, and to automatically screen out anti-regime messages. Iran has also used this approach over the last few years, capturing access to dissidents’ e-mail and social media accounts.

Man-in-the-middle attacks and customized malware require a very high level of technical sophistication, much higher than what is required to simply use DNS blocking, and certainly more than physical attacks on the network infrastructure. This leads back to that key component of control: a regime’s capacity for control of the Internet is contingent on its fluency in Internet technologies.



5. Can’t stop the signal

There is much work to be done on what the effect of the Internet is on the possibilities of democratization, but that remains beyond the scope of this paper. Instead, let us posit simply that freedom of speech is a good thing in and of itself, and recognize that attempts to subvert open access to the Internet work counter to this good. So knowing what we now know about who owns the Internet, and what the strategic landscape looks like for those who would control and co-opt communication, the question remains: what can be done to co-opt the cooptation?

There are several vulnerabilities in network architecture that make it possible to assert control over the Internet, as discussed in the last section. As should be clear, the best way to make it more difficult to do so is to add enough network redundancy that centralized locations cannot be seized. The problem, of course, is that redundancy is expensive. However, it may be possible to use the extraordinary properties of the individual nodes to engineer a solution to this problem.

When your cell phone makes a call, it connects to the nearest cell tower, which then passes the data onto the hardwired telephone networks. This means that taking down the cell tower takes down connectivity. However, it is technically possible to design a network in which there are no centralized nodes at all, or at least in which they matter less. In this case, when your cell phone makes a call, it connects to another cell phone in someone else’s pocket, which passes the connection to another, and so on, until the connection reaches the destination cell phone. The data is being passed back and forth along that chain of other phones. The beauty of a system like this, which is a pure peer-to-peer system, is that there is no centralized infrastructure that can be shut down. The same logic applies to WiFi-enabled devices. With appropriate software, any laptop or tablet can be a WiFi router, and any device that has a connection to both WiFi and the cellular network can act as a network bridge between the two. Such a system has been described as an “Internet of Things”.

The cost of such a system is a great deal of efficiency in the connections, in addition to requiring significantly more computing power in each individual device. However, the latter concern is less and less relevant, especially with the advent of smart phones. Thanks to the exponential growth of processing power over the last few decades, the average smart phone floating around today has somewhere around a thousand times the processing power of high-end desktop computers or servers of 25 years ago.

As we discussed earlier, the nodes of the Internet are interchangeable on a theoretical level. Because everything is implemented with software, there is no absolute reason why we must use centralized servers and exchanges. There is no technical reason why you cannot host a Web site on your cell phone, just as there is no technical reason why other cell phones cannot act as routers and servers instead of the centralized systems we currently use for those purposes. Someone just needs to port existing applications to do so.

Rather than network traffic funneling up through isolatable autonomous networks, traffic could traverse any route hopping among devices that can talk to each other. And so long as a single cell phone sitting near the border still has a connection to a foreign cell phone, access to the external Internet will remain intact, even if at a very slow speed.
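That device-hopping logic is, at bottom, ordinary shortest-path search over a peer graph. A minimal sketch (the topology and node names are invented): breadth-first search finds a chain of in-range devices, and the route survives even when the centralized cell tower is taken down.

```python
# Toy mesh-routing sketch: each device links only to devices in radio range,
# and a route is any chain of hops between them. Topology is invented.
from collections import deque

links = {
    "phone_A": {"phone_B", "tower"},
    "phone_B": {"phone_A", "phone_C", "tower"},
    "phone_C": {"phone_B", "border_phone"},
    "tower":   {"phone_A", "phone_B"},
    "border_phone": {"phone_C"},   # still in range of a foreign network
}

def find_route(start, goal, down=frozenset()):
    """Shortest hop-by-hop route from start to goal, avoiding downed nodes."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links[path[-1]] - seen:
            if nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None   # no surviving chain of devices

# Even with the cell tower shut down, traffic can hop phone to phone:
print(find_route("phone_A", "border_phone", down={"tower"}))
# ['phone_A', 'phone_B', 'phone_C', 'border_phone']
```

Shutting down the central node merely lengthens the path; only removing every intermediate device severs the connection, which is exactly why such a design devalues seizure of centralized facilities.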

The efficiencies of the current system need not be jettisoned with the deployment of an Internet of Things. Such a deployment would always be supplemental, with any protocol preferring the shortest and most efficient route, using device hopping as a mechanism of last resort. But if that last resort is technically in place and automated as part of the normal routing of network traffic, then the incentive for a regime to seize control of centralized facilities will be far less strategically valuable since it will slow the network but not shut it down in any meaningful way. Work on this sort of backstop system has ready civilian applications in the developed world as well, given its potential for securing communication in the case of other events that can knock out central facilities, such as natural disasters.

While this has the potential to ensure that internal network connectivity can be maintained, it does not solve the problem of regimes capable of controlling the physical lines that leave the country and cutting off their populations from the outside world in that manner. One solution is deployable satellite connections, which do exist in some capacity already in the military. The construction of backpack-sized network relays keyed to provide an automated connection to the satellite network would allow the United States to provide a backstop against the cutting of connections to any population around the world. This step would be particularly powerful in the case of an Internet of Things, but even under the current sorts of network architecture it would provide a tool to keep any WiFi or cellular device within range connected to the network, regardless of local seizure of communications.

In addition, policy changes are critical to prevent exports of particular products used exclusively for the purposes discussed in this paper. Similarly, software designed explicitly to spy surreptitiously upon users should be added to restriction lists. Even the commercial applications that do exist for such software (spying upon one’s own employees) are an ethically gray area at best, and the international market for such software depends almost exclusively on authoritarian regimes. While we may not be able to prevent such software from being developed in the long run by dictators with their own software industries, there is absolutely no reason to do the work for them.

To combat the use of software against populations, a proactive approach would be best. A working group, perhaps within the framework of DARPA, should be founded in order to do a specific form of anti-virus work. This group should take the same role that anti-virus software manufacturers take with regard to systematically finding threats and deploying solutions, except applying it specifically to software designed for monitoring and tracking users by regimes.

In addition, the experience of Egypt with its less than cooperative Ministry of Telecommunications poses a compelling opportunity. America has a long history of providing training programs for officer corps from around the world, built at least partially on the hope that working with American officers will plant the seeds of liberal values that might make a difference in a crisis situation down the road. The same sorts of programs could be particularly effective with technical experts. Utilizing the existing intensely anti-authoritarian community of computer experts in the West to make connections with computer experts from developing countries could duplicate such attitudes among the very people upon whose technical expertise authoritarian regimes rely for maintaining control of the Internet. This could be a double-edged sword, just as it has been for military training, but represents a unique way to disrupt authoritarian control over communication.

The sine qua non of such a process is the requirement of a fundamental change in American policy thinking toward tracking on the Internet, especially in the wake of the Snowden affair. Every few years legislation is introduced that attempts to mandate the creation of monitoring systems for the purpose of tracking down the identities of those who pirate copyrighted material online. Arguments in favor of such legislation tend to call on the specter of terrorism for support as well, arguing that re-engineering the Internet to make such tracking possible will also make tracking terrorists much easier. Such technology though is indistinguishable from technology for tracking political dissidents. At the moment, through both intentional design and accidents of old technical decisions, we have inherited an Internet that allows cheap and anonymous communication to the masses both in the developing and developed world. The long term prospects of grassroots revolutions should not be sacrificed for the short-term profits of private interests. End of article


About the author

Steven Wilson is a Ph.D. candidate in political science at the University of Wisconsin-Madison. His work specializes in the effects of the Internet on authoritarian regimes, with a particular focus on the states of the former Soviet Bloc.
E-mail: slwilson4 [at] wisc [dot] edu



1. In certain situations multiple entities may have joint control over an autonomous network, but the distinction for our purposes is unimportant.



Taghreed M. Alqudsi-ghabra, Talal Al-bannai, and Mohammad Al-bahrani, 2011. “The Internet in the Arab Gulf Cooperation Council (AGCC): Vehicle of change,” International Journal of Internet Science, volume 6, number 1, pp. 44–67, and at, accessed 19 January 2015.

Tony Bates, Philip Smith, and Geoff Huston, 2013. “Classless inter-domain routing report,” at, accessed 19 January 2015.

Bruce Bimber, 2001. “Information and political engagement in America: The search for effects of information technology at the individual level,” Political Research Quarterly, volume 54, number 1, pp. 53–67.
doi:, accessed 19 January 2015.

Ivan Blinkov, 2012. “Vkontakte architecture,” at, accessed 4 April 2013.

Broadcasting Board of Governors, 2012. “Iranian jamming disrupts U.S. international broadcasting across several continents” (4 October), at, accessed 19 January 2015.

Valerie J. Bunce and Sharon L. Wolchik, 2010. “Defeating dictators: Electoral change and stability in competitive authoritarian regimes,” World Politics, volume 62, number 1, pp. 43–86.
doi:, accessed 19 January 2015.

Manuel Castells, 2013. Communication power. New York: Oxford University Press.

Manuel Castells, 2012. Networks of outrage and hope: Social movements in the Internet age. Cambridge: Polity.

Manuel Castells, 2009. The information age: Economy, society, and culture. Second edition. Malden, Mass.: Wiley-Blackwell.

Mridul Chowdhury, 2008. “The role of the Internet in Burma’s Saffron Revolution,” Berkman Center for Internet & Society, Harvard University (28 September), at, accessed 19 January 2015.

Amy Chozick, 2012. “For Syria’s rebel movement, Skype is a useful and increasingly dangerous tool,” New York Times (30 November), at, accessed 19 January 2015.

Ronald Deibert, John Palfrey, Rafal Rohozinski and Jonathan Zittrain (editors), 2008. Access denied: The practice and policy of global Internet filtering. Cambridge, Mass.: MIT Press.

Laura DeNardis, 2010. “The emerging field of Internet governance,” Yale Information Society Project, Working Paper Series, pp. 1–21.
doi:, accessed 19 January 2015.

Marta Dyczok, 2005. “Breaking through the information blockade: Election and revolution in Ukraine 2004,” Canadian Slavonic Papers, volume 47, numbers 3–4, pp. 241–264.

Tessa Tan-Torres Edejer, 2000. “Disseminating health information in developing countries: The role of the Internet,” BMJ: British Medical Journal, at, accessed 19 January 2015.
doi:, accessed 19 January 2015.

Snorre Fagerland, 2012. “Syrian spyware,” at

Robert Faris and Bruce Etling, 2008. “Madison and the smart mob: The promise and limitations of the Internet for democracy,” Fletcher Forum of World Affairs, volume 32, number 2, pp. 65–85, at, accessed 19 January 2015.

Henry Farrell, 2012. “The consequences of the Internet for politics,” Annual Review of Political Science, volume 15, pp. 35–52.
doi:, accessed 19 January 2015.

Matthew Gentzkow and Jesse M. Shapiro, 2010. “Ideological segregation online and offline,” National Bureau of Economic Research (NBER) Working Paper, number 15916, at, accessed 19 January 2015.
doi:, accessed 19 January 2015.

Rachel Gibson, 2001. “Elections online: Assessing Internet voting in light of the Arizona Democratic primary,” Political Science Quarterly, volume 116, number 4, pp. 561–583.
doi:, accessed 19 January 2015.

David Gilbert, 2013. “Spy software use increases to monitor dissidents, activists and journalists,” International Business Times (1 May), at, accessed 19 January 2015.

Joshua Goldstein, 2007. “The role of digital networked technologies in the Ukrainian Orange Revolution,” Berkman Center Research Publication, 2007-14, at, accessed 19 January 2015.

Josh Halliday, 2011. “UAE to tighten BlackBerry restrictions,” Guardian (18 April), at, accessed 19 January 2015.

Josh Halliday, 2010. “India rejects limited access to BlackBerry data as struggle with RIM continues,” Guardian (1 October), at, accessed 19 January 2015.

Albrecht Hofheinz, 2005. “The Internet in the Arab world: Playground for political liberalization,” International Politics and Society, pp. 78–96, at, accessed 19 January 2015.

Mikko Hypponen, 2012. “Targeted attacks in Syria,” F-Secure: News From the Lab (3 May), at, accessed 19 January 2015.

Bussakorn Jaruwachirathanakul and Dieter Fink, 2005. “Internet banking adoption strategies for a developing country: The case of Thailand,” Internet Research, volume 15, number 3, pp. 295–311.
doi:, accessed 19 January 2015.

Brenden Kuerbis and Milton Mueller, 2007. “Securing the root: A proposal for distributing signing authority,” Internet Governance Project (1 May), at, accessed 19 January 2015.

Myroslaw J. Kyj, 2006. “Internet use in Ukraine’s Orange Revolution,” Business Horizons, volume 49, number 1, pp. 71–80.
doi:, accessed 19 January 2015.

Lawrence Lessig, 2006. Code: Version 2.0. New York: Basic Books.

Lawrence Lessig, 2002. The future of ideas: The fate of the commons in a connected world. New York: Vintage.

Morgan Marquis-Boire, Bill Marczak, Claudio Guarnieri, and John Scott-Railton, 2013. “For their eyes only: The commercialization of digital spying,” Munk School of Global Affairs, University of Toronto (16 September), at, accessed 19 January 2015.

Evgeny Morozov, 2011. The net delusion: The dark side of Internet freedom. New York: Public Affairs.

Milton L. Mueller, 2010. Networks and states: The global politics of Internet governance. Cambridge, Mass.: MIT Press.

Milton Mueller, 2004. Ruling the root: Internet governance and the taming of cyberspace. Cambridge, Mass.: MIT Press.

Emma C. Murphy, 2006. “Agency and space: The political impact of information technologies in the Gulf Arab states,” Third World Quarterly, volume 27, number 6, pp. 1,059–1,083.
doi:, accessed 19 January 2015.

Kumi Naidoo, 2013. “The last tree or the final straw?” (1 June), at, accessed 19 January 2015.

Pippa Norris, 2001. Digital divide: Civic engagement, information poverty, and the Internet worldwide. New York: Cambridge University Press.

Scott E. Page, 2007. The difference: How the power of diversity creates better groups, firms, schools, and societies. Princeton, N.J.: Princeton University Press.

Ben Petrazzini and Mugo Kibati, 1999. “The Internet in developing countries,” Communications of the ACM, volume 42, number 6, pp. 31–36.
doi:, accessed 19 January 2015.

Adrienne Russell and Nabil Echchaibi (editors), 2009. International blogging: Identity, politics and networked publics. New York: Peter Lang.

Kay Lehman Schlozman, Sidney Verba, and Henry E. Brady, 2010. “Weapon of the strong? Participatory inequality and the Internet,” Perspectives on Politics, volume 8, number 2, pp. 487–509.
doi:, accessed 19 January 2015.

Zsolt Sereghy, 2012. The Arab revolutions: Reflections on the role of civil society, human rights and new media in the transformation processes. Stadtschlaining: Österreichisches Studienzentrum für Frieden und Konfliktlösung.

Dhavan V. Shah, Nojin Kwak, and R. Lance Holbert, 2001. “‘Connecting’ and ‘disconnecting’ with civic life: Patterns of Internet use and the production of social capital,” Political Communication, volume 18, number 2, pp. 141–162.
doi:, accessed 19 January 2015.

Clay Shirky, 2008. Here comes everybody: The power of organizing without organizations. New York: Penguin Press.

Small Media, 2012. “Satellite jamming in Iran: A war over airwaves,” at, accessed 19 January 2015.

Ekaterina Stepanova, 2011. “The role of information communication technologies in the ‘Arab Spring’,” PONARS Eurasia, Eurasia Policy Memo, number 159, at, accessed 19 January 2015.

Ahmad Al Sukkar and Helen Hasan, 2005. “Toward a model for the acceptance of Internet banking in developing countries,” Information Technology for Development, volume 11, number 4, pp. 381–398.
doi:, accessed 19 January 2015.

Cass R. Sunstein, 2007. Republic.com 2.0. Princeton, N.J.: Princeton University Press.

Andree Toonk, 2012. “Syria shuts down the Internet,” Border Gateway Protocol Monitoring (29 November), at, accessed 19 January 2015.

Andree Toonk, 2011. “Internet in Egypt offline,” Border Gateway Protocol Monitoring (28 January), at, accessed 19 January 2015.

Yeganeh Torbati, 2012. “Iran unblocks Google email again after officials complain,” Reuters (1 October), at, accessed 19 January 2015.

Ernest J. Wilson, 2004. The information revolution and developing countries. Cambridge, Mass.: MIT Press.

Magdalena E. Wojcieszak and Diana C. Mutz, 2009. “Online groups and political discourse: Do online discussion spaces facilitate exposure to political disagreement?” Journal of Communication, volume 59, number 1, pp. 40–56.
doi:, accessed 19 January 2015.

Bill Woodcock, 2011. “Overview of the Egyptian Internet shutdown,” Department of Homeland Security’s InfoSec Technology Transition Council (February), at, accessed 19 January 2015.

Tim Wu, 2003. “Network neutrality, broadband discrimination,” Journal of Telecommunications and High Technology Law, volume 2, number 1, pp. 141–175, and at, accessed 19 January 2015.

Michael Xenos and Patricia Moy, 2007. “Direct and differential effects of the Internet on political and civic engagement,” Journal of Communication, volume 57, number 4, pp. 704–718.
doi:, accessed 19 January 2015.

Christopher S. Yoo, 2004. “Would mandating broadband network neutrality help or hurt competition? A comment on the end-to-end debate,” Journal of Telecommunications and High Technology Law, volume 3, number 1, pp. 23–68, and at, accessed 19 January 2015.

Xiaolin Zhuo, Barry Wellman, and Justine Yu, 2011. “Egypt: The first Internet revolt?” Peace Magazine, volume 27, number 3, pp. 6–10, and at, accessed 19 January 2015.


Editorial history

Received 12 February 2014; revised 11 October 2014; accepted 19 January 2015.

Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

How to control the Internet: Comparative political implications of the Internet’s engineering
by Steven Lloyd Wilson.
First Monday, Volume 20, Number 2 - 2 February 2015

A Great Cities Initiative of the University of Illinois at Chicago University Library.

© First Monday, 1995-2017. ISSN 1396-0466.