Cloud computing — the creation of large data centers that can be dynamically provisioned, configured, and reconfigured to deliver services in a scalable manner — places enormous capacity and power in the hands of users. As an emerging technology, however, cloud computing also raises significant questions about resources, economics, the environment, and the law. Many of these questions relate to geographical considerations surrounding the data centers that underlie the clouds: physical location, available resources, and jurisdiction. While the metaphor of the cloud evokes images of dispersion, cloud computing actually represents a centralization of information and computing resources in data centers, raising the specter of corporate or government control over information if these geographical issues, especially jurisdiction, receive insufficient consideration. This paper explores the interrelationships between the geography of cloud computing, its users, its providers, and governments.
2. What is the cloud?
3. Who uses the cloud?
4. Where is the cloud?
5. What rules govern the cloud?
6. Conclusion: Clouds without borders?
Cloud computing refers to an emerging model of computing where machines in large data centers can be dynamically provisioned, configured, and reconfigured to deliver services in a scalable manner, for needs ranging from scientific research to video sharing to e–mail. The speed at which cloud computing has permeated Internet activities is astonishing. Recently, the Pew Internet & American Life Project released a survey of users’ attitudes toward cloud computing services (Horrigan, 2008). Although many users may not be familiar with the term, the reality is that most users (69 percent, according to the Pew study) are already taking advantage of cloud computing through Web–based software applications and online data storage services. The popularity and penetration of cloud computing have also been demonstrated through a recent Google search term analysis using Google Trends (Buyya, et al., 2008).
The seemingly–paradoxical combination of the novelty of cloud computing and the rapid near–ubiquity of cloud computing services has created uncertainty about its implications for individual users as well as corporations and governments. This article frames an analysis of cloud computing in terms of four basic questions:
What is the cloud?
Who uses the cloud?
Where is the cloud?
What rules govern the cloud?
The primary contribution of this analysis lies in pondering the implications of the third and fourth questions. Although, quite obviously, “the cloud” refers to machines in large data centers, this simple answer raises a host of interesting and unanswered questions about geography, economics, and jurisdiction — and an accompanying swirl of policy issues — that will shape the future of cloud computing and the level of services available to users around the globe. The main thesis of this article is that cloud computing represents centralization of information and computing resources — quite contrary to the imagery that the label evokes. Centralized resources, by their very nature, are easy to control, both by the corporations that own them and by the governments in whose jurisdictions they operate. This less–discussed fact represents a “darker” or “stormier” side of cloud computing and presents a danger to open information–based societies if these issues are not carefully considered.
Cloud computing is expanding rapidly as a service used by a great many individuals and organizations internationally. Users, however, may not be aware that they are using a cloud service. Cloud providers already offer a variety of services, with users employing cloud computing for storing and sharing information, database management and mining, and deploying Web services, which can range from processing vast datasets for complicated scientific problems to using clouds to manage and provide access to medical records (Hand, 2007). Further, at the level of data available in the cloud — the petabyte scale — entirely new approaches to correlation–based data analysis are made possible by the sheer volume of information and processing capacity (Anderson, 2008). For example, with cloud computing, the New York Times was able to turn former First Lady Hillary Clinton’s schedule of more than 17,000 pages into a searchable database in less than 24 hours (Economist, 2008a). Cloud computing opens up the possibility that a major cloud provider like Google could ultimately become “the world’s primary computer.”
When discussing cloud computing, it is important to note that it encompasses several different components: cloud infrastructure, cloud platform, and cloud application. Cloud infrastructure, or infrastructure as a service, refers to providing basic computing resources as a service. This includes not only computational resources, such as Amazon’s Elastic Compute Cloud (EC2), but also storage, such as Amazon’s S3 (Youseff, et al., 2008). Cloud platform, or platform as a service, refers to providing a computing platform or software stack as a service; a platform is a higher level of abstraction than infrastructure. Examples of platforms include Google’s App Engine and Salesforce. Cloud applications are Web services that run on top of a cloud computing component.
Many individual users already regularly use services that run on a cloud computing component — including e–mail services (e.g., Gmail, Hotmail, and Yahoo!), photo and video services (e.g., Flickr and YouTube), and online applications (e.g., Google Docs and Adobe Express), as well as more targeted services like storing data files and processing large volumes of data. In fact, 69 percent of people online use at least one cloud computing–based service, whether or not they realize it (Horrigan, 2008). At the corporate level, cloud computing services are already available from Google, Amazon, Yahoo, Salesforce, Desktoptwo, Zimdesk, and Sun Secure Global Desktop, among others (Delaney and Vara, 2007; Gilder, 2007; Ma, 2007; Naone, 2007). For example, in January 2008, Amazon Web Services was storing 14 billion units of data, varying in size from a couple of bytes to five gigabytes, and handling 30,000 requests to its database per second (Hardy, 2008). In late 2008, Google announced that it was able to sort one petabyte of data in roughly six hours using 4,000 computers, via its cloud computing architecture (Czajkowski, 2008). Educational uses of cloud computing are also being researched: through an academic–industrial collaboration spearheaded by Google and IBM, in conjunction with six major research universities in the United States, the companies are providing faculty and students with access to clouds for research and education (Lohr, 2007).
2. What is the cloud?
The tremendous growth of the Web over the last decade has given rise to a new class of “Web–scale” problems — challenges such as supporting thousands of concurrent e–commerce transactions or millions of search queries a day. In response, technology companies have built increasingly large data centers, which consolidate great numbers of servers (hundreds, if not thousands) with associated infrastructure for storage, networking, and cooling, to handle this ever–increasing demand. Cloud computing can also serve as a means of delivering “utility computing” services, in which computing capacity is treated like any other metered, pay–as–you–go utility service. Over the years, technology companies, especially Internet companies such as Google, Amazon, eBay, and Yahoo, have acquired a tremendous amount of expertise in operating these large data centers in terms of technology development, physical infrastructure, process management, and other intangibles.
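The metered, pay–as–you–go character of utility computing can be illustrated with a toy billing calculation. The function and rates below are hypothetical, not any provider’s actual pricing; real providers meter many more dimensions (bandwidth, requests, and so on).

```python
def utility_bill(instance_hours, storage_gb_months,
                 compute_cents_per_hour=10, storage_cents_per_gb_month=15):
    """Compute a pay-as-you-go bill in dollars: the customer is
    charged only for the compute hours and storage actually
    consumed, like a metered electric utility.  Rates are kept in
    cents so the arithmetic stays exact."""
    total_cents = (instance_hours * compute_cents_per_hour
                   + storage_gb_months * storage_cents_per_gb_month)
    return total_cents / 100

# A small job: 500 instance-hours plus 100 GB stored for one month.
print(utility_bill(500, 100))  # → 65.0
```

Because the provider’s capital investment is amortized across all customers, a bill like this replaces what would otherwise be an up–front purchase of servers plus ongoing maintenance costs.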
Cloud computing represents a commercialization of these developments. Prior to cloud computing, acquiring such resources — the initial capital investment in purchasing the computers themselves and the significant resources devoted to maintaining the infrastructure — was an expensive and unlikely proposition for organizations and simply impossible for individuals. Now, cloud computing has the potential to benefit both providers and users. Cloud providers gain additional sources of revenue and are able to commercialize their large data centers and the expertise of large–scale data management. Overall cost is reduced through consolidation, while capital investment in physical infrastructure is amortized across many customers. Individual cloud users can store, access, and share information in previously inconceivable ways. Organizational cloud users no longer have to worry about purchasing, configuring, administering, and maintaining their own computing infrastructure, which allows them to focus on their core competencies.
The advent of cloud computing matches a need created by the tremendous amount of information available in electronic format today, which poses data– and processing–intensive problems for a wide variety of organizations and individuals. Financial companies maintain mountains of information about clients; genomics research involves huge volumes of sequence data; and even the serious hobbyist may have more video footage than can reasonably be processed by available machines. However, the most commonly used forms of cloud computing are much more personal, most notably the extremely popular Web–based e–mail and information sharing services. Yet all these scenarios involve the need for the large amounts of processing power that cloud computing provides.
3. Who uses the cloud?
In general, there are three primary ways in which a cloud can be used. In the first mode, the cloud simply hosts a user’s application, which is typically provided as a Web service accessible to anyone with an Internet connection. In such cases, the cloud provider takes over the task of maintaining and running, for example, a company’s inventory database or transaction processing system. The second mode may be thought of as batch processing, in which the user transfers a large amount of data to the cloud along with the application code for manipulating that data; the cloud cluster executes the code and returns the results to the user. An example of this is the recent use of cloud computing services by the New York Times, which processed and converted an archive of 11 million articles in less than a day and for a fraction of the anticipated cost (Gottfried, 2007). The third mode is the temporary use of cloud computing services in conjunction with existing IT infrastructure. This method has been referred to as “cloud bursting” and is beneficial for handling temporary or seasonal peaks in traffic.
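The batch mode described above — ship the data and code to the cluster, collect the results — is the model popularized by frameworks such as Google’s MapReduce. The following is a minimal single–machine sketch of the idea, counting words across documents; the documents are invented for illustration, and in a real cloud the map calls would run in parallel across many servers.

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit (word, 1) pairs from each document.  In a
    cloud deployment these calls would run in parallel on the
    cluster that also holds the data."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce step: sum the emitted counts for each word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the cloud", "the data center", "the cloud provider"]
print(reduce_phase(map_phase(docs)))
# → {'the': 3, 'cloud': 2, 'data': 1, 'center': 1, 'provider': 1}
```

The user’s only obligations are supplying the data and the two small functions; provisioning, scheduling, and fault tolerance are the provider’s problem.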
In all cases, it is extremely important to remember that the user’s data and applications reside (at least for some time) on the cloud cluster, which is owned and maintained by the cloud provider. “Some content may be stored locally on your machine, while other content — content that you in some powerful sense think belongs to you — will be stored remotely. Where actually? You won’t have a clue.”  This unique nature of cloud computing creates many of the economic, geographic, and jurisdictional questions related to this vital new set of services.
Individual users of cloud computing services reflect such questions in their attitudes toward cloud computing. Among users of cloud services, 51 percent find them easy and convenient, 41 percent appreciate the ability to access their data from any computer, and 39 percent appreciate the ease of sharing information (Horrigan, 2008). However, users also have concerns about cloud providers’ use of their information: 90 percent would be very concerned if their data were sold to other companies, 80 percent if their data were used in marketing campaigns, 68 percent if their data were used to display ads related to their files, 63 percent if their data were kept after they tried to delete it, and 49 percent if their data were given to law enforcement agencies (Horrigan, 2008). Some of these attitudes indicate a clear lack of understanding of the nature of cloud services. Google’s extremely popular and widely used Gmail, for example, targets ads directly based on the content of a user’s e–mail, yet 68 percent of users are very concerned by such activities. Further, a separate study found that 59 percent of users remained concerned about ads targeted on user content even after being told about Gmail’s practice (Harris Interactive, 2008). And while neither of these studies surveyed organizational cloud users, many of these same issues are likely cause for concern to them as well.
A reasonable question underlying all of these issues is: Where is “the cloud” located? As an application, a cloud service can be reached anywhere one has access to a computer. The cloud itself, however, is comprised of networked computers, servers, and related infrastructure, all of which must physically be somewhere. One of the assumptions of cloud computing is that location does not matter; that is both correct and incorrect. From a cloud user’s point of view, location is irrelevant in many circumstances; from a cloud provider’s point of view, it can be of critical importance in terms of geography, economics, and jurisdiction. Data centers must be somewhere, and they can be anywhere so long as the physical geography is right; national geography, in many cases, is not the first consideration.
4. Where is the cloud?
When asked this question, a technologist will surely chuckle and reply something akin to, “The location of the cloud is irrelevant. Anyone will be able to tap into the power of the cloud from anywhere.” This answer, while technically accurate, misses an important set of issues. The main thesis of this article is that cloud computing represents centralization of information and computing resources, which can be easily controlled by corporations and governments. We address the issue of control in the next section, and focus here on the literal answer to “where is the cloud?”
Cloud computing is, of course, a metaphor, whose origins lie in computer network diagrams. The cloud itself is an abstraction used to represent the Internet and all its complexity: when network administrators construct diagrams of computer networks, the image of a cloud references the Internet as a resource without needlessly illustrating its complexity. The “cloud” in cloud computing thus represents a complex and powerful resource that is obfuscated from its users. Given that the term came from mapping and diagramming and was used as an abstraction, it is highly ironic that the implementation of cloud computing leads us back to talking about locations.
The power of the cloud derives from the countless computer servers that comprise it. Google, for example, is reported to own over one million servers spread across the globe to power everything from search to Web–based applications (Baker, 2007). Due to the benefits gained from economies of scale, the majority of these servers are concentrated in a handful of large data centers, each hosting tens if not hundreds of thousands of machines. And by no means is Google alone in maintaining these vast “server farms”: Yahoo, Microsoft, IBM, Amazon — essentially any company that has a significant presence on the Internet — all own and operate large data centers. It is no exaggeration to claim that these data centers represent the largest concentration of information and computing resources that the world has ever seen. In the United States alone, there are an estimated 7,000 data centers (Economist, 2008a).
The placement of data centers, each of which represents a significant investment, is a major issue for Internet companies as well as for the organizations and individuals that rely on the services run through those centers. “Data centers are essential to nearly every industry and have become as vital to the functions of society as power stations are.” There are four primary considerations in deciding where a data center is constructed:
Suitable physical space in which to construct the warehouse–sized buildings
Proximity to high–capacity Internet connections
The abundance of affordable electricity and other energy resources
The laws, policies, and regulations of the jurisdiction
The first three are straightforward to understand and to a large extent governed by physical constraints based on environmental variables. Considerations of physical space can include:
basic physical geography (finding a suitably flat surface space, or in some cases, appropriate underground locations);
climate and weather (limiting the risks of natural disasters);
energy–saving natural features (leveraging perhaps water, geothermal, or wind to provide cooling and power);
safety (locating an area that is low in crime, far from likely terrorist targets, and easy to guard against corporate espionage).
Beyond physical considerations, proximity to high–capacity Internet connections is also important, since a data center’s value is measured in the number of users that can rapidly tap into its power. Thus, it is desirable to place data centers close to the “Internet backbone,” the main “trunks” of the network that carry most of its traffic, although some companies may elect to build their own network links. Finally, since data centers consume vast amounts of energy (for powering and cooling the servers), locations with cheap energy are highly attractive. The final consideration, that of the laws, policies, and regulations of the jurisdiction, will be discussed in detail below.
As a result, many data centers are being built in locations with plentiful land, favorable corporate tax rates, and affordable electricity, often from natural resources (Foley, 2008; Gilder, 2007). Rural Iowa, with its widespread wind power, and rural Oregon and Washington, with their ample hydroelectric power, exemplify these types of locations. Internationally, data centers can be found in abandoned mines, old missile bunkers, empty shopping malls, and underground facilities, and in places as far flung as Iceland and Siberia, to save on energy costs (Economist, 2008b). The key consideration for a data center location is often energy consumption.
During the dot–com boom of the 1990s, data centers consumed on average one to two megawatts each (Katz, 2009). Now, a larger data center individually consumes as much power as an aluminum smelter, with one Microsoft facility in Chicago needing three electrical substations to feed its constant need for 200 megawatts of power (Economist, 2008b). The aluminum smelter comparison is particularly significant: aluminum can only be smelted electrolytically, so the amount of electricity a smelter uses is massive in comparison to other similar types of factories.
Collectively, the data centers in the U.S. consume electricity on a level with a sizeable city — equal to the energy consumption of Las Vegas. Data centers consumed one percent of the world’s electricity in 2005, and the carbon footprint of data centers will surpass that of air travel and many other traditional industries before 2020 (Ahmed, 2008). By that time, networked computing may consume half of the world’s electricity (Gilder, 2007). Access to and the cost of electricity are therefore important factors in data center location, as the price per kilowatt–hour can differ widely from region to region (Armbrust, et al., 2009).
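Why regional electricity prices matter so much is easy to see with a back–of–the–envelope calculation. The loads and per–kilowatt–hour rates below are hypothetical, chosen only to illustrate the scale of the difference.

```python
def annual_energy_cost(load_mw, cents_per_kwh):
    """Annual electricity cost, in dollars, for a data center
    running at a constant load.  1 MW = 1,000 kW, and a year
    has 8,760 hours."""
    kwh_per_year = load_mw * 1000 * 8760
    return kwh_per_year * cents_per_kwh / 100

# A 10 MW facility at two hypothetical regional rates:
cheap = annual_energy_cost(10, 4)    # e.g., a hydro-rich rural region
costly = annual_energy_cost(10, 12)  # e.g., an expensive urban grid
print(cheap, costly, costly - cheap)  # → 3504000.0 10512000.0 7008000.0
```

Under these assumed rates, the cheaper region saves roughly seven million dollars per year for a single facility, which is why providers gravitate toward hydroelectric and wind–rich locations.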
As much as eight to nine percent of a data center’s energy is lost simply in delivering power to the servers themselves (Katz, 2009), which means that energy cost and energy efficiency are important aspects of data center management and, ultimately, of a cloud provider’s bottom line. Many cloud computing providers are focusing on harnessing technology to help reduce their data centers’ power usage effectiveness (PUE), an energy efficiency metric for data centers. A PUE value of 1.0 means that the data center is optimal, losing almost no energy to cooling systems or to the distribution of electricity; currently, most data centers average a PUE value of 2.0 or more (Katz, 2009). To reduce their PUE, data centers often look to green technology, or even to their surrounding location, harnessing the local environment as a mechanism for improved power distribution or cooling (Katz, 2009).
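PUE itself is simply a ratio: the total power drawn by the facility divided by the power that actually reaches the IT equipment. A quick sketch, with hypothetical figures:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power divided by
    power delivered to the IT equipment.  1.0 is the ideal (no
    overhead); 2.0 means half the facility's power feeds cooling
    and distribution losses rather than servers."""
    return total_facility_kw / it_equipment_kw

# A typical facility drawing 2,000 kW to run 1,000 kW of servers:
print(pue(2000, 1000))  # → 2.0
# A more efficient facility with only 300 kW of overhead:
print(pue(1300, 1000))  # → 1.3
```

The metric makes the economics concrete: moving from a PUE of 2.0 to 1.3 means the facility buys 35 percent less electricity for the same computing output.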
In reaction to the energy costs and environmental impacts of data centers, cloud providers are exploring a number of alternatives to the traditional geography of a data center. One more environmentally conscious approach to the provision of data through cloud computing is sometimes called “following the moon”: since energy costs can be lower at night and cooling equipment may use less energy, routing traffic to data centers where it is currently night could save energy, reducing both costs and environmental impacts (Perry, 2008).
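A “follow the moon” scheduler could be as simple as routing each request to whichever data center is currently in its nighttime window. This sketch is purely illustrative: the site names, UTC offsets, and the 10 p.m.–6 a.m. window are assumptions, not any provider’s actual policy.

```python
def is_night(utc_hour, utc_offset, night_start=22, night_end=6):
    """True if local time at a site (UTC hour plus its offset)
    falls inside the cheap nighttime window."""
    local_hour = (utc_hour + utc_offset) % 24
    return local_hour >= night_start or local_hour < night_end

def pick_site(utc_hour, sites):
    """Route to the first site where it is night; if it is
    daytime everywhere, fall back to the first site listed."""
    for name, utc_offset in sites:
        if is_night(utc_hour, utc_offset):
            return name
    return sites[0][0]

# Hypothetical sites: rural Oregon (UTC-8) and a European center (UTC+1).
sites = [("oregon", -8), ("europe", 1)]
print(pick_site(12, sites))  # noon UTC is 4 a.m. in Oregon → oregon
print(pick_site(22, sites))  # 10 p.m. UTC is 11 p.m. in Europe → europe
```

A production scheduler would also weigh latency, load, and regional prices, but the principle is the same: the workload chases cheap, cool nighttime hours around the globe.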
Another alternative is the portable or easily assembled data center. Sun has developed a modular data center, originally code–named “Project Blackbox,” which is a data center housed in a large shipping container (Sun Microsystems, 2007; Reimer, 2007), and Google and IBM are also exploring this concept (Jones, 2007). The key to this kind of data center is the use of specially designed, high–powered servers that dramatically reduce the number of machines and the power capacity needed. Such a data center can be flown to where it is needed, installed, and brought online in potentially less than a day.
The most novel alternative approach to data centers, however, has only recently been proposed: Google has filed a patent application for water–based data centers that would use energy generated by the ocean to power and cool the servers, with the ships housing these data centers positioned in international waters (Ahmed, 2008; Vance, 2008). This approach has already been nicknamed “the Google Navy.” Beyond the energy savings and reduced environmental impacts, Google believes that one of the big advantages of such an approach is that the ships could be repositioned to serve in the response to a natural disaster or other catastrophe (Vance, 2008). Though the idea is certainly intriguing, it seems to overlook some practical issues, such as how to defend the floating data centers from pirates drawn to pilfer the vast amounts of technology aboard. Ironically, the idea of a Google Navy was first predicted by the satirical newspaper The Onion (see http://www.theonion.com/content/radio_news/google_steps_in_to_help_u_s).
5. What rules govern the cloud?
The other major set of factors affecting the location of cloud computing data centers revolve around jurisdictional issues. The laws, policies, and regulations of a particular jurisdiction can have a significant impact on the cloud provider and the cloud user. Governments — through law, policy, and regulation — can either stifle or promote the development of cloud computing within a particular jurisdiction.
There are many law and policy problems raised by cloud computing that could become problematic for cloud providers and cloud users (Jaeger, et al., 2008). For users, these issues and expectations include:
Access — users will expect to be able to access and use the cloud where and when they wish without hindrance from the cloud provider or third parties.
Reliability — users will expect the cloud to be a reliable resource, especially if a cloud provider takes over the task of running “mission–critical” applications (Jaeger, et al., 2008; Armbrust, et al., 2009).
Security — users will expect that the cloud provider will prevent unauthorized access to both data and code, and that sensitive data will remain secure (Jaeger, et al., 2008).
Data confidentiality and privacy — users will expect that the cloud provider, other third parties, and governments will not monitor their activities, except when cloud providers selectively monitor usage for quality control purposes (Armbrust, et al., 2009).
Liability — users will expect clear delineation of liability if serious problems occur.
Intellectual property — users and third party content providers will expect that their intellectual property rights will be upheld (Jaeger, et al., 2008).
Ownership of data — users will expect to be able to regulate and control the information that is created and modified using those services (Jaeger, et al., 2008; Armbrust, et al., 2009).
Fungibility — users will expect that data and resources stored in one aspect of the cloud can be easily moved or transferred to another similar service with little or no effort, i.e., a high expectation of data portability.
Auditability — users, particularly corporate users, will expect that providers will comply with regulations, or at least provide them with the ability to conduct audits as regulations require (Armbrust, et al., 2009).
The failure to address these issues can cause resistance to a service among users. Lingering mistrust and fear of governmental snooping are already producing a backlash against certain Google services that sort vast amounts of user information (Avery, 2008).
While all of these issues are clearly of concern to cloud providers as well, providers will additionally evaluate a jurisdiction based on factors such as:
Legal jurisdiction — in cases involving the cloud provider, where will the cases be adjudicated? How favorable is that jurisdiction to the cloud provider’s interests?
Government intervention — how intrusive can the government be under the law or under accepted local practices?
Costs of doing business — how high is the financial burden of taxes, insurance, and regulations (safety, environmental, industrial, etc.)? Is there sufficient work force available? How favorable is the business climate?
Balancing these factors — along with those detailed in the previous section — will shape where a data center is located.
In individual jurisdictions, the approaches to cloud policy will vary greatly, depending on the priorities of the location. Some jurisdictions have recently created or expanded tax breaks to encourage the construction of data centers — one of the key reasons many data centers are being constructed in Iowa is the hefty tax breaks given to data centers (Foley, 2008). On a larger scale, entire nations may provide tax breaks to companies like IBM and Google as an incentive to construct data centers outside the United States. Jurisdictions, however, must weigh the advantages of hosting data centers against their sizeable environmental impacts.
Many policy questions will continue to be issues even after a data center is constructed. The largest challenges to existing providers will likely be tied to issues of user security and privacy. Since most data centers are located in the United States, many of these concerns are focused on the USA PATRIOT Act, the Homeland Security Act, and other intelligence–gathering instruments like National Security Letters that can be employed by the federal government to compel the release of information. Perhaps the most chilling aspect of these policies is that a cloud provider would have to comply with a subpoena for a user’s information without telling the user about the subpoena (Ma, 2007). Beyond national security, other policy instruments can impact the development of cloud computing, including the Health Insurance Portability and Accountability Act (HIPAA), the Sarbanes–Oxley Act, the Gramm–Leach–Bliley Act, the Stored Communications Act, various privacy laws, federal disclosure laws, and issues of e–discovery under the Federal Rules of Civil Procedure. These laws affect providers differently. Amazon, for example, is attempting to find ways to comply with Sarbanes–Oxley (see http://s3.amazonaws.com/aws_blog/AWS_Security_Whitepaper_2008_09.pdf), but HIPAA, in its current form, makes cloud computing services in the medical community very difficult (Joshi, 2008).
A number of attempts are already being made to avoid the reach of such laws. The Canadian government has a policy forbidding public–sector IT projects from using U.S.–based hosting services, precisely to avoid U.S. laws like the USA PATRIOT Act (Thompson, 2008). Further, some companies view neutral countries as ideal locations for data centers, in order to keep data out of the reach of the United States government (Economist, 2008a). For example, SWIFT, an international banking organization, is looking to build a data center in Switzerland for this very reason (Economist, 2008c). However, these approaches are of limited benefit in avoiding law enforcement entanglements. The laws of any nation where a data center is located will apply, and many nations do not have nearly the civil rights safeguards that the United States does (Thompson, 2008). Placing data centers in other countries may ultimately result in more legal complications for providers and users.
In spite of these issues of law and policy, few attempts have been made to address the thorny legal issues raised by cloud computing (Jaeger, et al., 2008). The failure to create policies that adequately balance the needs of cloud providers, cloud users, and jurisdictions could have sizeable consequences for where the data centers of the future are located. Simply put, without good policy, a jurisdiction — no matter what its other advantages may be — will lose cloud providers and their data centers to other jurisdictions. Of course, because a cloud consists of many data centers in many different jurisdictions, there may be very practical limits on jurisdiction shopping. Perhaps the most intriguing unanswered policy question about cloud computing is whether a cloud will legally be considered to reside in one designated location (and therefore beholden to the laws, policies, and regulations of one place) or in every location that hosts a data center that is part of the cloud.
Jurisdiction shopping and the provision of incentives to locate in certain jurisdictions raise several major concerns for users of cloud computing. For example, if certain jurisdictions are too eager for the economic benefits of data centers, they may give away too many legal protections of users and content, granting a great deal of control to the providers. Conversely, providers may be wooed by economic incentives from jurisdictions that have a negative legal environment in terms of data and user protection, giving the government a great deal of power over the provider, users, and content. Even when providers suggest unique responses to such jurisdictional concerns, there are still major potential problems.
Though it is being presented as a solution to issues of energy and environmental conservation, the Google Navy can also be seen as a response to these complex jurisdictional issues. At the most basic level, data centers on ships in international waters would not have to pay property taxes. More significantly, it also raises major questions about the legal jurisdiction of such seafaring data centers. Could a National Security Letter be enforced against a server in a boat in the middle of the Pacific Ocean? The Google Navy may indicate that existing jurisdictional issues are so unappealing to cloud providers that they are looking to the sea for relief.
In another case of fiction predicting the future, organizations are looking to regulation–free safe havens for their data centers. In Neal Stephenson’s classic science fiction novel Cryptonomicon, the fictional island of Kinakuta is used to traffic data beyond the reach of the world’s regulations (Economist, 2008c). While the quixotic attempts to turn an abandoned oil platform in the North Sea into a recognized nation (“Sealand”) not bound by the conventions of actual nations have been largely fruitless, the financial, social, and political muscle of an entity like Google may prove much more successful in establishing its own floating jurisdictions. In such a case, Google would hold the world’s largest repository of data beyond government reach, and beyond government protection for the owners and users of the data contained therein.
As such, individual and corporate user rights and protections, provider interests, and government duties must be carefully considered and balanced as cloud computing edges closer to ubiquity. While jurisdictional concerns will not likely lead to a mass discontinuation of use of cloud services, the way data centers are established under law in the near future will have long–term ramifications for users, providers, and governments, as well as for the control of the Internet itself.
6. Conclusion: Clouds without borders?
Cloud computing only works if the cloud is massive and contiguous — data must be able to flow efficiently and effectively from the user to the cloud, then perhaps within the cloud, and then back to the user. If geographic and political borders fracture the cloud into smaller groupings, the real advantage of the cloud dissipates into the ether. The ultimate goal, therefore, of examining the questions of the location of the cloud is to understand how best to address these issues. Focusing on the areas of policy and education offers some hope in addressing them.
One significant reason for the lack of focus on policy issues about cloud computing in the United States, and in many other nations, is the lack of a political infrastructure that reacts deftly to rapid technological change (Jaeger, et al., 2008). The technology is simply moving too fast — and creating previously unthinkable legal challenges — for the policy–making process to respond adequately.
It has been suggested that the best means to address these issues may be through international organizations, like the United Nations drafting a cloud computing rights statement (Thompson, 2008). However, while the World Summit for the Information Society (http://www.itu.int/wsis/index.html) has encouraged efforts to use cloud computing to promote collaboration and reduce deficits of scientific knowledge in certain regions of the world, intergovernmental collaboration on cloud computing standards seems not to have been explored yet. The idea of international cooperation to create international cloud computing standards seems particularly unlikely given the great chasm between European Union member nations and the United States in definitions of privacy and variations in types of privacy protection available (Sunosky, 2000).
Further, the new economy still lacks a political infrastructure in the United States — older industries have much more sway over Congress due to established connections and much better organized lobbying efforts (Graff, 2007). This situation is further complicated by the fact that many members of Congress do not relate to the new technologies as well as older industries or simply do not grasp the policy implications of major technological changes (Graff, 2007). It has been suggested that one way to advance government attention to cloud computing as a policy issue would be to change government regulations to embrace cloud computing in procurement, giving incentive to deal with the policy issues (Gross, 2008). Organizations like the Center for Democracy and Technology have argued that cloud computing simply demonstrates how our current “electronic” policy is outdated (Sanchez, 2008).
Others have suggested that the new Obama Administration should embrace the use of cloud computing in government, which will in turn help to spur the creation of proper cloud regulation (NPR, 2008). The federal government has already begun to adopt cloud computing. In early 2009, the federal government announced that the primary e–government portals — USA.gov and its Spanish–language companion site, GobiernoUSA.gov — would be supported by cloud computing (Kash, 2009).
As such, it is imperative for cloud providers and cloud users to promulgate initiatives to promote awareness of these issues among government officials and to bring these issues before the proper legislative bodies. The United States will likely remain the home of the majority of the world’s data centers only if it crafts sufficiently supportive policies. Smaller jurisdictions, which are often far more agile, may respond to these challenges more quickly, drawing more data centers to those jurisdictions. However, given the worldwide reach of the clouds and the globe–spanning expanse of data centers, the scope of these challenges is truly worldwide, and they must be addressed with a broad perspective and across jurisdictions. Industry experts have estimated the current market for cloud computing to be worth US$160 billion (Buyya, et al., 2008). The location and placement of cloud computing data centers therefore have clear economic ramifications.
Education of the general population of Internet users is also important. The ways governments respond to these issues will depend heavily on how well Internet users educate themselves and articulate how they believe these jurisdictional issues should be approached. As part of educational efforts, cloud providers would be well served to consider providing clearer explanations of these issues, particularly user rights and provider responsibilities. Currently, most cloud–related documentation from providers avoids any unsettled issues (Jaeger, et al., 2008). Cloud providers often provide information through contractual documents such as Terms of Service (TOS) or Service Level Agreements (SLAs). While these types of documents can be informative (Gomulkiewicz and Williamson, 1996), they often go unread by users because of their poor readability (Dathathri and Atangana, 2007; Grossklags and Good, 2008; Fox, 2005). Further, the potentially complex and obfuscated interplay between the various governing documents of a provider can make understanding the sum total of the regulations, responsibilities, and rights of user and provider challenging (Grimes, et al., 2008; Jaeger, et al., 2008). As one approach to these issues, the proprietors of Wesabe, a popular cloud application for financial information, have chosen to inform and educate users through a document they call their “data bill of rights.” This document consists of simple, easy–to–read bullet points summarizing the company’s security and privacy stances (see http://www.wesabe.com/page/security).
Including considerations and questions such as those raised in this paper in the education of future developers of cloud services will also help providers to contribute positively to finding solutions. At this time, only a few universities offer cloud computing courses. The University of Washington designed and implemented undergraduate computer science courses on cloud computing in 2007 (Kimball, et al., 2008). Ongoing efforts at the University of Maryland include a joint research and education initiative that pairs undergraduate students with Ph.D. students in working on cutting–edge research problems in text processing, such as statistical machine translation and analysis of e–mail archives (Lin, 2008). As these universities expand their efforts and as more universities add educational opportunities in cloud computing, broadening the scope beyond the technological issues to include the geographic, economic, and jurisdictional issues will prepare future cloud developers to deal with the cloud–scale problems that surround cloud computing.
Cloud computing offers previously unimaginable capacity in using technology to connect people across vast distances, analyze data at truly massive levels, and store and share information in ways that provide access virtually anywhere. Yet, there are also clearly sizeable attendant issues of location, cost, environmental impacts, and law that must be addressed for cloud computing to provide the benefits it is capable of. The manner chosen to address these issues must not be allowed to place disproportionate control over the capabilities of the Internet firmly in the possession of corporations or governments. Cloud providers, individual and corporate Internet users, and governments all have clear incentives to work together to resolve these issues so that the benefits of cloud computing — and of the Internet itself — can be maximized for all.
About the authors
Paul T. Jaeger (pjaeger [at] umd [dot] edu) is Assistant Professor and Director of the Center for Information Policy and Electronic Government in the College of Information Studies at the University of Maryland. Jimmy Lin (jimmylin [at] umd [dot] edu) is Associate Professor in the College of Information Studies at the University of Maryland. Justin M. Grimes (jgrimes2 [at] umd [dot] edu) is a doctoral student and Research Associate of the Center for Information Policy and Electronic Government in the College of Information Studies at the University of Maryland. Shannon N. Simmons (simmons5 [at] umd [dot] edu) is a doctoral student and Research Associate of the Center for Information Policy and Electronic Government in the College of Information Studies at the University of Maryland.
1. Baker, 2007, para. 5.
2. Picker, 2008, p. 6.
3. Economist, 2008b, para. 3.
M. Ahmed, 2008. “Google search finds seafaring solution,” TimesOnline (15 September), at http://technology.timesonline.co.uk/tol/news/tech_and_web/the_web/article4753389.ece, accessed 27 February 2009.
C. Anderson, 2008. “The end of theory: The data deluge makes the scientific method obsolete,” Wired (23 June), at http://www.wired.com/science/discoveries/magazine/16-07/pb_theory, accessed 27 February 2009.
M. Armbrust, A. Fox, R. Griffith, A.D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, 2009. “Above the clouds: A Berkeley view of cloud computing,” at http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.html, accessed 27 February 2009.
S. Avery, 2008. “Patriot Act haunts Google service,” Globe and Mail (24 March), at http://www.theglobeandmail.com/servlet/story/RTGAM.20080324.wrgoogle24/BNStory/Technology/home, accessed 27 February 2009.
S. Baker, 2007. “Google and the wisdom of the clouds,” Business Week (14 December), at http://www.msnbc.msn.com/id/22261846/, accessed 27 February 2009.
R. Buyya, C.S. Yeo, and S. Venugopal, 2008. “Market–oriented cloud computing: Vision, hype, and reality for delivering IT services as computing utilities,” Proceedings of the 10th IEEE International Conference on High Performance Computing and Communications, pp. 5–13.
G. Czajkowski, 2008. “Sorting 1PB with MapReduce” (21 November), at http://googleblog.blogspot.com/2008/11/sorting-1pb-with-mapreduce.html, accessed 27 February 2009.
A. Dathathri and J.L. Atangana, 2007. “Countering privacy–invasive software (PIS) by end user license agreement analysis,” Master Thesis, Computer Science, Thesis no: MCS–2007:20, Department of Interaction and System Design, School of Engineering, Blekinge Institute of Technology (Ronneby, Sweden).
K.J. Delaney and V. Vara, 2007. “Google plans services to store users’ data,” Wall Street Journal (27 November), at http://online.wsj.com/article/SB119612660573504716.html?mod=hps_us_whats_news, accessed 27 February 2009.
Economist, 2008a. “Where the cloud meets the ground,” Economist (23 October), at http://www.economist.com/, accessed 27 February 2009.
Economist, 2008b. “Down on the server farm,” Economist (22 May), at http://www.economist.com/, accessed 27 February 2009.
Economist, 2008c. “Computers without borders,” Economist (23 October), at http://www.economist.com/, accessed 27 February 2009.
J. Foley, 2008. “Why Google and Microsoft are building data centers in Iowa,” Information Week (4 August), at http://www.informationweek.com/blog/main/archives/2008/08/google_and_micr_1.html, accessed 27 February 2009.
S. Fox, 2005. Spyware: The threat of unwanted software programs is changing the way people use the Internet. Washington, D.C.: Pew Internet and the American Life Project, at http://www.pewinternet.org, accessed 27 February 2009.
G. Gilder, 2007. “The information factories,” Wired, volume 14, number 10, at http://www.wired.com/wired/archive/14.10/cloudware_pr.html, accessed 27 February 2009.
R. Gomulkiewicz and M. Williamson, 1996. “A brief defense of mass market software license agreements,” Rutgers Computer and Technology Law Journal, volume 22, pp. 335–367.
G.M. Graff, 2007. “Don’t know their Yahoo from their YouTube,” Washington Post (2 December), p. B01.
J.M. Grimes, P.T. Jaeger, and K.R. Fleischmann, 2008. “Obfuscatocracy: Contractual frameworks in the governance of virtual worlds,” First Monday, volume 13, number 9 (September), at http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2153/2029, accessed 27 February 2009.
G. Gross, 2008. “Cloud computing may draw government action,” InfoWorld (12 September), at http://www.infoworld.com/article/08/09/12/Cloud_computing_may_draw_government_action_1.html, accessed 27 February 2009.
J. Grossklags and N. Good, 2008. “Empirical studies on software notices to inform policy makers and usability designers,” Lecture Notes in Computer Science, volume 4886, pp. 341–355, at http://dx.doi.org/10.1007/978-3-540-77366-5_31, accessed 27 February 2009.
E. Hand, 2007. “Head in the clouds,” Nature, volume 449, number 7165 (25 October), p. 963.
Q. Hardy, 2008. “The death of hardware,” Forbes (11 February), at http://www.forbes.com/technology/forbes/2008/0211/036.html, accessed 27 February 2009.
J.B. Horrigan, 2008. Use of cloud computing applications and services. Washington D.C.: Pew Internet and the American Life Project, at http://www.pewinternet.org, accessed 27 February 2009.
P.T. Jaeger, J. Lin, and J. Grimes, 2008. “Cloud computing and information policy: Computing in a policy cloud?” Journal of Information Technology & Politics, volume 5, number 3, at http://www.jitp.net/m_archive.php?p=7, accessed 17 April 2009.
K.C. Jones, 2007. “Google wins patent for data center in a box; Trouble for Sun, Rackable, IBM?” Information Week (10 October), at http://www.informationweek.com/news/storage/showArticle.jhtml?articleID=202400961, accessed 27 February 2009.
S. Joshi, 2008. “HIPAA, HIPAA, hooray? Current challenges and initiatives in health informatics in the United States,” Medical Informatics Insights, volume 1, pp. 41–45.
W. Kash, 2009. “USA.gov, GobiernoUSA.gov move into the Internet cloud,” Government Computer News (23 February), at http://gcn.com/articles/2009/02/23/gsa-sites-to-move-to-the-cloud.aspx?s=gcndaily_240209, accessed 27 February 2009.
R.H. Katz, 2009. “Tech titans building boom,” IEEE Spectrum (February), at http://www.spectrum.ieee.org/feb09/7327, accessed 23 February 2009.
A. Kimball, S. Michels–Slettvet, and C. Bisciglia, 2008. “Cluster computing for Web–scale data processing,” Proceedings of the 39th ACM Technical Symposium on Computer Science Education (SIGCSE 2008), Portland, Ore., pp. 116–120.
J. Lin, 2008. “Exploring large–data issues in the curriculum: A case study with MapReduce,” Proceedings of the Third Workshop on Issues in Teaching Computational Linguistics at ACL 2008, Columbus, Ohio, pp. 54–61.
S. Lohr, 2007. “Google and IBM join in ‘cloud computing’ research,” New York Times (8 October), at http://www.nytimes.com/2007/10/08/technology/08cloud.html, accessed 27 February 2009.
W. Ma, 2007. “Google’s Gdrive (and its ad potential) raise privacy concerns,” Popular Mechanics (29 November), at http://www.popularmechanics.com/technology/industry/4234444.html, accessed 27 February 2009.
E. Naone, 2007. “Computer in the cloud,” Technology Review (18 September), at http://www.technologyreview.com/Infotech/19397/?a=f, accessed 27 February 2009.
NPR, 2008. “Will cloud computing work in the White House?” National Public Radio (21 December), at http://www.npr.org/templates/story/story.php?storyId=98578519, accessed 27 February 2009.
G. Perry, 2008. “On clouds, the sun and the moon” (21 June), at http://gigaom.com/2008/06/21/on-clouds-the-sun-and-the-moon/, accessed 27 February 2009.
R.C. Picker, 2008. “Competition and privacy in Web 2.0 and the cloud,” Chicago Working Paper Series, number 414, at http://www.law.uchicago.edu/Lawecon/index.html, accessed 27 February 2009.
J. Reimer, 2007. “The power of Sun in a big blackbox” (3 April), at http://arstechnica.com/articles/paedia/hardware/project-blackbox.ars, accessed 27 February 2009.
J. Sanchez, 2008. “CDT to Obama: Advent of ‘the cloud’ makes privacy laws dated,” Ars Technica (11 December), at http://arstechnica.com/old/content/2008/12/cdt-open-government-privacy-must-be-top-obama-priorities.ars, accessed 17 April 2009.
J.T. Sunosky, 2000. “Privacy online: A primer on the European Union’s Directive and the United States’ Safe Harbor privacy principles,” Currents: International Trade Law Journal, volume 9, pp. 80–88.
Sun Microsystems, 2007. “Sun Modular Datacenter” (12 December), at http://www.sun.com/products/sunmd/s20/, accessed 27 February 2009.
B. Thompson, 2008. “Storm warning for cloud computing,” BBC News (27 May), at http://news.bbc.co.uk/2/hi/technology/7421099.stm, accessed 27 February 2009.
A. Vance, 2008. “Google’s search goes out to sea,” New York Times (7 September), at http://bits.blogs.nytimes.com/2008/09/07/googles-search-goes-out-to-sea/?apage=1, accessed 27 February 2009.
L. Youseff, M. Butrico, and D. Da Silva, 2008. “Toward a unified ontology of cloud computing,” Proceedings of Grid Computing Environments Workshop at GCE 2008, Austin, Texas, pp. 1–10.
Paper received 12 March 2009; accepted 17 April 2009.
Copyright © 2009, First Monday.
Copyright © 2009, Paul T. Jaeger, Jimmy Lin, Justin M. Grimes, and Shannon N. Simmons.
Where is the cloud? Geography, economics, environment, and jurisdiction in cloud computing
by Paul T. Jaeger, Jimmy Lin, Justin M. Grimes, and Shannon N. Simmons
First Monday, Volume 14, Number 5 - 4 May 2009