Much recent work on cyberinfrastructure or e-science, by the National Science Foundation and others, has emphasized its newness. In such accounts, cyberinfrastructure is about new ways of organizing the practice of science, drawing on new computational resources, enabling new collaborative and organizational forms, and ultimately new forms of discovery and learning. There is undoubtedly much to recommend this framing, and none of what follows is intended to refute it. But it is also possible to place cyberinfrastructure on a different timeline (that of the long now, explained below) and in a different category (that of general infrastructure) in which the emphasis is not on novelty but on continuity and consistency with the past. Doing so has certain advantages, not least of which is to open up a terrain of comparative learning which we believe can help guide policy and practice around the making of cyberinfrastructure. That, roughly, is the strategy of this paper.
The theoretical approaches, examples, and some of the arguments offered here draw heavily from the field of science and technology studies (STS), building on what are now several decades of work in the sociology, history, philosophy, anthropology, communication, and governance of science and technology. More immediately, they reflect the findings of an NSF-sponsored workshop, History and Theory of Infrastructure: Lessons for New Scientific Cyberinfrastructures, organized by the authors in Ann Arbor, Michigan during fall 2006. Many of the examples and arguments advanced here may be found in more developed form in the final workshop report, Understanding Infrastructure: Dynamics, Tensions, and Design. As the workshop and later report argued, while historical and comparative studies of infrastructure are unlikely to deliver anything as neat as a blueprint for action, they can, and indeed should, shape and guide thinking about present efforts at infrastructural development, in the sciences as elsewhere. This at least is what we have sought to wrest from historical and comparative study: not rules, but heuristics; not a map, but principles of navigation.
There are three main purposes of this paper: first, to argue for the ongoing relevance of history, even (and perhaps especially) in the context of seemingly revolutionary or history-breaking technologies; second, to point to some of the specific lessons that parallel efforts at infrastructure building, both past and present, can offer us; and third, to begin to distill from these some rough heuristics, or rules for the road, that present-day cyberinfrastructure developers and users might do well to keep in mind as they go about their work.
From the vantage point of the present, many of the infrastructures that support and govern modern lives, societies, and work practices will appear dull, flat, and still. The more settled the infrastructure, the truer this feels: we think about roads until we can drive easily on them, and then promptly forget (until prompted by accidents, construction, and traffic jams to think again). We drink from the municipal water supply until we can't, then think once again about water. Once in place, effective infrastructures appear as timeless, unthought, even natural features of contemporary life. This sort of naturalization and forgetting is central to the effectiveness and deep value of infrastructure, and is indeed one of its highest aspirations. But it also makes it challenging to recall what is at stake with infrastructure (which turns out to be quite a lot), or to chart the processes by which infrastructures grow and change. This is an academic problem for professional historians and social scientists; for would-be builders of infrastructure, it is arguably something more.
In this section, we review a growing body of evidence pointing to patterns or dynamics common to the development of many infrastructures over many times and places. From this we distill three general arguments. First, effective infrastructures are above all accomplishments of scale, growing as locally constructed, centrally controlled systems are linked or assembled into networks and internetworks governed by distributed control and coordination processes. Second, the extension of infrastructure typically follows a complex path of transfer or translation from one location or domain to another. To achieve this, multiple things may be required to move or change: technologies themselves (as indicated in the term technology transfer), but also social, cultural, organizational, legal, and institutional practices. Fitting these disparate elements together requires significant and rarely straightforward processes of adaptation and mutual adjustment. Third, the assembling and effective cohesion of working infrastructure is frequently accomplished by means of gateways, i.e. material or social technologies (e.g. standards and protocols) that permit the linking of heterogeneous systems into networks and internetworks. These features in turn lead to a number of common features and patterns in the history of infrastructural development, including dynamic effects such as path dependency, momentum, and reverse saliency. Each of these points is explained and discussed in the paragraphs that follow.
One of the most careful and suggestive accounts of infrastructural development to date comes from historian Thomas Hughes's Networks of Power, an analysis of the early development of electrical power in the United States and Western Europe. Hughes's work, and that of the Large Technical Systems (LTS) school that followed, provides a compelling account of the way in which technical systems (as opposed to isolated technologies) are brought into being, stabilized, and extended over time. Key to this process are system-builders: individuals, teams, or in some cases institutions capable not only of producing groundbreaking inventions, but also of imagining and bringing into being the large ensembles of techniques, practices, institutions, and other technologies needed to support and sustain them. The range of this system-building work demands skill and care within multiple registers: technical, but also organizational, social, institutional, etc. Successful system-builders must therefore act as heterogeneous engineers, working together not only technologies and the material world, but also people, organizations, values, knowledge, and expectations. A canonical example here is Thomas Edison's role in the history of electricity. Other inventors had already hit upon light bulbs; what set Edison apart was his conception of a comprehensive lighting system, including generators, cables, and light bulbs, dedicated above all to the provision of an integrated system of electrical lighting. Parallel examples may be found in the early role of companies such as Univac and IBM in producing not just digital computers but an integrated data processing system, built around a suite of input, output, and storage devices, together with software, training, and a variety of customer services. (This history is reflected in IBM's recent embrace of services as the center of its business model.)
Once established locally, successful systems may undergo complex processes of transfer, adaptation, and growth as they are extended to other places, domains, and communities of use. This rarely if ever takes the form of a wholesale transplant or simple copying. Instead, systems go through subtle transformations as they extend to new legal, institutional, social, and cultural environments, producing variations in what Hughes calls technological style (roughly, the distinctive look and feel of the same technical system as it appears in differing local and national contexts). Moments of technology transfer are also marked by the appearance of new challenges and constituencies. Prominent among the former are challenges of scale, as systems designed and imagined within discrete limits are called upon to sustain activities of an undesigned-for size, scope, nature, and intensity. At the same time, transfer often brings into sharp relief conflicts and incompatibilities with neighboring or alternative systems, and may be the site of particularly intense battles over institutional and commercial standing, community norms and expectations, and the definition and scope of standards. The transfer process also sees the cast of stakeholders expand dramatically, including the rise of new user classes, both real and potential, who begin to play an important role in defining the future development or nondevelopment of the system.
These processes of system formation and transfer may eventually lead to what historians in the LTS tradition have identified as consolidation, marked by an eventual merger or rapprochement between systems that allows smooth, reliable, and relatively robust interoperation across the breadth of the technologies and social worlds in question. In rare cases, this is achieved through the outright victory of one system over another. More commonly, consolidation is achieved through the development of strategic intermediaries, or gateways: technologies, organizational solutions, and/or protocols for interconnection that allow for mobility, conversation, and traffic between otherwise incompatible systems. Examples of technical gateways may be found in the adaptors and converters that allow appliances designed for one part of the world to work with the voltages and plug sizes found in others. Standardization in its various guises (formal and informal, top-down and bottom-up) is perhaps the leading example of a gateway technology on the social/organizational side, and is a crucial site or moment in infrastructural development more generally. It is at this point of heterogeneous connection among systems that the eventual power, scope, and world-building quality of infrastructure begins to take shape.
Embedded in the rough path traced above are a number of specific dynamics of note. The first of these is the existence and significance of what Hughes has termed reverse salients: the particularly intractable challenges, limits, or sticking points on which broad-scale system development runs aground and stalls. These may be technical in nature (e.g. the apparent scarcity of wavelengths available for efficient over-the-air signal transmission; the lossiness of long-distance power transmission). But they may also be organizational, social, or legal (e.g. challenges in assigning credit within vastly distributed scientific enterprises; the innovation-constraining effects of patent thickets (see Clarkson, this issue)). Reverse salients shape infrastructural dynamics in at least two important ways. First, they may help explain and predict alterations in pace, between periods of slow or incremental change (where unresolved reverse salients obstruct broad-scale development) and periods of rapid and multifaceted development as the friction or stickiness of a particular reverse salient is released. Second, reverse salients may act as key deflection points in particular infrastructural histories, pointing systems in significantly different directions according to the manner of their resolution.
The cumulative nature of infrastructural development, together with the number and depth of its ties to the technical and social worlds around it (think here of the number and range of connections needed for an operating system to fit the hardware profiles, applications, institutional structures, and user needs and competencies around it), means that once set in place or in motion, infrastructures take on distinctive inertial qualities. Historians have periodically referred to this under the language of momentum, trajectories, or path dependencies, pointing to the fact that, once established, systems tend to continue in particular directions, making reversals or wholesale leaps to alternative approaches costly, difficult, and in some cases impossible. Because of this, early technical choices (including some relatively casual or arbitrary ones) have a tendency to get reinforced as subsequent system elements are built around or on top of them. An oft-cited example of this is the case of the QWERTY keyboard, putatively inferior to other keyboard layouts and typing systems (e.g. Dvorak), but held in place for more than a century (and through multiple generations of keyboarding machines, including modern computers) through a variety of externalities and network effects. A broadly parallel story can be told of the way in which early decisions by computer programmers around the efficient coding of dates led to what would eventually be the massively inefficient Y2K problem. In this way, system elements (and later infrastructures) may become locked in by the gathering weight of the system itself; once grooved, infrastructures become hard to shift or displace.
Finally, it should be noted that many infrastructures (including most of those in the cyberinfrastructure and broader IT realm) are themselves deeply embedded within and dependent on other infrastructures. A classic example here is the ultimate dependence of IT systems on a reliable electrical grid. This connection, often (though in our opinion incorrectly) taken for granted in a North American context, is experienced as an acute challenge by, for example, would-be IT system-builders in large parts of Africa. In this regard, infrastructures frequently exhibit a layered quality, with second-order systems or virtual infrastructures built on top of prior or established infrastructures. Key examples here would include the way in which the World Wide Web sits on top of the Internet; the continued dependence of cellular telephony on significant aspects of the landline grid; and the intertwined histories of rail and telegraph networks from the mid-nineteenth century on. Such back-and-forth connections between adjacent and/or supportive systems may function as an additional source and shaper of infrastructural dynamics.
In addition to the patterns and dynamics noted above, infrastructures of all types have encountered and provoked some deeply felt tensions. In contrast to the placid appearance of settled infrastructures, infrastructures in their moments of formation can be sites of intense conflict, through which the identity and status of relevant stakeholders, the distribution of benefits and losses, and the general rules of the game are all being worked out simultaneously. In such periods, infrastructures appear as distinctly agonistic phenomena, imagined, produced, challenged and refined in an uneven and deeply conflictual field.
To begin, across virtually every type and class of emergent infrastructure we can identify provisional winners and losers: those whose positions, programs, work experiences, or general qualities of life are enhanced (or conversely, challenged and undermined) by the developing infrastructure. Clear examples can be found in the nineteenth-century towns through which rail lines did and didn't pass, the former rising to prominence in the reorganized economic geography of the American West, the latter fading to shadowy reminders of past importance. Or again, the variable experiences of twentieth-century factory automation (and later, computerization) strategies, through which managerial and technical groups gain new control over the production process, while certain classes of trade and unskilled workers see their workplace power and employment prospects shrink. These and other examples remind us that emergent infrastructures will often have important distributional consequences, reorganizing resource flows and opportunities for action across scales ranging from the local workplace to the global economy. Short-term experiences and long-term expectations of gain and loss will shape the incentive structures of individuals and institutions tasked with responding to infrastructural change. This in turn will shape the social and institutional climates in which infrastructures struggle to emerge: broadly receptive, with allies adding support and innovation at every turn? Or openly or covertly hostile, with stakeholders and prospective user groups dragging their heels, actively opposing, advancing counter-projects, or simply refusing to play along?
The uneven distributional consequences of infrastructural change are matched, not surprisingly, by discrepancies in the fundamental experience and vision of infrastructure. In the study and practice of infrastructural development to date, there has been an unfortunate tendency to emphasize what may be the excessively neatened and orderly views of system-builders, often to the exclusion of other, more partial, perspectives. An example can be found in the uneasy relation between design assumptions and user expectations, which has occasionally led to questioning, opening up, and/or user revolts around particular kinds of infrastructure. More often, the design-use disconnect is evidenced in neglect, as ambitious and well-intentioned systems languish on the shelves or desktops of users opting for alternative (perhaps local, perhaps kluged) solutions.
Additional tensions may be identified where changing infrastructures bump up against the constraints of political economy, in the form of investment models, property regimes, and competing policy objectives (the subject of several of the contributions to this issue). The pervasive and foundational character of many modern infrastructures (e.g. road, rail, water, and energy systems) has often been associated with a commons-like or quasi-public good status, leading them to be undertaken on the basis of public investment. More recently, especially but not exclusively in the United States, public investment models have come under some attack, and there is increasing pressure to constrain spending and/or partner with industry in ways argued to promote efficiency and innovation. Geographers Graham and Marvin have referred to this as the splintering of the modern infrastructural ideal. At the same time, new and highly distributed development models (such as the open source movement) have appeared, offering what some have argued are attractive alternatives to centralized and top-down development forms. While such models should be carefully considered and explored, it should be recalled that most now-mature infrastructures in the U.S. and elsewhere were built through collective investments oriented to a public good logic. Sometimes this was achieved through strategic pairings of private ownership and regulatory oversight (for example, the system of regulated monopoly and the old AT&T, remarkably successful at extending the infrastructure of telephony across the U.S. in the early to mid-twentieth century). In other cases, large-scale infrastructure was funded, shaped, and driven directly by the state, often in response to demands of national security and/or economic competitiveness (for example, the Internet, dependent through its formative years on an almost exclusive diet of DARPA and later NSF money).
To the extent that targeted public monies remain an important spark and catalyst for infrastructural development, a key long-term challenge for American CI proponents will be to articulate a compelling and forward-thinking public investment rationale for cyberinfrastructure.
Developing infrastructures can face similar challenges and disconnects vis-à-vis existing institutional, legal, and property regimes. A classic case of the former is the way in which the problems faced in organizing a continental rail network challenged existing business and legal forms, and ultimately gave rise to the first rough template of the modern corporation. In a more contemporary vein, practices of data handling, sharing, and the extended collaborative forms pursued under the NSF cyberinfrastructure vision may pose new challenges to the regimes of intellectual property operative within science. Beyond tensions tied to the internal cultures and career structures of science (as explored below), sorting out formal questions of ownership in vastly distributed projects may be an acute source of tension, particularly in fields where the commercialization of research results is commonplace. Who (if anyone) is to own the results of deeply collaborative work, and by what mechanisms can or should downstream revenues from such work be distributed? How far should property in raw data extend, when the reworking of community repositories leads to new results? Such concerns are likely to multiply with the advent of the increasingly networked and collaborative forms of research called for under the NSF cyberinfrastructure vision.
An additional class of tensions may be found in competing policy interests at the national/transnational junction. As the above discussion suggests, the nation-state has historically been the single most important container for the development of infrastructure: its most common geographic scale, its principal financier, and in almost all cases the ultimate source of its governance. At the same time, a good deal of the power of infrastructure lies in its ability to connect above or beyond the level of the state (note that many of the gateways described above are designed to bridge the gulf between nationally defined systems). This sets up a potential conflict, frequently realized, between the objectives of national advantage and those of transnational connection. In some cases, discrepancies between national infrastructures are more or less accidents of history, the product of parallel but disconnected development rather than any particular conscious or strategic intent. In others, the disconnect or decoupling is consciously and strategically pursued, often for reasons of national security (e.g. the varying rail gauges of Europe, designed in part to thwart the advance of potential enemy armies) or economic advantage (e.g. the enduring division between North American (NTSC), pan-European (PAL), and French (SECAM) color television standards). A broadly parallel set of tensions can be seen in the relationship between national science policy objectives and the transnational pull of science. Put simply, where broad-scale national policy interests (in economic competitiveness, security, global scientific leadership, etc.) stop at the borders of the nation-state, the practice of science spills into the world at large, connecting researchers and communities from multiple institutional and political locations.
To the extent that cyberinfrastructure enables an expanded suite of transnational research collaborations, such national/transnational tensions may get picked up and replayed at the project level.
In addition to such general tensions of infrastructure, we may identify certain tensions endemic to the world of scientific infrastructure. As a matter of daily practice, such tensions are very often played out at the level of data. Data represents the front line of cyberinfrastructure development: its main site of operation, its most tangible output, and in some regards the target of its highest ambitions. From this perspective, cyberinfrastructure is principally about data: how to get it, how to share it, how to store it, and how to leverage it into the major downstream products (knowledge, discoveries, learning, applications, etc.) we want our sciences to produce. At the same time, there is significant variation (both within and across disciplines) as to what counts as data. For some, data is first and foremost a question of things: samples, specimens, collections. For others, data is what comes out of a model, or perhaps the model itself. Data may be tactile, visual, textual, numeric, tabular, classificatory, or statistical. Data may be an intermediate outcome, a step on the road to higher-order products of science (publications, patents, etc.). Or data may be the product itself. Where a discipline or research project fits within this spectrum will have enormous consequences for its positioning vis-à-vis cyberinfrastructure. This specificity alone guarantees that cyberinfrastructure should not, and assuredly never will, be a singular or unified thing.
Additional tensions center on the problem of storage, preservation, and effective curation of data. In some sciences, the sheer volume of data created on an ongoing basis makes effective data retention and backup a challenge of the highest order. This raises important questions of form and granularity. How much data, and in what form, must one reasonably preserve and document? The answer to this is tied in turn to questions of short- and long-term audience and purpose. Is the data meant only to support the work in progress of a distinct team of researchers (what the NSF's Cyberinfrastructure Vision document defines as a research collection)? Is it intended for a larger, perhaps domain-level community, and for use over a moderately extended period (a resource collection)? Is its aim wider still, pointing to vast and multidisciplinary teams over long spans of time (a reference collection)? As this progression suggests, questions of preservation become steadily more complicated as prospects for reuse beyond the immediate context of data production are considered. Here the thorny problem of metadata emerges: how much data about data is needed to support future use and interpretation? Historical solutions to this problem have been distinctly human: beyond the thin accounting of journal articles and project reports, scientists come to nuanced assessments of the techniques and findings of their colleagues by correspondence (now, typically, email), by hallway or dinnertime conversations during site visits or academic conferences, by assessments of personal and institutional reputation, and through the circulation of graduate students, postdocs, and faculty colleagues. For years now, the NSF and other funders have exhorted their grantees to collect and preserve metadata, a prescription that has, for the same number of years, been routinely ignored or underperformed.
The metadata conundrum represents a classic mismatch of incentives: while of clear value to the larger community, metadata may offer little to nothing to those tasked with producing it, and may prove costly and time-intensive to boot. Until metadata and robust support for reuse achieve a more secure place within the credit system of science, this dynamic will prove difficult to reverse.
Problems of metadata and reuse are closely linked to tensions around data sharing, within and across disciplines, a further feature and goal of the cyberinfrastructure vision. An important class of tensions can be traced to the sheer data diversity cited above: how does one design tools with the range and ability to accommodate and translate between the distinctly different data needs of the various domain communities? Even where technical solutions can be devised, how can participants from one disciplinary community make sense of data produced under the very different procedures and understandings of another? As work in the field of science and technology studies (STS) has demonstrated, data are the product of working epistemologies that are very often particular to disciplinary, geographic, or institutional locations. Data oriented to the needs, practices, and cultures of the ocean sciences might not be easily or automatically translatable into the idiom and usages of atmospheric science (though as this example suggests, relations between fields converging on common problems can be built over time). Beyond such issues of recognition and fit, questions of trust loom large. Can I trust those I share my data with to make reasonable and appropriate use of it, and on a timeline which doesn't jeopardize my own interests around publication, credit, and priority? Or conversely: can I trust the data I'm getting, particularly as collaborative webs widen and my firsthand knowledge of the data and its producers recedes? In domain fields with long and robust histories of collaboration, norms of sharing may be well advanced, widespread, and highly structured. In others, the collaborative terrain may be more uneven, and norms and procedures for sharing relatively ill-defined. Where uncertainty exists, and where two data cultures collide, even the best-intentioned efforts to promote sharing via technical or organizational fixes are unlikely to succeed.
The point of the above review of infrastructural tensions, both of the general variety and those endemic to scientific practice, is not that scientific cyberinfrastructure is in itself an impossibility, or that the more specific goals of the NSF's cyberinfrastructure program are unattainable or misplaced. It is rather to point to distinctive classes or types of tension suggested by comparative and historical experience as a means of informing early decision-making around the imagination, planning, and implementation of cyberinfrastructure. It is also to identify tensions as one of the chief sources of infrastructural change, growth, and learning over time. From this perspective, tensions ought to be seen as both barriers to and resources for infrastructural development, and leveraged for their contributions to long-term qualities of infrastructural fit, equity, and sustainability.
In the discussions so far, we have addressed infrastructure as a thing (or class of things) defined by certain qualities and characteristics: scale, scope, durability (or resilience), accessibility, and a certain kind of reach over time, space, and a range of human and institutional activities. A useful summary definition in this vein is offered by Star and Ruhleder, who define infrastructure as being:
- Embedded in other structures, social arrangements, and technologies;
- Transparent (and largely invisible) once established, reappearing only at moments of upheaval or breakdown;
- Defined by its reach beyond particular spatial or temporal locations;
- Learned as a part of membership within particular professional, social, or cultural communities;
- Deeply linked with conventions of practice and other forms of routinized social action;
- Built on, shaped and constrained by its relationship to an already installed base;
- Fixed and changed in modular increments, through complex processes of negotiation and mutual adjustment with adjacent systems, structures, and practices.
We believe this approach to defining and understanding infrastructure is analytically powerful and travels well across a variety of scientific and nonscientific domains.
But if infrastructure can be usefully described as a thing, it can also, we believe, describe a sensibility: a way of thinking and acting in the world capable of moving between the separate registers of technical and social action. From this point of view, the world is largely (though not infinitely) substitutable. Technology can, under the right conditions, stand in for what might otherwise be accomplished through human work. Conversely, human norms and interactions can substitute for technical fixes, sometimes with extraordinary efficiency. A beautiful example of this variability is offered by Bruno Latour's story of the "sleeping policeman." Those desiring to control excessive speeds on the road (say, within a neighborhood or around a school) may construct an elaborate system of signage, speed limits, monitoring, and enforcement (e.g. police) backed by some form of sanction (fines, or in the extreme, jail time). Or they may opt to simply install a speed bump (in France, a "sleeping policeman"). These represent two significantly different paths to a common goal, one heavily social in nature (involving laws, courts, police, etc.), the other more purely technical (the speed bump). A very similar story can be told about the relation between legal and technical responses to reputed copyright infringement, as seen in the current Digital Rights Management (DRM) controversies. The point is not to favor one of these over another in a global way (e.g. that technical solutions are always better than social ones, or vice versa), but rather to recognize their interchangeability. Put differently, the boundaries between the social and the technological are fluid, and can often be shifted in either direction.
The particular quality of thought required to recognize and act on this we call the infrastructural imagination: envisioning the fulfillment of functions by linking heterogeneous systems (some new, others yet to be built), including human actors, institutions, and procedures, moving between the technical and the social as needed to achieve (and re-envision) the goal.
What does all this mean for those engaged in making and responding to policy around cyberinfrastructure? As a first point, many of the histories, dynamics, and approaches outlined above stand in partial (but we think productive) tension with aspirations to design. There is a common tendency to speak of "designing" or "building" cyberinfrastructure, as if infrastructures (cyber or otherwise) could be built strictly from plan, in a highly conscious, carefully controlled, and fully directed sort of way. A careful and historically-informed study of infrastructural dynamics and tensions weighs against this. If cyberinfrastructure is to follow anything like the histories of electricity, railroads, the Internet, etc., its eventual ends and forms will not be fully contained in its beginnings, but rather subject to change through the intricacies of scaling, transfer, consolidation, etc. Borrowing the language of complex systems, cyberinfrastructure is best thought of as an emergent phenomenon, taking on properties (some of them surprising) as the system develops. This shifting or emergent quality fits uneasily with an architectural or planning vision of the world (seen in language and metaphors around design, structure, blueprints, etc.). An alternative, and we think better, language would build on different concepts and metaphors, perhaps those of ecology (nurturing, growing, etc.), perhaps those of exploration. Our personal favorite comes from Michel Serres, who argues that moving from the sciences to the social sciences and humanities (but we might substitute moving between social organization and technical infrastructure) is like crossing the Northwest Passage: seasonal shifts in ice mean that the voyage can be made, but never in the same way twice. Under such conditions, what is needed are not rigid maps, but flexible and creative principles of navigation.
Against this, we may note a second finding from history: namely, that initial choices do matter, and can continue to reverberate long after the initial conditions which shaped them have passed. This is the lesson of "path dependency" or "momentum." For present policy around cyberinfrastructure, this points to the importance of early choices, and the need to get first steps right (or as right as we can within the limits of present knowledge). Once again, this rightness applies across the range of cyberinfrastructure activity, from technologies, to organizations, to institutional regimes, to norms and practices.
A third lesson can be found in the importance of gateways, as local systems and practices scale towards the level of infrastructure. This is arguably the current state of cyberinfrastructure: a set of diverse and in some instances highly innovative and robust local systems with as yet weak capacities for higher-order connection, coordination, and interoperability. Under such circumstances, gateways become centrally important: technologies, organizations, and people capable of bridging between disparate systems, practices, and worlds. Support for such work, which doesn't always line up easily or automatically with existing institutional and career structures, will be crucial.
A fourth and final lesson has to do with the inevitable and productive role of tensions in the infrastructural development process. The growth of infrastructure is a powerful and potentially transformative process, not least because of its redistributive nature: in making some things easier, infrastructures will frequently make others harder (or impossible); in advantaging the work or life worlds of some, they may alter, threaten, or degrade those of others. Depending on their form, scale, and the manner of their organization, such tensions can slow, alter, or substantially derail the development of infrastructure, for reasons both good and bad. At the same time, in the absence of reliable maps or blueprints, tensions can become a chief site and source of infrastructural change, innovation, and learning over time. For such learning to take place, reliable systems for surfacing and dealing with tensions need to be put in place. Such systems may once again represent an uneasy fit with existing institutional structures and incentives.
On 31 December 1999, a prototype of the "clock of the long now" was struck for the first time. Eventually to occupy a limestone cliff in eastern Nevada and set to chime every 10,000 years, the all-mechanical clock is the brainchild of supercomputer designer Danny Hillis and the signal project of the San Francisco-based Long Now Foundation. Its goal (like the Foundation's practice of adding a digit to the standard notation of dates, e.g. 02007) is to promote long-term thinking, responsibility, and a deeper sense of connectedness over time. A parallel but more modest sensibility has motivated much of our recent work and this paper in particular: namely, to relocate cyberinfrastructure in its own long now, and to distill from that history, and the history of infrastructure more generally, some rough guides to pragmatic and responsible action moving forward.
About the authors
Steven J. Jackson is an Assistant Professor and Coordinator of the Information Policy Specialization in the School of Information at the University of Michigan.
Email: sjackso [at] umich [dot] edu
Paul N. Edwards is an Associate Professor in the School of Information and past Director of the Science, Technology, and Society Program at the University of Michigan.
Geoffrey C. Bowker is Regis and Dianne McKenna Professor and Executive Director of the Center for Science, Technology, and Society at Santa Clara University.
Cory P. Knobel is a Ph.D. student in the School of Information at the University of Michigan.
1. Paul N. Edwards, Steven J. Jackson, Geoffrey C. Bowker, and Cory P. Knobel, Understanding Infrastructure: Dynamics, Tensions, and Design. (Ann Arbor: DeepBlue, 2007), http://hdl.handle.net/2027.42/49353.
This material is based upon work supported by the National Science Foundation under grant #0630263.
2. Thomas P. Hughes, Networks of Power: Electrification in Western Society, 1880-1930. (Baltimore, Md.: Johns Hopkins University Press, 1983).
3. We refer here to a loose-knit group of historians and sociologists from the 1980s to the present who have studied a variety of large technical systems, ranging from telephones and railroads to air traffic control networks. Prominent examples include Jane Summerton (editor), Changing Large Technical Systems. (Boulder, Colo.: Westview Press, 1994); Olivier Coutard, The Governance of Large Technical Systems. (New York: Routledge, 1999); Renate Mayntz and Thomas P. Hughes (editors), The Development of Large Technical Systems. (Boulder, Colo.: Westview Press, 1988); and many of the contributions to Wiebe Bijker and John Law (editors), Shaping Technology/Building Society: Studies in Sociotechnical Change. (Cambridge, Mass.: MIT Press, 1992).
4. The term is from John Law, "Technology and Heterogeneous Engineering: The Case of Portuguese Expansion," In: Wiebe Bijker, Thomas Hughes, and Trevor Pinch (editors), The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. (Cambridge, Mass.: MIT Press, 1987).
5. Tineke Egyedi, "Infrastructure Flexibility Created by Standardized Gateways: The Cases of XML and the ISO Container," Knowledge, Technology, and Policy, volume 14, number 3 (2001), pp. 41-54; Tineke Egyedi, "Standards and Sustainable Infrastructures: Matching Compatibility Strategies With System Flexibility Objectives," In: Sh. Bolin (editor), The Standards Edge: Unifier or Divider. (Menlo Park, Calif.: Bolin Communications, 2006); Geoffrey Bowker and Susan Leigh Star, Sorting Things Out: Classification and Its Consequences. (Cambridge, Mass.: MIT Press, 1999); Brian Kahin and Janet Abbate (editors), Standards Policy for Information Infrastructure. (Cambridge, Mass.: MIT Press, 1995).
6. Paul David, "Clio and the Economics of QWERTY," American Economic Review, volume 75 (1985), pp. 332-337.
7. Stephen Graham and Simon Marvin, Splintering Urbanism: Networked Infrastructures, Technological Mobilities and the Urban Condition. (New York: Routledge, 2001).
9. Susan Leigh Star and Karen Ruhleder, "Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces," Information Systems Research, volume 7, number 1 (1996), pp. 111-134; see also Geoffrey C. Bowker and Susan Leigh Star, Sorting Things Out: Classification and Its Consequences. (Cambridge, Mass.: MIT Press, 1999).
10. Bruno Latour, "On Technical Mediation: Philosophy, Sociology, Genealogy," Common Knowledge, volume 2 (1993), pp. 29-64.
11. See for example Tarleton Gillespie, Wired Shut: Copyright and the Realignment of Digital Culture. (Cambridge, Mass.: MIT Press, 2007).
12. Michel Serres, Le Passage du Nord-Ouest. (Paris: Éditions de Minuit, 1980).
Copyright ©2007, First Monday.
Copyright ©2007, Steven J. Jackson, Paul N. Edwards, Geoffrey C. Bowker, and Cory P. Knobel.
Understanding Infrastructure: History, Heuristics, and Cyberinfrastructure Policy by Steven J. Jackson, Paul N. Edwards, Geoffrey C. Bowker, and Cory P. Knobel
First Monday, volume 12, number 6 (June 2007).