Animating the archive
by Jeffrey Schnapp

Abstract

Derived from ancient Greek ἀρχεῖον (“government”) via the late Latin word archivum, the English word archive has come in the modern era to refer not just to public records but also to the entire corpus of material remains that the past has bequeathed to the present: artifacts, writings, books, works of art, personal documents, and the like. It also refers to the institutions that house and preserve such remains, be they museums, libraries, or archives proper. In all of these meanings, archive connotes a past that is dead, that has severed its ties with the present, that has entered the crypt of history. The essay explores the ways in which Web 2.0 offers new possibilities for institutions of memory: novel approaches to conservation and preservation based not upon limiting but upon multiplying access to the remains of the past; participatory models of content production and curatorship; mixed reality approaches to programming and informal education that expand traditional library and museum audiences; and enhanced means for vivifying and for promoting active modes of engagement with the past.


Contents

Archive You
Augmented virtualities
Memory palaces with porous walls



Derived from ancient Greek ἀρχεῖον (“government”) and the late Latin word archivum, the English derivative archive has come in the modern era to refer not just to public records but also to the entire corpus of material remains that the past, whether distant or close, has bequeathed to the present: artifacts, writings, books, works of art, personal documents, and the like. Its semantic field also encompasses the institutions that house and preserve such remains, be they museums, libraries, or archives proper. In all of these meanings, archive connotes a past that is dead, that has severed its ties with the present, and that has entered the crypt of history only to resurface under controlled conditions.

These brief reflections explore the ways in which the emerging media domains and practices loosely grouped under the umbrella Web 2.0 offer new challenges and possibilities for institutions of memory like libraries and museums: novel approaches to conservation and preservation based not upon restricting but multiplying access to the remains of the past; participatory models of content production, research, and curatorship; mixed reality approaches to programming and informal education that promise to alter and reshape traditional library and museum audiences; and, enhanced means for vivifying and for promoting active or experientially augmented modes of engagement with both past and present. The past was never really dead, of course; it always already belonged to the present. And Web 2.0 and toolkits that lie in the space between 2.0 and 3.0, including virtual worlds, Web3D, and the Semantic Web, provide some distinctive avenues for investing the present’s ownership of the past with the attributes of life. In short, they hold out the promise of animating the archive.

Embedded within the constellation of possibilities just evoked is a sort of Copernican revolution with respect to the roles performed by libraries and museums in the modern era. The latter institutions have long led a double existence. On the one hand, they have served as repositories entrusted with responsibilities of storage, preservation, and stewardship: tasks they have accomplished by placing historical objects at a remove — in the vault, in storage, under glass, at or beyond arm’s length. On the other hand, they have no less nobly served as sites of access and presentation, with the latter missions subordinated to the higher calling of conservation and with distinctions drawn regarding the degrees of access granted specialists vs. non–specialists, insiders vs. outsiders. Their institutional identities have long been built around the notion that physical presence is the norm: the belief that a unique aura emanates from the objects to which they grant access, whether originals or multiples; that they are defined by the physical edifice that supports their storage, retrieval, display, and educational activities; that work carried out and programming experienced on–site are primary. Their models of community have, likewise, been based upon the sociability of the reference desk, the reading room, the café, the gallery, the after–hours event, the bookstore. Their models of service typically privilege local and regional ties, whether in the form of memberships or use statistics, as primary indicators of institutional impact.

None of these roles or models of institutional identity formation will vanish with the wave of a digital wand branded 2.0 or 3.0. Indeed they may well come away reinforced (as we heard yesterday). But there can be little doubt that they are already undergoing substantial modification and that Web 2.0 is bringing in its train challenges to conventional ideas of ownership, restricted use, storage and display, content creation and curatorial control.

Web 1.0 represented a revolution in its own right. Yet it allowed museums to imagine Web sites as the functional double of the paper remains with which they have traditionally supported on–site visitor experiences: gallery guides; the pamphlets that lure visitors to shows; the catalogues or postcards that they take home. It allowed libraries to develop electronic descendants of paper–based card catalogues, while expanding their outreach. But as the Internet footprint of these institutions expanded to meet the upsurge in Web visitor numbers and the exponential growth in all sorts of rival (virtual) programming and repositories, it began to chip away at the solidity of library and museum walls.

Web 2.0 accelerates the process and radicalizes the consequences. It shifts the focus from data retrieval and what I would call “top–down billboarding” to bottom–up working or reworking of content, whether in the form of texts, still or moving images, or audio. Not only does it place every library and museum adjacent to one another on a public square as big or small — your choice — as the entire world, but it also marks the beginning of an inversion which some will welcome and others will decry: namely, of the relative priority granted to the physical over the virtual. Whereas the virtual was once subordinated to and cast in a supporting role with respect to the physical, Web 2.0 points towards new couplings in which an institution’s virtual footprint tends to predominate over its physical edifice and the community that it serves is potentially world–wide, overlapping only in small part with potential or actual physical visitor/user populations. In the Web 2.0 era, every public institution has already been transformed into a glocal enterprise, local and global at the same time [1]. Glocalization will only accelerate over coming decades.

It is easier to theorize such a future than to describe its workings. But concrete examples offer the best instrument for exploring some of its potential contours. So, I will be structuring these remarks around specifics that are, so to speak, close to home: experiments connected with the Stanford Humanities Lab, the hybrid new media/technology lab and arts/humanities research center that I founded in the year 2000 [2]. Neither comprehensive nor necessarily representative, this sampling will point towards five broad conclusions that I here anticipate so as to devote the second half of my remarks to a few case studies:

first, that due to such factors as the media heterogeneity and sheer quantitative abundance of archival materials from our own era, the pervasiveness of copying, transforming, and sharing devices, and expanding world–wide access to bandwidth that promotes the rapid and unrestricted circulation of data, many of the changes I have already alluded to will impose themselves on bricks–and–mortar libraries and museums, much as file sharing, remixing, and mash–up practices have imposed themselves on the music industry. By this I mean to say that, irrespective of how libraries and museums shape their own Web 2.0–3.0 policies and practices, they are likely to find themselves operating in an environment in which: a) open source resources and cultural repositories are increasingly the norm; b) audiences expect ever increasing degrees of off–site access as well as freedom to distribute, use, and modify materials within the shifting topography of the multiple online communities to which they belong; and, c) on–site visitors require and/or expect augmented modes of access to and experience of cultural objects, whether familiar or remote.

second, that the participatory media grouped under the umbrella of Web 2.0–3.0 stand not only for an expanded notion of community and service on the part of museums and libraries, but also for a research, development, and communicational landscape in which collaboration within and across institutions is likely to become increasingly central. This implies a shift away from top–down models of content ownership, authorship, and management towards flatter organizational structures: structures that knit together far more closely so–called “core” activities with outreach, communications, and education and involve parties that no longer share the same physical space or time zone. Under these altered operating conditions, for instance, the act of processing an archive may become identical with its publication; the staging of an exhibition, with the opening of a wiki space in which on– and off–site audiences interact directly with artists, scholars, and curators, not to mention with one another. Platform sharing, repository building, and programming across institutions (pioneered long ago by research libraries), whether on a regional or a global scale, represent the logical corollary, particularly as Web footprints expand in scale and cost. There is nothing “natural” about the coupling between a given “location” on the Web and a physical edifice or institutional brand name. Nor is there any reason why physical exhibitions separated by continents and oceans can’t become coterminous via a virtual world.

third, the emerging informational landscape associated with Web 2.0–3.0 will be defined at least as much by interminglings of the physical with the virtual and the virtual with the physical as by the substitution of physical experiences with virtual or remote ones. So–called “mixed” or “augmented” reality, in other words, will prove at least as significant as the purely virtual to the future of museums and libraries. And it demands modes of innovation that exploit medium–specific features of both the physical and the digital in ways that enrich experiences of physical artifacts, rather than distracting from or impoverishing them. It seems to me that the best way to defend the distinctive magic of experiencing original objects in real time and space — I am a believer in this magic — is to attack questions like the following ones: What can one do with a digital object that one will never be able to do with a physical counterpart and vice versa? Rotate a 3d scan of Michelangelo’s David and scale it up and down in order to view him from angles barely visible even to the sculptor? Peer into a canvas to see the invisible layerings that compose it and then flip it on its back to examine the stretcher and support? Survey three million books in a matter of seconds to embed a recondite reference in an essay you are writing? Be present in multiple locations and media at once instead of visiting them sequentially? These are more or less straightforward instances where, though perhaps sacrificing certain qualitative aspects of physical experience, the digital has the edge.

But no less interesting is a world in which a visitor looking at the actual glass–encased papyrus remains of Demosthenes’ Oration on the Crown can also maneuver a digital double and/or elect to see a non–intrusive pop–up overlay with a transcription and translation of the orator’s text, as well as live, ongoing debates among members of the global community of expert papyrologists regarding the meaning of every word, including words that are missing. Or a world in which a visitor experiences the landscape of contemporary Rome with a hand–held time machine that allows for the viewing of rigorously researched visualizations of ancient, Byzantine, medieval, and Renaissance Rome layered over the contemporary cityscape. Or a world in which photographic archives of the WPA can be annotated directly by individuals who were involved in the WPA’s unfolding, with every photograph pinned to the landscape of Google Earth, inserted into interactive timelines, and situated in a learning environment, all supported by a large–scale show of vintage photographs. Imagine 1,000 regional history projects structured by professional historians and coordinated by regional museums and research libraries, but in which the vast bulk of contents are assembled and piped in by members of the public, only to then be organized into constellations of touring micro–exhibitions such that the entire community becomes an extended classroom.

My point is perhaps an obvious one: that the memory palaces of the 21st century will have much more permeable walls than their 19th and 20th century predecessors. This is also to say that they will be much bigger, both from the standpoint of the physical territory that they must cover and from that of the corpora of information that they must harbor. Thanks to mirror worlds like Google Earth and their 3d virtual world counterparts, thanks to ubiquitous computing devices equipped with GPS technologies that can calculate locations within inches, and thanks to the ever increasing availability of wireless bandwidth, the future of knowledge, culture, and social and political practice will emphasize embeddings of the virtual within the real: actual physical landscapes curated just as if they were art galleries, and the collaborative, distributed building of annotations on and overlays of the physical world. This is a future that is already with us. The challenge for museums and libraries? To build their physical platforms and collections out into these and other domains of intersection between the virtual and the physical in ways that not only reinforce access and outreach but also establish new models of imagination, quality, and rigor.

fourth, whether by tradition or by inclination, libraries and museums have tended to be more product– than process–oriented when it comes to delivering programming and content to the public. The finality and finitude of print have therefore tended to suit them better than the volatility and infinite expansibility of digital data. Opening oneself up to the sorts of distributed or multidirectional content–generation and sharing models enabled by Web 2.0 implies a stronger emphasis upon process and a loosening of attitudes towards ownership, content control, and the boundary line between inside and outside. It implies a culture that is less risk averse, and it poses greater challenges as regards maintaining high standards of quality control. But I believe the payoffs could be considerable.

A process orientation means a number of different things: placing a research project in public view even as it is underway; transforming the development process of exhibitions into sites for market testing as well as learning by exposing the process to online commentary and critique; allowing for resources to be built collaboratively within and outside a single institution’s walls. The common thread here is that a turn towards process both multiplies opportunities for building bridges between the intra– and extramural realms, and expands the nature of programming. It contributes to the transformation of institutions of memory into not just producers and “deliverers” of finalized contents, but also into laboratories where, much as at the San Francisco Exploratorium, “stuff” is always happening that anybody can watch: stuff that invites observation and participation — thinking, commentary, conversation, construction, play.

fifth and finally, though perhaps self–evident, it bears repeating that, like any toolkit, the technologies grouped under the Web 2.0 umbrella and, more broadly, everything from wikis to virtual worlds to immersive caves to semantic Webs, in and of themselves, provide few if any answers as regards the present or future of institutions of memory. One can do more or less rigorous or sloppy things, things that replicate the roles performed by print supports or that fundamentally alter them, that expand knowledge and enrich experience or that contract and impoverish both. The burden of putting the toolkits readily available to us to interesting and innovative uses is ours alone: as historians, museum and library directors, curators, artists, and communicators.

I work with technologists a great deal these days and greatly admire their powers of mind. But I am routinely struck by their surprise at the sorts of demands that artists and scholars make of the widgets that they create. You want a screen to convey the tactile qualities of parchment or want digital pages to turn with the friction and sound of 16th century paper? You want to be able to record, archive, and study the body language of avatars interacting with physical persons in an art installation? You want to be able to preserve an entire virtual world so that it can be visited one hundred years from now? Yes, I want all of the above, because each is interesting from the distinctive vantage point of a cultural historian. As it happens, each also poses distinctive technological challenges that are interesting from the perspective of computer science. In short, tools are not something separate, whether ahead of or behind, above or below, culture and learning; they are themselves so socioculturally embedded that it is we who, each in our own domains of expertise, must devise appropriate uses for them, uses that are only infrequently those for which they were designed.

Apologies for this bit of abstract philosophizing. Let’s now come down from on high and settle into three examples. I would like to place these under the rubrics: Archive You, Augmented virtualities, and Memory palaces with porous walls.



Archive You

What can one do with 100,000 political posters gathered from all over the world over the course of a century? The traditional answer would be to put them into deep storage and then process them one by one as a precondition to access: at least a decade’s work, even for an army of expert cataloguers, given that you know next to nothing about at least 85,000 of them. But how to justify such an investment when, though historically significant, the objects themselves are neither unique nor exceptionally valuable?

How to deal with vast seas of audio recordings and film and video footage encompassing everything from industrial training films to historical events captured from multiple viewing positions to outtakes in every conceivable format? What about archives composed of computers and video games? What about repositories of site–specific artworks, documentation of Happenings, interactive avatars from the early history of artificial intelligence and digital art?

To organize such corpora into boxes and stash them away for hypothetical future use neither solves the conservation issues nor addresses two facts: a) much of the knowledge that renders these objects intelligible or interesting lies outside the usual communities of expertise (universities, research institutions, etc.); and, b) the sheer abundance of materials being produced and collected means that traditional processing and conservation approaches must necessarily be channeled towards very limited subsets of documents, cultural records, and data.

In my view, one promising approach is to build open architecture resources like the collaborative timeline that my former colleague Casey Alt devised for the Stanford Humanities Lab “How They Got Game” project back in 2000 [3]. The Project was developed around an actual physical archive, the Cabrinety collection — the world’s largest collection of video game software and hardware — and it was accompanied by two physical exhibitions: one at the Iris and B. Gerald Cantor Center for the Visual Arts on narrative in/and videogames; the other at the Yerba Buena Center for the Arts entitled Bang the Machine.

The collaborative timeline was to represent an archive in its own right. It sought to involve the community that participated in the genesis and afterlife of this collection of objects — technologists, animators, storyboarders, managers, game modders, even player groups and students — by means of a data–driven, Web–based interface that permitted collaborative mappings of historical events within a multiplicity of categories. The events in question could be generated, edited, documented (by means of uploaded files), and annotated by any member of the user community. The resulting timeline was fully searchable on all levels, with colored links indicating relationships among persons, events, and individual artifacts. As the surviving demo makes clear, the timeline in question was never completed. But the core idea remains sound and anticipates some of the open source tools being developed at the time of writing: to allow communities of practitioners and end–users to write and document their own histories within information architectures framed and maintained by expert researchers.
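For the technically minded reader, the core of such an architecture can be captured in a few dozen lines of code. What follows is a purely illustrative sketch in Java — every class, field, and method name here is my own invention, not Casey Alt’s actual implementation — of the kind of event–centric model the timeline implies: community–generated events carrying annotations, uploaded documentation, and typed, colored links to persons and artifacts, all of it searchable.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (hypothetical names throughout) of an
 * event-centric model for a collaborative timeline: any community
 * member may create, edit, document (via uploaded files), and
 * annotate an event; typed, colored links connect events to persons
 * and artifacts so that the whole graph remains searchable.
 */
public class TimelineEvent {
    private final String title;
    private final LocalDate date;
    private final String category;     // e.g., "hardware", "modding"
    private final String contributor;  // community member who created it
    private final List<String> annotations = new ArrayList<>();
    private final List<String> attachments = new ArrayList<>(); // uploaded files
    private final List<Link> links = new ArrayList<>();

    /** A typed, colored relationship to a person, event, or artifact. */
    public record Link(String targetId, String relation, String color) {}

    public TimelineEvent(String title, LocalDate date,
                         String category, String contributor) {
        this.title = title;
        this.date = date;
        this.category = category;
        this.contributor = contributor;
    }

    public void annotate(String note)  { annotations.add(note); }
    public void attach(String fileUrl) { attachments.add(fileUrl); }
    public void linkTo(Link link)      { links.add(link); }

    /** Simple full-text match, as a search layer might use it. */
    public boolean matches(String query) {
        String q = query.toLowerCase();
        return title.toLowerCase().contains(q)
            || category.toLowerCase().contains(q)
            || annotations.stream().anyMatch(a -> a.toLowerCase().contains(q));
    }
}
```

The design point to notice is the division of labor: expert researchers define the frame — the categories and the permissible link types — while the community supplies, documents, and annotates the events themselves.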

The same bidirectional approach was carried over to the development of a second tool (also documented on the SHL Web site), this one brought to completion: a collaborative genealogy of the Stanford biochemistry department that visualizes relationships among researchers by means of a Flash interface that relays XML requests to Java servlets, which communicate in turn with a free–standing MySQL database composed of PubMed data that has been automatically linked to faculty–authored abstracts and full–text versions of publications. Fully searchable, the tool provides the framework for a professional community to narrate and reflect upon its own history in kaleidoscopic form, allowing for a multitude of visualizations of vertical and horizontal interconnections.
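Again purely for illustration, here is a hypothetical sketch of the middle tier just described: a Java servlet (using the era–typical javax.servlet API) that fields a request from the Flash front end and answers with XML assembled from a MySQL database. The table, column, and element names are mine, invented for the example; they are not the tool’s actual schema.

```java
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Hypothetical sketch of the servlet layer: receives a request from
 * the Flash front end and answers with XML assembled from a MySQL
 * database of PubMed-derived records. Schema and names are invented.
 */
public class GenealogyServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws java.io.IOException {
        String researcher = req.getParameter("researcher");
        resp.setContentType("text/xml;charset=UTF-8");
        PrintWriter out = resp.getWriter();
        out.println("<?xml version=\"1.0\"?>");
        // Real code must escape/validate user input before echoing it into XML.
        out.println("<genealogy researcher=\"" + researcher + "\">");
        try (Connection db = DriverManager.getConnection(
                 "jdbc:mysql://localhost/biochem", "user", "password");
             PreparedStatement q = db.prepareStatement(
                 "SELECT advisee, year, pubmed_id FROM lineage WHERE advisor = ?")) {
            q.setString(1, researcher);
            try (ResultSet rs = q.executeQuery()) {
                while (rs.next()) {
                    out.printf("  <advisee name=\"%s\" year=\"%d\" pubmed=\"%s\"/>%n",
                               rs.getString("advisee"), rs.getInt("year"),
                               rs.getString("pubmed_id"));
                }
            }
        } catch (java.sql.SQLException e) {
            out.println("  <error>" + e.getMessage() + "</error>");
        }
        out.println("</genealogy>");
    }
}
```

What matters in this design is the strict separation of presentation (Flash), transport (XML), and storage (MySQL): because the data layer knows nothing about any particular view, the same records can support the multitude of visualizations just mentioned.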

In other words, the name of the game here is participatory archiving: archive yourself or, as I’m calling it, archive you.



Augmented virtualities

I noted in passing the special challenges that non–object–based forms of art pose with respect to preservation and presentation. It was precisely such challenges that captured my and my colleagues’ imaginations when the archive of the contemporary artist Lynn Hershman was acquired by the Stanford University Libraries in 2003. Members of the Lab had already been experimenting within Second Life as a development platform, and several of us were personally acquainted with Hershman, whose early work had consisted in some of the very first site–specific installations. (Hershman went on to become a prominent digital artist known for interactive avatars and for film projects such as Teknolust.) A partnership between the artist and the Lab was born that gave rise to a quite literal animated archive in the form of a virtual reconstruction of Hershman’s very first installation artwork: the Dante Hotel. This reconstruction was then embedded in the physical gallery spaces of the Montreal Museum of Fine Arts, which, in turn, were re–embedded in the animated archive, so as to allow for audience/avatar interactions in spaces that mirrored one another. The experiment, entitled Life Squared, is scheduled to reopen at SFMOMA in October 2008; several European runs are also being negotiated. It is documented in a five–minute film that is available for viewing on the SHL Web page as well as on YouTube.



Memory palaces with porous walls

By way of a conclusion, I would invite the reader to revisit Life Squared, so to speak, “live” on the SHL island in Second Life (Hotgates), where a virtual replica of the Montreal Museum of Fine Arts installation sits in the company of an open air theater and cinema, several virtual–only galleries of artist–built bots and documentary photographs, and an overall critical apparatus that situates the Dante Hotel within Hershman’s larger oeuvre. This same location has been the site of recurring as well as exceptional simultaneous live first/Second Life events, including an absolute first at the 2007 Sundance film festival: the premiere of the Hershman documentary film Strange Culture, with the Second Life audience sitting face–to–face with the festival audience and participating in a live post–screening discussion with the artist. Along with the NASA–sponsored International Spaceflight Museum, SHL’s island is one of four sites selected by the Library of Congress for its “Preserving Virtual Worlds” project.


Figure 1: Postcard of Second Life showing of Strange Culture.


But the Lab’s island is also a perpetual work in progress and test bed where nothing is permanent. So I would like to close by launching the reader beyond Life Squared towards the rough beginnings of a collaborative venture with the Canadian Centre for Architecture, the Wolfsonian–FIU, and the Bornholms Kunstmuseum in Denmark, currently limited to a mockup of the galleries of the CCA. The project in question has recently received seed funding from the Danish Ministry of Science, and consists in a mixed reality exhibition entitled SPEED limits, concerned with the pivotal role played by speed in modern life: from art to architecture to graphics and design to the material culture of the eras of industry and information. It is intended to mark the centenary of the foundation of the Italian Futurist movement, whose inaugural manifesto famously proclaimed “that the world’s magnificence has been enriched by a new beauty: the beauty of speed.”


Figure 2: Snapshot of the Library of Congress.


Whereas shows at MoMA, the Pompidou Center, and MART (the Museo di arte moderna e contemporanea di Trento e Rovereto) will be commemorative in spirit and tightly focused on Futurist production in the visual arts, SPEED limits will instead be critical and speculative. Broadly exploring a single Futurist thematic, it will weave physical exhibitions in distant locations into a single comprehensive virtual platform consisting in:

a) virtual interpretations of the three physical exhibitions that can be navigated, whether in bodily avatars or in vehicles, from any location in the world.

b) a virtual workshop furnished with “press kits” (including guidelines for the development of visitor–generated content [technical specs, genres, curatorial aims] and the rules governing a series of design competitions open to the public); renderings of all physical objects that are likely to be included in the show; and an in–world modeling toolkit that allows for the import and export of materials in all standard document, image, and media file formats.

c) ten virtual galleries that have no physical equivalent, five curated by artists, critics, and scholars; five reserved for visitor–generated content and visitor–generated curatorial concepts on the basis of the design competitions. All ten will be designed not according to architectural conventions, but as experiments with the very notion of the “virtual gallery” (i.e., a container unbounded by the temporality or spatiality of physical exhibition spaces in which the physical behavior of anything and everything is up to the curator).

Both the process and product are experimental, not to mention tentative, but both seem well suited to the recollection of a movement whose foundation stone was the destruction of all foundation stones. “We will destroy the museums, libraries, and academies of every kind,” F.T. Marinetti fulminated from the front page of the Parisian daily Le Figaro nearly a century ago [4].

I am certain that few today would embrace such a fiercely purgative credo. But I feel no less certain that the cause of animating museums, libraries, and academies will find more than a handful of advocates in many real or virtual assembly halls.


About the author

Jeffrey T. Schnapp has been the director of the Stanford Humanities Lab since its foundation in 2000. He occupies the Pierotti Chair in Italian Studies at Stanford University where he is professor of French & Italian, Comparative Literature, and German Studies. He has played a pioneering role in several areas of transdisciplinary research and led the development of a new wave of digital humanities work. His research interests extend from antiquity to the present, encompassing such domains as the material history of literature, the history of design and architecture, and the cultural history of engineering. He is the author or editor of eighteen books and over one hundred essays on authors such as Virgil, Dante, Hildegard of Bingen, Petrarch, and Machiavelli, and on topics such as late antique patchwork poetry, futurist and Dadaist visual poetics, the cultural history of coffee consumption, glass architecture, and the iconography of the pipe in modern art.
E–mail: schnapp [at] stanford [dot] edu



Acknowledgements

Study participants Bill Dutton and Nancy Schwartz provided helpful insights and comments.



Notes

1. The neologism glocal was apparently coined by Manfred Lange who, in his work for the May 1990 Global Change Exhibition, sought to capture the complex interplay between the local, the regional, and the world–wide.

2. SHL was founded in 2000 with startup monies provided by Stanford’s then president Gerhard Casper and was directed by me from 2000 to 2005. In order to broaden its reach, in 2005 my colleagues Michael Shanks and Henry Lowood joined the leadership group; as of 2008–2009, John Willinsky, the leader of the Public Knowledge Project, has also joined the leadership circle.

3. Between 2000 and 2005, the Project was led by Timothy Lenoir and Henry Lowood. Since 2006, the project has continued as “How They Got Game II,” has received a new round of external funding, and is led by Henry Lowood. For further information, consult the project’s Web site.

4. “Fondation et manifeste du Futurisme,” Le Figaro (20 February 1909), p. 1. My translation.


Editorial history

Paper received 21 July 2008.

Copyright © 2008, First Monday.

Copyright © 2008, Jeffrey Schnapp.

Animating the archive
by Jeffrey Schnapp
First Monday, Volume 13 Number 8 - 4 August 2008
