[Cross-posted from my current class blog.]
Update 2/1: Video now online.
The title here plays on an old essay of Jerome McGann’s, “Shall These Bones Live?” In a more recent publication he delineates what he terms “the scholar’s art”: “Scholarship is a service vocation. Not only are Sappho and Shakespeare primary, irreducible concerns for the scholar, so is any least part of our cultural inheritance that might call for attention. And to the scholarly mind, every smallest datum of that inheritance has a right to make its call. When the call is heard, the scholar is obliged to answer it accurately, meticulously, candidly, thoroughly” (ix). Last week I spent two days in Berkeley, attending the New Media and Social Memory symposium and associated project meetings at the Berkeley Art Museum/Pacific Film Archive, sponsored by the NEA-funded Archiving the Avant-Garde project, part of the Variable Media Network. The central problem addressed by the symposium was the documentation, preservation, and recreation of digital, or, more precisely, variable works of art—works, in other words, not embodied exclusively in physical artifacts, but which are either digital and therefore possessed of a logical ontology separate and distinct from the work’s instantiation in any one particular hardware system, or digital and physical hybrids. What follows is something of a trip report, based on my notes, and also a little of my own thinking about the scholar’s art in a world where bits, not fossilized artifacts, are increasingly the basis of inherited cultural data.
The program was held in the theater at BAM/PFA. It’s a nice space, but power outlets were in conspicuously short supply—I was lucky enough to have gotten there early, and so I camped at the only one I could find, down near the front of the room. By the time things got underway a little after 10:00 I’d guess there were 60-70 people in the room. More arrived over the course of the day. The first keynote was Stewart Brand, introduced by WIRED’s Jane Metcalfe (who described the preservation problem as how to derive a collectible artifact from variable media art). Brand, of course, is one of the old-time West Coast digerati, speaking here in his capacity as guiding light of the Long Now Foundation, best known for projects such as the 10,000 Year Clock and the Rosetta Project. Brand traced the origin of his own interest in digital preservation to the Getty-funded Time and Bits meeting, held back in 1998, pointing out that ten years from now we’ll surely still be meeting to address these same questions (a throwaway line, but one that to me raises the interesting question not just of how much practical progress has been made, but of how stable or contiguous our underlying conception of the preservation task has actually been). Brand then proceeded to reread a paper entitled “Creating Creating,” originally delivered in 1992 at the Pasadena Cyberarts festival, a tactical choice intended to make the point that the more things change the more they stay the same. 
In the paper, he characterized new media art as a movement with “no tradition, no masters” (I’d take issue with that); he said that play looks like invention, and that the artist can catch a “free ride on novelty of medium.” The flip side of this is that the cutting edge is “all blade, no handle”; that is, artistic identity is constantly at risk of being subsumed and consumed by technology (the example here was Steve Benton and other artists at the MIT Media Lab, names mostly now forgotten, who pioneered white light holography, the ubiquitous technique found on credit cards and other mass-produced media). Brand then discussed a couple of examples of digital preservation, such as Spacewar! which one can play either on a working PDP-1 at the Computer History Museum (a unique artifact, and akin to traveling to a distant archive to view a manuscript or original artwork) or via emulation software at various sites around the Web (such as this one). His point was that the latter was far more practical and achievable. (This marked the earliest manifestation of what was to become a fairly consistent theme throughout the day, namely that preservation consists in saving software, code, and logical structures or behaviors, while hardware is expendable. I understand the basis for this argument, but I would want to insist that playing Spacewar! on the PDP-1 exposes the user to material affordances that have vanished in the Web-based emulations of the game. Which is not to say that the emulations aren’t useful and desirable, only that preservation is a spectrum of activities with no one solution ever being entirely adequate.) 
In the same vein, he mentioned Jaron Lanier’s failed attempts to reconstitute his Commodore-64 game “Moondust” using original hardware, and the work’s subsequent availability in emulation; he did not, however, mention a coda to that story which I’ve heard repeated several times, namely that the emulation initially got the timing of the game wrong, resulting in a much faster pace of play on modern processors and leading Lanier to exclaim “That’s not my game at all!” To the question of why bother to preserve new media art, Brand concluded “it’s hard, and so it’s worth trying; and if you fail, it’s only art.” I have to admit I like that.
Next up was a panel featuring Brand along with Kevin Kelly (WIRED) and Jon Ippolito (an artist and formerly a curator at the Guggenheim, now at the University of Maine). Ippolito kicked things off with remarks and slides covering a range of points about the preservation enterprise. Storage, according to Ippolito, is the canonical way of preserving things in the art world; that is, you put things in a climate-controlled box. Museums, built of stone, reify the assumptions at the heart of their cultural mission: safety, stability, solidity, and stolidity. But what if we looked at change not as an obstacle to survival, but as a means of survival? What if what we valued was not the material stuff of art, but its experience? Such a perspective may sound strange, but Ippolito pointed out that the oldest and most enduring works of art we “possess” (if that’s the word) are performances (ritual, dance), not artifacts. To that end he offered the example of Seeing Double, a 2004 exhibition at the Guggenheim, which paired “original” works of art with their simultaneous and parallel recreations. A second example is provided by the “renewal” of the Erl King, an early interactive video work (1982) still operable (just barely) on the original equipment (a Sony Z-80 computer, among other devices). A team of artists, archivists, and computer scientists built a successful emulation of the complex, multimedia installation, using radically different material but preserving the essence of the experience, as confirmed by the original artists who consulted on the emulation (full write-up here, PDF). (Jeff Rothenberg discussed the same project at greater length later in the day, see below.) 
This led Ippolito into a thumbnail survey of various digital preservation approaches: migration, which involves transferring and updating data from older formats and media to newer formats and media (manually or automatically); emulation, which involves using the abstraction inherent in multiple layers of computational processing to make a newer machine impersonate the formal behaviors of an older one; and reinterpretation, which Ippolito glossed as “adding something without taking away,” that is, reconfiguring or reperforming the work without abandoning the integrity of its original identity. This last approach is particularly compatible with the remix/participatory practices increasingly characteristic of “Web 2.0.” Emulation, however, is also an opportunity for remixing, as in the example of Linux Wars, an emulation of the classic Space Invaders game which he demonstrated; it features an MS Windows banner attempting to stave off (ultimately futilely of course) wave after wave of alien “invaders” iconified as the various Linux distributions. Ippolito concluded by showing The Pool (still in beta, not currently available to the public), a project of his which creates a relational space for displaying works of digital art that highlights the derivation and appropriation of components and behaviors among them—thereby underscoring variation over time as essential to the practice of preservation, and deflating Brand’s notion of new media art as all individual talent and no tradition.
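The layered abstraction that makes emulation possible can be illustrated with a toy sketch. The three-instruction “machine” below is entirely hypothetical—invented for illustration, not a model of any real platform—but it shows the core idea: preserve a description of the legacy machine’s formal behaviors in software, and any future host that can run the interpreter can run the old programs.

```python
# A toy illustration of emulation as a preservation strategy: we preserve a
# description of the legacy machine's formal behaviors, not its hardware.
# The instruction set here is invented purely for illustration.

def emulate(program, acc=0):
    """Interpret a list of (opcode, operand) pairs for an imaginary machine."""
    for op, arg in program:
        if op == "LOAD":    # overwrite the accumulator
            acc = arg
        elif op == "ADD":   # add to the accumulator
            acc += arg
        elif op == "MUL":   # multiply the accumulator
            acc *= arg
        else:
            raise ValueError(f"unknown opcode: {op}")
    return acc

# A "legacy program" runs identically on any host that can run the emulator.
legacy_program = [("LOAD", 2), ("ADD", 3), ("MUL", 10)]
print(emulate(legacy_program))  # 50
```

Even a sketch like this omits what the Moondust anecdote above highlights: timing, too, is a formal behavior, and a faithful emulation has to capture it.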
WIRED’s Kevin Kelly followed, beginning with some remarks about time capsules. According to William Jarvis (who is, according to Kelly, the world’s “foremost expert on time capsules,” good to know someone has that job), many time capsules are forgotten soon after they are buried (there apparently exists a 10 most wanted list of missing time capsules); and most of the rest are relatively uninteresting when opened. For Kelly, this raises the question: are we always going to be saving the wrong things? The key problem for preservation is that of attention scarcity. Anything to which we pay attention, suggested Kelly, stands a pretty good chance of getting preserved; but how do we know what to pay attention to now, in the present (consider this in light of McGann’s dictum about the comprehensive purview of the scholar’s art, see above). He briefly mentioned garbage and landfills as examples of “anaerobic preservation,” an intriguing idea (as a sidenote, garbage and landfills are a central theme of Don DeLillo’s millennial opus Underworld). At this point Henry Gladney spoke up from the audience, imploring archivists to put tools for preservation into the hands of the public so that we can all save the digital heirlooms in our data attics; later societies can mine the vast cultural store for that which interests them. Stewart Brand offered a distinction between continuous and discontinuous attention, describing a “valley” between an original act of creation and the interest of posterity; how do we cross that valley in the digital age, when the only technological vehicles for doing so are volatile and short-lived? Brand briefly speculated on the advent of hives of “robot enthusiasts” whose tireless algorithmic attention spans would conspire to preserve some particular digital object, an obscure game, say (or again, any one of McGann’s cultural data). 
Ippolito intervened to suggest that saving doesn’t mean “keeping,” it means “keeping alive”—again endorsing change over stasis. At this point the discussion began to wind down. Connections between bioinformatics and preservation were briefly raised: what if you encoded the Library of Congress as self-propagating E. coli, something theoretically possible given the platform independence of encoded representation? Kelly asked whether, once everything is saved (assuming the near- or mid-term advent of infinite digital storage capabilities), the artist’s job becomes one of creating works that disappear. He offered the example of William Gibson’s electronic poem “Agrippa” but neglected to mention the text’s online afterlife, available in hundreds of locales via a simple Google search (something I’ve written about at length in my book Mechanisms, forthcoming from MIT Press). Ippolito said that digital conditions give us a both/and option for preserving originals and their subsequent reinterpretations. Someone in the room from Pixar observed that there is no original in film-making today since each medium/platform/format has its own independent “master”; in practical terms, a digital film production is a series of reinterpretations by an acknowledged team of originists. Someone else from the audience pointed out the barriers of copyright, citing the instance of the original Lazy Sunday video, now gone from YouTube, though its derivatives remain available. Ippolito reminded us that open source has its cultural other in digital encryption, which guarantees trust and access.
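The E. coli thought experiment rests on nothing deeper than the platform independence of encoded representation: two bits map onto one of four nucleotides, so any byte stream can be transcribed into a DNA alphabet and back. The sketch below makes only that formal point—the particular base-to-bits mapping is arbitrary (my own invention), and the code says nothing, of course, about the wet-lab problems of actually synthesizing, propagating, and sequencing such a strand.

```python
# Transcribe arbitrary bytes into a four-letter nucleotide alphabet and back.
# The mapping (A=00, C=01, G=10, T=11) is an arbitrary choice for illustration.

BASES = "ACGT"

def to_dna(data: bytes) -> str:
    # Each byte yields four bases, most-significant bit pair first.
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def from_dna(strand: str) -> bytes:
    # Reassemble each group of four bases into one byte.
    nums = [BASES.index(c) for c in strand]
    return bytes(
        (nums[i] << 6) | (nums[i + 1] << 4) | (nums[i + 2] << 2) | nums[i + 3]
        for i in range(0, len(nums), 4)
    )

print(to_dna(b"A"))  # CAAC
```

Round-tripping `from_dna(to_dna(data))` returns the original bytes unchanged, which is the whole of the “theoretical possibility” being appealed to.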
And so to lunch (for me, tandoori at the curry place on the first floor of my nearby hotel).
First up after lunch was a lively panel with Alexander Rose and Kurt Bollaker, Executive Director and Digital Research Director respectively for the aforementioned Long Now Foundation. Bollaker led off by distinguishing between the preservation of bits, accessibility of bits, and comprehensibility of bits. I can’t remember if this had a more specific grounding or not, but he employed the image of cultural layers, pointing out that digital technology is a fast layer and preservation is a slow one, and that “it’s hard to build slow layers on top of fast ones.” Nature, he pointed out, preserves and perpetuates itself—perhaps we can draw our lessons from nature rather than culture. From here Bollaker explored some ideas about data preservation through mobility, distribution, and diversity, using the idea of “moveage” as opposed to storage. Rose interrupted to note that Kazaa and other p2p networks are already doing this, and that the killer app of p2p may well turn out to be archiving. Bollaker then introduced a plea for preservation as a social and communal activity, pointing out that there is a vast array of motivated people—albeit possessed of uneven skills and resources and varied intentions—who should be viewed as a resource for the preservation enterprise. Collaboration is key here, as is distribution of archives and their redundant, distributed curation. He offered some examples of successful community-based preservation: video games, especially MAME; Gnash for Flash; open languages like Processing; and finally, internet pornography. The key here, said Bollaker, is to teach and engage the public; digital art is less likely to die if it is frequently accessed by a diverse user base; DRM media, by contrast, will wither and die without access. Bollaker ended by showing formatexchange.org, a wiki site which collects conversion paths for file formats.
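A site that collects pairwise conversion paths is, in effect, publishing a graph, and finding a migration route between two formats becomes a shortest-path search over it. The sketch below illustrates the idea with breadth-first search; the format names and conversion edges are invented for illustration and are not drawn from formatexchange.org itself.

```python
# Finding a migration route through a graph of known format conversions.
# The formats and edges below are hypothetical examples, not real site data.

from collections import deque

CONVERSIONS = {                 # key -> formats a known tool can produce
    "wordstar": ["rtf"],
    "rtf": ["doc", "html"],
    "doc": ["odt", "pdf"],
    "html": ["pdf"],
}

def migration_path(src, dst):
    """Return the shortest chain of formats from src to dst, or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in CONVERSIONS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(migration_path("wordstar", "pdf"))  # ['wordstar', 'rtf', 'doc', 'pdf']
```

The search also makes the community-curation point concrete: every edge someone contributes to the wiki potentially shortens, or newly enables, a rescue path for somebody else’s files.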
Alexander Rose framed his remarks in terms of “making (art)ifacts for archaeologists.” Even the most basic assumptions of digital art-making are up for grabs; as any American traveling abroad knows, even 120v/60hz power is something we already routinely emulate (via adaptors). How then do we preserve for the long term, when the most basic affordances of objects may be obsolete because of the evolution of the human hand? (Such is the perspective of the Long Now.) He noted that language is itself always a potentially obsolescent platform. The discourse then became more concrete, as Rose discussed a durable process of micro-etching onto metal: direct a gallium ion beam into silicon, plate it in nickel, and the result lasts thousands of years (using this technique you can store 300,000 pages on a disk if you’re willing to read it with an electron microscope; the technology came out of Los Alamos). This is the technology used in the aforementioned Rosetta Project. The critical threshold here is that between legibility and illegibility: had Napoleon’s soldiers not known that what they had found was worthy of attention, the Rosetta Stone would have been lost forever. Thus the Rosetta disks begin with marks visible and legible to the naked eye which then spiral down to the microscopic. Rose next described a visit to the Mormons’ massive store of genealogical archives; the microfiche archives themselves are housed in a concrete bunker buried in the side of a mountain in Utah; the index to the archive, however, is in an Oracle database, which no one knows how to preserve. A similar example is provided by the government’s nuclear waste storage facilities: the lesson here is that you can’t solve a 10,000 year problem all at once, but you can plan beyond your own lifespan. In this sense, the Clock of the Long Now is as much about now as later; you weren’t thinking in terms of 10,000 years before you’d heard of it. 
From the audience, the Pixar representative said something like, there are 50 of us who know how to make the films, but only when 10 of us are in the room together. Rose highlighted the problem of the corporate destruction of memory, where corporate policy routinely dictates that email is deleted after 30 days for fear of liability. (A problem being explored by my Maryland colleague David Kirsch.) At the same time, however, corporate memory (and military memory) are perhaps the most urgent venues for digital preservation; consider Boeing’s documentation for its 747s, planes whose active service life is measured in decades, or a recent Popular Mechanics article pointing out that the Navy’s CAD diagrams are becoming increasingly difficult to access as the software is upgraded every 18 months or so. To me, this yet again underscores the fundamentally social rather than technologically deterministic nature of preservation—itself one of the key themes to emerge from the symposium—because the CAD software could be designed in such a way as to accommodate an upgrade path.
Next up was a panel from Rhizome’s Marisa Olsen and Michael Katchen, chief archivist for Franklin Furnace. This was a more hands-on session than the others, as Olsen demonstrated the folksonomic structures soon to be incorporated into Rhizome’s ArtBase, and Katchen demonstrated the Franklin Furnace database which manages documentation of ephemeral forms of performance and installation art. In the course of this, Olsen observed that we are and always have been a culture of repetition. Much discussion about taxonomy, hegemony, and controlled vocabularies, with Jeff Rothenberg (from the audience) invoking the Borges story about the catalog of things in the world whose root level was birds which do or do not belong to the emperor. From the audience came the observation that language is the ultimate controlled vocabulary, and what are the implications of all our metadata taxonomies being in English?
Jeff Rothenberg, a computer scientist from RAND (well known in the preservation community as a proponent of emulation), and Richard Rinehart, Director of Digital Media and Adjunct Curator at BAM/PFA (and organizer of the symposium) shared the next panel. Rothenberg delivered the most structured presentation of the day, offering a detailed account of the Erl King renewal project (mentioned earlier). Full details are in this paper (PDF), so I won’t recapitulate here. “Renewal,” however, is a word that was used deliberately, as an alternative to “preservation” or even “reinterpretation.” The objective was to renew the project without changing its behaviors; the team explicitly did not want to create a new version of the work. He offered a variation of the famous parable of the ship of Theseus, which asks how much of an original artifact’s material can be replaced before it has become a new artifact. Rothenberg was careful to note that the project team had started with a number of advantages, which included: the fact that the original hardware and software still ran; they had both source and object code in hand; they had excellent documentation of same; and the artist and original programmers were available for consultation. Rothenberg spoke at some length about the distinction between source and object code, as well as emulation and interpretation as preservation strategies. Both were options here, though the team eventually opted to employ a source code interpreter. Rothenberg noted that parts of the CP/M operating system had to be emulated (as opposed to interpreted), as did the installation’s various peripherals, though this wasn’t especially challenging. The video content had to be converted and migrated to a contemporary format, also not a significant challenge. The result was that the renewed Erl King’s behavior was virtually identical to the original. 
Conclusions: running original code preserves behavior (whether by emulation or source code interpretation); mixed analog/digital works can be preserved effectively, albeit at the cost of non-trivial effort.
Rinehart spoke next. Digital media, he said, are causing us to revisit what and how we remember as a civilization. The significance of digital art for the museum and preservation community is that it provides the most complicated possible case study of a more general problem. Where is the art? How do you define the boundaries of a work from its network and environment? Do we want to preserve the work or keep it alive? (These are not the same objectives.) Rinehart then proceeded to develop an extended analogy between new media art and music, held together by the observation that both are performances that can be documented by a scripted notation scheme, that is, a score. According to Rinehart, variability and performativity are as essential to new media art as its look and feel. Every experience of digital art is after all a live act (performance) of computation. A score, Rinehart said, is a specific kind of documentation about a work of art. It is intended to guide people in the future in recreating that work. (Some might first assume that the score for a digital work is its source code; but code for digital art is not a sufficient score; code is too platform-dependent.) Musical notation is a robust vehicle for a score precisely because it is not dependent on any particular platform. With this Rinehart introduced his work on MANS, the Media Art Notation System (documentation here (PDF); appendices here (PDF); paper here; see also the Electronic Literature Organization’s Born Again Bits report, which independently arrives at many of the same conclusions). MANS is an XML vocabulary derived from MPEG-21, which was created by industry concerns as a way of displaying complex media objects across heterogeneous devices (excellent paper on MPEG-21 here). MANS assumes, and indeed embraces, the variability inherent in digital media. 
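To make the idea of a score concrete, here is a minimal sketch of what a platform-independent description of a variable media work might look like when serialized as XML. The element names and structure below are invented for illustration only—they do not reproduce the actual MANS or MPEG-21 schemas—but they show the key move: describing behaviors and component roles rather than any particular hardware or file format.

```python
# A hypothetical "score" for a variable media work, built with the standard
# library's ElementTree. Element names are invented, not the real MANS schema.

import xml.etree.ElementTree as ET

work = ET.Element("work", title="Untitled (variable media)")
comp = ET.SubElement(work, "component", kind="video")
ET.SubElement(comp, "behavior").text = "loop continuously"   # what it must do
ET.SubElement(comp, "source", format="any lossless codec")   # not a platform

score = ET.tostring(work, encoding="unicode")
print(score)
```

Note that nothing in such a document names a processor, operating system, or codec version; like musical notation, it constrains the performance while leaving the instrument open.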
The notation of a digital work as a score assumes there is no such thing as a unique computer; instead the separation of logical from physical which makes the computer a universal machine is a strength to be exploited, not a liability to be offset; similarly, variation over time is not corruption or compromise but a distillation of the essence of the medium. The key move here is thus a kind of jujitsu, which takes everything that is difficult about saving variable media art—its formal, algorithmic, and computational ontology—and makes these self-same qualities the basis for reconceiving what it in fact means to “preserve” the work. Every digital art work of the future, Rinehart opined, will be a remix; research will consist in downloading art and taking it apart. He ended by offering Wikipedia and Second Life as models for a museum of the future, by which he meant their participatory, open source, user-contributed content.
Finally, Bruce Sterling. If you’ve never been in the audience for a Bruce Sterling speech, you’ve missed a signature digerati experience. Bruce, in addition to his popular work as a novelist and journalist, is the co-founder of the Dead Media Project. He began, for some five minutes on end—a long time when you’re sitting and listening to a speech—by reciting a litany of dead media technologies, starting with magic lanterns and moving image technology from the 19th century and ending with the home computing era. It was devastatingly effective, and there was a strange poetry in the recitation of the vanished technologies. The exemplary dead media story, explained Sterling, could be embodied by one Thaddeus Cahill, inventor of something called the Teleharmonium, a 1906(!) attempt to distribute electronic music via telephone. It is exemplary because it was daring, colossal, inventive, widely publicized, and then vanished utterly. There is nothing left of the Teleharmonium; no one even knows what it sounded like. (Sterling wondered if the ghost of Cahill is to be found in cellular ringtones.) He mentioned a lost poem by Sappho, recently discovered. He used the phrase “pace layer after pace layer of exquisite instabilities.” He pointed out that no archival medium for bits exists, and that bits themselves are real atomic things, small, tiny, vulnerable (something I’ve written about). The centerpiece of the talk was what he termed the milk products theory of dead media. Why do some media live and some die? Media, you see, are like milk. We need milk and we make milk. Milk is also a lot more heterogeneous than most people give it credit for: there are all kinds of milk products, supported and sustained by entire industrial and technological histories. 
But milk is never the determining factor of dairy products; media, similarly, is never its own determining factor; media die for the same reason we no longer have milk delivered by horse drawn carts; the magic lantern died because it no longer made sense to have a kerosene steam thing in your living room. (If you’re lost, this essentially boils down to a rejection of technological determinism.) He then invoked Lev Manovich’s work, which observes that film is now a hybrid medium; analog distinctions mean nothing to a teenager or young professional; digital media is not about convergence but convergement; engulfment; a creolization of media; a meta-medium. The Iliad and the Odyssey are preservation’s “victory conditions,” meaning that the fact that these works, oral in their original incarnations, have survived the ages is about the best we can hope to achieve. While all of this was going down, Sterling had Jon Ippolito running his (Bruce’s) digital art piece Embrace the Decay. Pretty neat.
So ended the symposium. There was a reception afterwards but I didn’t stay long, having hit the wall from East coast time. I’m told the panels and talks will be distributed via podcast; I’ll post the link here when it’s available. The next day I stuck around for project meetings with some of the principals above, during which I briefly talked about the Electronic Literature Organization’s work on PAD, and got an introduction to Forging the Future, the NEH-funded continuation (directed by Ippolito) of Archiving the Avant-Garde on which I’ll be doing some consulting.
Earlier I had asked whether the underlying assumptions of the preservation enterprise change, alongside the particulars of various technical challenges. If the speakers and audience assembled for New Media and Social Memory are any kind of indicator, I think I can discern some real shifts. Here are some words that would loom large in any tag cloud of the event: social, variation, change, notation, reinterpretation, participatory, renewal, remix. Museums and other forms of cultural authority were generally seen as suspect; emulation and originality, despite Rothenberg’s detailed presentation, were afforded less air time than variation and renewal. The embrace of variability is, in my opinion, a powerful move, one that harnesses precisely the characteristics of digital constructs which make them inimical to traditional preservation means. Variation and renewal are also compatible with some of the Web 2.0 cant that inevitably made its way into the proceedings, powerful ideas again, but I’m not so sure I’m ready to dispense with authority altogether. Culture is not a popularity contest, and the scholar’s art does not consist in vox populi. Hardware and material affordances are not always expendable; there is a difference between playing Spacewar! on a PDP-1 and playing it on my laptop’s emulator, and that difference is worth preserving along with the ability to execute the game’s source code. Likewise, there is a difference between Googling for “Agrippa” and experiencing the text as part of the original Agrippa artifact, a much more complex work. Yet it’s possible for the pendulum to swing too far in this direction as well. In textual editing, the trend has been very much toward documentary and photographic facsimile editing, the equivalent of emulation for printed matter. 
As Kari Kraus has argued at length, that emphasis has dulled our sensitivity to the very textual and linguistic affordances that emerge when we detach linguistic codes from their embodied representation in artifacts, precisely the potential for abstraction that notation systems have traditionally exploited. Microsoft, meanwhile, has apparently been hard at work on something they call immortal computing, which apparently consists in storing bits in physical artifacts such as tombstones and urns, “to be preserved and revealed to future generations, and maybe even to future civilizations.” Immortality, unlike renewal, is stone cold; shall these bits live on is (to me) a less interesting question than shall these bits live again.
Just got the dust jacket language for Mechanisms: New Media and the Forensic Imagination, my forthcoming book from MIT Press:
In Mechanisms, Matthew Kirschenbaum examines new media and electronic writing against the textual and technological primitives that govern writing, inscription, and textual transmission in all media: erasure, variability, repeatability, and survivability. Mechanisms is the first book in its field to devote significant attention to storage—the hard drive in particular—arguing that understanding the affordances of storage devices is essential to understanding new media. Drawing a distinction between “forensic materiality” and “formal materiality,” Kirschenbaum uses applied computer forensics techniques in his study of new media works. Just as the humanities discipline of textual studies examines books as physical objects and traces different variants of texts, computer forensics encourage us to perceive new media in terms of specific versions, platforms, systems, and devices. Kirschenbaum demonstrates these techniques in media-specific readings of three landmark works of new media and electronic literature, all from the formative era of personal computing: the interactive fiction game Mystery House, Michael Joyce’s Afternoon: A Story, and William Gibson’s electronic poem “Agrippa.”
Drawing on newly available archival resources for these works, Kirschenbaum uses a hex editor and disk image of Mystery House to conduct a “forensic walkthrough” to explore critical reading strategies linked to technical praxis; examines the multiple versions and revisions of Afternoon in order to address the diachronic dimension of electronic textuality; and documents the volatile publication and transmission history of “Agrippa” as an illustration of the social aspects of transmission and preservation.