Eloquent Images: Word and Image in the Age of New Media is now out from the MIT Press, edited by Mary E. Hocks and Michelle R. Kendricks. The volume as a whole looks stunning. My contribution is entitled “The Word as Image in an Age of Digital Reproduction” and basically argues that despite much rhetoric to the contrary, words and images remain as ontologically distinct now as in previous technological epochs due to the (virtual) realities of computation.
Speaking of eloquent images, here’s one:
This image eloquently illustrates what happens when you wash a Washington DC Metro fare card (Metro’s our area-wide bus and subway system). Unfortunately, this one had $17.10 left on it. The print is very faint, but it is there, dammit, especially if you hold the card to the light just so. Or let Photoshop have its way:
What do local folks think? Do I have any prayer of getting my money back if I mail it in to Metro Center? Or should I go down there in person to plead my case? Or give it up as a lost cause? Any advice appreciated.
Apropos of an earlier entry of mine on software studies, Salon is running a good piece on software archeology entitled “Prowling the Ruins of Ancient Software” (Slashdotted copy here). Two choice tidbits:
“It’s funny,” says Dave Thomas, a Dallas software consultant and co-author, with Andrew Hunt, of “The Pragmatic Programmer,” a 1999 book on software design methods. “Colleges spend a lot of time teaching people how to write code, but very few teach them how to read code. When you think about it, we programmers spend most of our time reading code, not writing code.”
[ . . . ]
“Maybe I’m horribly geeky,” says [Grady] Booch, “but I find tremendous beauty in looking at well-written software programs. There’s an elegance, a brilliance that we’re only now developing the critical means to describe. We have literary critics. We have art critics. We don’t have any software critics, yet. We need software critics, too.”
According to the article, the Computer History Museum will be sponsoring some sort of conference or meeting on software archeology this fall, but I can’t find any further information online. Something to watch for though.
See these amazing images of individual bits recorded on a hard disk, obtained using a technique known as Magnetic Force Microscopy (MFM). As I understand it, MFM is not a photographic process but rather a form of visualization: magnetic sensors at the tip of the measuring instrument are able to detect the tiny fluctuations in the magnetic field on the surface of the disk, and thereby generate the images you see here.
But what are we really looking at? Douglas R. Hofstadter (best known for Gödel, Escher, Bach) has the following to say in his Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought:
Today, for instance, ultrasound allows us to see a fetus moving about inside a mother’s womb in real time. Note that we feel no need to put quotes around the word “see” — no more than around the word “talk” in the sentence “My wife and I talk everyday on the phone.” When we make such casual statements we don’t for a moment consider the weirdness of the fact that our voices are speeding in perfect silence through metallic wires; the reconstruction of sounds is so flawless and faithful that we are able to entirely forget the fact that complex coding and decoding processes are taking place in between the speaking mouth and the listening ear. [. . .] If, fifty years ago, high-frequency sounds had been scattered off a fetus, there would have been no technology to convert the scattered waves into a vivid television image, and any conclusions derived from measurements on the scattered waves would have been considered abstruse mathematical inferences; today, however, simply because computer hardware can reconstruct the scatterer from the scattered waves in real time, we feel we are directly observing the fetus. Examples like this — and they are legion in our technological era — show why any boundary between “direct observation” and “inference” is a subjective matter. (488)
Hofstadter goes on to note that, “much of science consists in blurring this seemingly sharp distinction” (488). As much or more than any of the scores of better-known prophets and pundits of the Information Age, Hofstadter has succeeded in putting his finger on one of the central dynamics of our times: that “information,” which was once explicitly defined by computer scientists as a quality independent of, or indeed exempt from, meaning, has now, as a direct consequence of advances in computing technology, become meaningful in and of itself. By this I mean not that data is useful or intelligible without context and structure, but rather that the continuum involved in the creation of meaning through the process of interpreting data now encompasses degrees of abstraction and representational artifice which would have heretofore been considered “meaningful” only after the prior imposition of some second-order procedure or analysis. Or to put it another way, advanced computing technologies, visual and graphical for the most part, now routinely allow information’s artificial or referential dimension to function as a kind of permeable membrane through which a dynamic of inference and direct observation can operate. Moreover, as Hofstadter rightly notes, examples of the oscillating dynamic between inference and direct observation are “legion” at the present moment—indeed, they define the most pedestrian behaviors of the wired lifestyle. When I first wrote this, for example, I was tracking rain cells over central Virginia using online Doppler radar available from the Weather Channel’s site (I had a chronically leaky window seam and was wondering whether I’d need to get up to arrange the pots and pans I use to catch the run-off).
Precipitation displays on Doppler radar as colored blobs, the color varying from a light green to deep reds and pinks depending on intensity. Note that unlike Hofstadter’s example of the ultrasound, what I see on the Doppler map does not, in any mimetic sense, “look like” rain. Yet not only do I accept the Doppler images as an accurate depiction of the weather conditions in my area (and thus a reliable indication of whether or not I’ll need to mop up my windowsill), I also tend to regard what I am seeing as simply “rain,” and not as “a computer-generated visualization of the prevailing atmospheric conditions.” Though I understand of course that the Doppler display is precisely that, the truth is that as a practical matter my phenomenological apprehension of rain has expanded to include images produced by radar waves reflected from bands of precipitation within what is essentially the same referential horizon as the puddles that I know will eventually appear on my windowsill.
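Under the hood, that display is just a lookup from measured intensity to color. Here is a minimal sketch of the mapping; the dBZ thresholds below are rough approximations of a generic radar legend, not the Weather Channel’s actual scale:

```python
# A sketch of the lookup a radar display performs: mapping measured
# reflectivity (in dBZ) to a display color. These thresholds are rough
# approximations of a generic radar legend, not any broadcaster's
# actual scale.
def radar_color(dbz):
    if dbz < 20:
        return "light green"       # light rain
    elif dbz < 35:
        return "dark green"        # moderate rain
    elif dbz < 50:
        return "yellow to red"     # heavy rain
    else:
        return "deep red to pink"  # intense rain, possibly hail

print(radar_color(15))  # light green
print(radar_color(55))  # deep red to pink
```

The point of the sketch is simply that nothing mimetic happens anywhere in the chain: the “rain” I see is a table lookup performed on a measurement.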
In recent days I’ve heard from two other members of the UMCP faculty who are blogging: Peter Levine, from the Institute for Philosophy and Public Policy (I didn’t even know there was such a thing on campus) and Walter Hutchens, an Assistant Professor in the Smith School of Business who specializes in China’s markets. It’s good to see the seeds of a blogging community among the College Park faculty. If anyone else is out there, drop me a line.
The portion of Mechanisms I’m working on right now deals with magnetic storage media, and particularly hard drives. They’re fascinating machines: the platters inside the computer you’re using to read this are spinning at about 10,000 revolutions per minute, while the drive’s read/write heads float above them by a bare fraction of the width of a human hair. If, as sometimes happens, the two should touch, the iron oxide surface of the disk will be scorched by the contact, like the path gouged by a meteorite plowing across a desert expanse. It is striking to me that the portion of the computer used to “save” our data is the portion most self-evidently an inscription technology, with strong roots in the kinds of machinic genealogies established by Kittler and others.
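Some quick arithmetic conveys the scale involved. Assuming a platter spinning at 10,000 RPM with the head flying near the outer edge, say 40 mm from the spindle (both numbers are my own round figures, not a manufacturer’s spec), the surface speed beneath the head works out as follows:

```python
# Back-of-the-envelope sketch: linear speed of the platter surface
# beneath a read/write head. The 10,000 RPM figure is from the text;
# the 40 mm head radius is an assumed round number, not a spec.
import math

rpm = 10_000
radius_m = 0.040  # assumed head position, 40 mm from the spindle

revs_per_second = rpm / 60
speed_m_per_s = 2 * math.pi * radius_m * revs_per_second

print(f"{speed_m_per_s:.0f} m/s, or about {speed_m_per_s * 3.6:.0f} km/h")
# → 42 m/s, or about 151 km/h
```

Highway speeds, in other words, with a flying height measured in fractions of a hair: small wonder a head crash scorches the surface.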
Some may object that a focus on hard drives, the most overtly mechanical portion of the computer, is arbitrary, even tendentious. Hard drives, after all, are a relatively recent innovation—the personal computer era was well underway without them, though the technology has actually been around since the fifties. They are also, of course, by no means the only storage media in everyday use today, and there is increasing evidence that they will be surpassed, not only by solid state or laser optical devices, but also by more advanced techniques such as holography. Nonetheless, magnetic storage devices have been the preeminent storage media for personal computers since the mid-eighties, and also for countless Web servers, and as such are historically central to any narrative of computing and inscription.
I vividly remember my own first experience with a hard drive. I had grown up using an Apple IIe, and was accustomed to swapping 5.25-inch disks in and out of my system’s two external floppy drives whenever I wanted to use an application or save my data. The first time I saved a file to a hard drive (on an IBM at school) was a very different kind of experience: my data was somehow “in” the computer, and not simply recorded on external media. Though I couldn’t have put it in these terms at the time, it felt as though the von Neumann architecture had been overturned, or at least jostled: the computer was not just a calculating engine sandwiched in between input and output devices, but something more like an archival entity, with its own internal memory. The machine’s individual identity was suddenly coterminous with the data it “contained.” Of course from an architectural standpoint nothing had really changed, and my computer still conformed very much to the classic von Neumann model. The storage device had simply retreated within the case. But the psychological impact of saving data “inside” the machine itself, rather than to some external locale, cannot be overlooked, and is, I believe, in its own way, as significant as the advent of the GUI, the more commonly celebrated revolution.
Well, I’ve solved the DSL issues (too mundane to detail here), but the blog is going to be quiet for another couple of weeks. For one thing, we’re headed to Maine on Saturday for a week with my parents on Frye Island. No DSL (or wi-fi) there. And nope, I don’t eat lobster: but there will be wine on the beach and fresh fish on the grill. Mmm. Ahh.
There’s also another reason the blog’s been quiet of late. I’ve been having a flare-up of my RSI, and am saving keystrokes for the essentials (I’ve got almost a full chapter drafted on my book). At some point I’ll detail the ups and downs of my experience with RSI, but for now three short pieces of advice:
In my own case doctors have suggested (and I think they are right) that my RSI stems not from the keyboard per se, but from chronic postural misalignment that originates with a congenital lazy eye. Contrary to some dire predictions you may read, I don’t believe RSI means that you can never write or be productive again; but it does mean that your body has assimilated, at some fundamental physiological level, the lesson that the computer is a physical entity, a tool that you must learn to wield in harmony with the embodied rhythms of a non-virtual self. On Friday Kari and I are starting yoga.
Kari and I spent part of the day at the splendid and inspiring information technology exhibit at the Smithsonian National Museum of American History. Among its treasures: the actual wire Bell first used to speak to Watson, pieces of the original ENIAC, a snippet of the TAT-6 Trans-Atlantic telephone cable, Deep Blue, and upstairs, a Jacquard loom. We plan to make a return visit soon to see the hall of printing and graphic arts.
On the home front, some ongoing nasty DSL connectivity problems. Lite blogging fore and aft.
As advertised: the internet is shit.
It’s actually not that well written or argued, and somehow not as . . . bracing . . . as the author clearly wanted it to be. But still: I always brake for iconoclasm.
Most people know that the QWERTY keyboard layout is a holdover from the early days of the typewriter, when it was found to be the most effective arrangement for keeping the machine’s internal metal typebars from jamming. What most people don’t know is that the design was also influenced by manufacturer Remington’s desire to have every letter of the word “typewriter” appear in the top row of keys. Pretty neat, huh? I learned that from going back to Lisa Gitelman’s Scripts, Grooves, and Writing Machines (Stanford UP, 1999), her richly researched study of the phonograph and other late nineteenth century inscription technologies.
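The claim is easy enough to verify; a couple of lines confirm that every letter of “typewriter” does indeed sit in the top letter row:

```python
# Check that every letter of "typewriter" appears in the top letter
# row of a QWERTY keyboard.
top_row = set("qwertyuiop")
word = "typewriter"

print(set(word) <= top_row)  # True
```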
It’s an outstanding read, and should find its way into your summer beachbag.
Dave Ciccoricco’s “The Contour of a Contour,” new in ebr, is a serious meta-theoretical contribution to the literature on hypertext. While it is difficult to get a sense of Ciccoricco’s own relationship to this body of work—is he critic, historian, apologist, or (to borrow a term from Friedrich Kittler) literary scientist?—the essay is impressive in its (re)tracing of the tangled genealogy of the contour, the signature trope of 1990s hypertext theory. Near the end Ciccoricco writes, “[a]bove all, the Contour aspires (as does this essay) to be nothing more than a site of gathering.” In this sense the piece really is invaluable, for Ciccoricco has exhaustively teased the word’s synchronic and diachronic lexical accumulations out of very close readings of critical work by Michael Joyce, Mark Bernstein, Stuart Moulthrop, Terry Harpold, and others.
Contour, along with a host of other terms from this period (Moulthrop’s “informand” for example), reminds me a bit of the metaphors sprinkled across our computer desktops—I mean “desktops.” After all, hypertexts no more have “contours” than operating systems have “recycling bins.” Ciccoricco rightly draws attention to Mark Bernstein’s writing in “Patterns of Hypertext” and elsewhere which takes to task those critics and commentators who elide (read: don’t really understand) the minute computational particulars of the systems and software they profess to be deconstructing. And Ciccoricco, for his own part, administers a quick drubbing to those who have been critical of Eastgate’s role in the field but have yet to implement any alternative systems themselves. Even Bernstein, however, does not discuss what a link is actually doing computationally, and instead gives us a colorful semantic palette with which to name patterns, including cycle, counterpoint, tangle, sieve, and montage.
Question: why not discuss these phenomena as the computational events (the data and control structures) they actually are, utilizing the existing language of computer science, instead of adopting a wallpaper language which more than anything else resembles literary modernism?
Rote answer: because the language of computer science is as socially and historically specific as the language of literary modernism. “Things as they are / Are changed upon the blue guitar.”
Is it just that simple? (Note: Mark Bernstein reads this blog and rarely agrees with what I have to say. Look for him in the Comments section.)
Nick’s remarks on micropayments and public access (in the context of Scott McCloud’s release of The Right Number) are apropos of some of the discussion in the comments section of my post on the recent fiction I’ve been reading. I share the concerns about access, but think the solution has to start with lobbying the libraries to set aside some portion of their budgets for online acquisitions—even though those budgets are, in all too many instances, evaporating.
This excellent piece of scholarly sleuthing by Michael John Gorman tracks down the “elusive origins” of Bruno Latour’s influential idea of the immutable mobile, and offers a very lucid account of the critical debate now surrounding the term. Latour’s work on inscriptions (chapter 6 of Science in Action) should be required reading for anyone interested in the material history of textuality.
Reading Gorman’s essay (and having also read the principals: Latour, Elizabeth Eisenstein, and Adrian Johns) makes me wonder about the current state of empirical knowledge in the humanities. What counts as a fact? The nature of “knowledge” is, of course, highly contested territory in the humanities, and indeed, Latour’s own work stands as a major contribution to these very questions. But I’m interested in self-reflexively reshaping the debate along the lines of a graduate research methods course: when can a scholar be said to “know” something, and how is that knowledge constructed? The course would take a case study approach, investigating a range of controversies in the secondary literature: Latour and immutable mobiles, as documented by Gorman above; the recent exchange in Blake: An Illustrated Quarterly on Blake’s color-printing methods, overtly and unapologetically technical but also a fascinating instance of analytical thrust and parry that reveals much about the current state of Blake studies and scholarly method in general (Kari and I have discussed that point at length between ourselves); the Ulysses editing wars, another obvious candidate; the Alan Sokal/Social Text meltdown; plus maybe some of my own work on the shadowy lives of first-generation electronic objects.
Additional suggestions for case studies of scholarly controversies in the secondary literature of the humanities? Ideally I’d like around ten exemplars, spanning a range of different historical periods, national literatures, and critical methodologies.
I’ve been tearing through a stack of contemporary fiction these last few months. Here’s what I’ve read:
Next up: Jonathan Franzen, The Corrections (2001).
Incidentally, the above list represents something on the order of $125-$150 worth of books. Reading is an expensive little habit. I’m all for authors getting their due, but perhaps peer-to-peer swapping of contemporary fiction is what will finally break eBooks into the mainstream?